The Milestones of Science
How We Came to
Understand the Universe
James D. Stein
An imprint of Globe Pequot, the trade division of
The Rowman & Littlefield Publishing Group, Inc.
4501 Forbes Boulevard, Suite 200, Lanham, Maryland 20706
www.rowman.com
Contents

Scientific Timeline
Introduction
1 Astronomy
2 The Earth
3 Chemistry
4 Matter
5 Forces and Energy
6 Life
7 Genetics and DNA
8 The Human Body
9 Disease
10 Science in the Twenty-First Century
Bibliography
Acknowledgments
Index
Introduction
CONNECTIONS
Many years ago, when I was first appointed to the faculty at California State
University, Long Beach, I had an interview with the provost, a man of sub-
stantial erudition who felt that it was advisable to get to know the people
whom the university was hiring. We talked for an hour, and one of the
questions he asked was, “If you could take only one book to a desert island,
which one would it be?”
I had just finished reading James Burke’s Connections, and even though
it had the advantage of temporal proximity to the asking of the question, I
thought—and still think—that it would be my choice. For those who have
not read the book, you have a treat in store for you. Burke managed to in-
tegrate science, history, biography, and technology into ten stories, each of
which culminated in one of the seminal technological developments of the
twentieth century, such as the computer.
I have used Connections as the model for the structure of this book. In-
stead of the stories culminating in a particular development, as did Burke’s,
each story is organized around a major scientific theme, such as the story
of genetics and DNA. Burke, however, either knew or was able to research
a good deal of the historical and political background that surrounded the
events that he used to tell a story. My goal is somewhat different: to tell the
story of a major scientific theme through its milestones—the theories, obser-
vations, and experiments, and the people
who were primarily responsible.
One thing that Burke made clear is that his choices were idiosyncratic—
he mentioned that if he were again to write the story of the connections that
led to the development of the computer, he might tell it in a completely dif-
ferent way. I don’t feel that I possess Burke’s exquisite sensibility for making
those choices—but fortunately, I don’t have to. Yes, I may err in omitting a
great theory or experiment, or in including ones of lesser importance, but if
I make sure to include the theories of Newton and Einstein in telling the tale
of our investigation of the phenomenon of gravity, and include some of the
important observations and experiments that went into the construction and
verification of those theories, I won’t have gone too far wrong.
Finally, Burke told his stories in chronological order. Burke wove history
and technology together—sometimes history influenced technology, some-
times technology influenced history, but one could always see development
and progress. One of the features that made Connections such an enjoyable
read is that Burke had his choice of some of history’s greatest personalities to
insert into his stories. Most of the characters who appear in this book are—
not surprisingly—scientists, but many of the people who appear in this book
led lives every bit as interesting as the political figures and celebrities whose
lives are often better known.
on the foundation, and a faulty foundation can easily lead to the collapse of
the entire edifice.
Third, a critical step on the road to determining the way things really are
is to determine the way things really aren’t. The history of science includes
numerous examples of scientists—and often great scientists—who look at
the available evidence and piece them together into an erroneous theory.
The discovery that a theory is wrong can be the critical step in determining
how things really are. Sometimes a great observation or experiment decides
between two competing theories, but it can also simply sound the death knell
for an erroneous theory, clearing the way for a correct theory to rise from
the ashes.
ELEGANCE
I like how Carl Sagan described the Cosmos; he said that the Cosmos was all
that was, is, or ever will be. It is a measure of the greatness of Homo sapiens
that in a few hundred years, located on a small planet circling an undistin-
guished star in the outlying region of the Milky Way galaxy, we have learned
so much about our environment and ourselves. It is my hope that I tell this
tale well enough that, if we could leave only three books to future
generations, this book might be thought worthy of accompanying The Hand-
book of Chemistry and Physics and Machinery’s Handbook.
CHAPTER 1
Astronomy
website, where I found a directory of all solar and lunar eclipses visible from
New York from 1 CE to 3000 CE (CE stands for Common Era, which has
largely replaced AD in scientific texts). The fact that there is such a website
is a tribute to both technology and science. I was born in an era in which
my family had an encyclopedia as an immediate access to information, and
if we needed more detailed information, we went to the library. I am still
astounded by the volume and depth of information available through search
engines from the comfort of one’s home—and as will be discussed in a later
chapter, the internet is reshaping not only what we know, but how we ac-
quire that knowledge.
My father and I went out to the backyard a little after five o’clock and
spent two glorious hours watching one of Nature’s most spectacular displays.
I remember that when we went back in for dinner, I asked my father how
they knew when the eclipse would start and when it would end. My father
told me that scientists study such things, and that’s when I made a career
change.
Until that day, my preferred career was a professional baseball player—
despite the fact that I had evidenced nothing in the way of ability in this
area. But the Yankees would have to find someone other than me to replace
Joe DiMaggio in center field, as I was so enthralled by the ability to predict
the exact timing of something so impressive as a solar eclipse that I decided
to become a scientist.
theorems by formal arguments. Among his best-known proofs are that the
diameter of a circle divides it into two equal parts, and that the base angles
of an isosceles triangle are equal.
The prototype Greek intellectual, Thales was the first to blend astron-
omy and philosophy into the subject that is now called cosmology. He is
the first person known to have asked the question, “Of what is the Universe
made?,” and to answer it without invoking elephants on the backs of turtles,
or other mystical phenomena. Thales’s answer, that the Universe was an infi-
nite ocean in which the Earth floated as a flat disk, is obviously incorrect, but
it is a fact that he asked the question and answered it in nonmythical terms,
and that clearly marks him as a scientist.
Thales’s greatest achievement, however, is the first accurate prediction
of a solar eclipse. Nowadays when a solar eclipse is due, the news will be all
over the internet, and the chances are extremely good that you can get a live
video feed—especially if it’s a total eclipse. Thales merely predicted that a
solar eclipse would occur in the year 585 BCE. Ballpark estimates are a le-
gitimate part of science, even if in this case the ballpark was pretty large. We
can be certain of the date of May 28 because we are now able to determine
that the only solar eclipse that occurred that year in that portion of the world
happened on May 28.
It is known that the Babylonians were able to accurately predict lunar
eclipses two centuries prior to Thales, but lunar eclipses occur much more
frequently and the periodicity is easier to determine. Thales’s prediction
was historically as well as scientifically significant, because the Medes and the
Lydians were about to go to war. When the eclipse occurred, it was taken as
a sign that the gods would not look favorably upon the war. The two sides
signed a peace treaty and went home. The fact that the solar eclipse hap-
pened on May 28 makes the decision not to go to war the first historical
event that can be accurately dated.
Thales was also the world’s first all-around intellectual, combining his
scientific inclination with an interest in philosophy and, perhaps surprisingly
for a scientist, politics. Thales urged the Greek city-states to unite in order
to defend themselves against Lydia. The Greeks, who were attracted to the
number seven (the seven wonders of the ancient world), were later to con-
struct lists of the seven wise men of ancient Greece. Thales was invariably
placed first.
Thales was probably not the first person to be asked, “If you’re so
smart, why ain’t you rich?” but he is the first one recorded to devise a bril-
liant retort. It is said that by his study of the weather he knew that the next
olive crop would be a good one. He then purchased options on all the olive
presses, and when the olive crop indeed proved bountiful, was able to obtain
enough money by renting his presses to live comfortably for the remainder of
his life. Having made his point (and his fortune), he turned again to science,
philosophy, and politics.
Slightly less than five hundred years after the birth of Christ, Rome fell
to the invading barbarians, beginning a thousand-year period known as the
Dark Ages. One of the major factors in making the Dark Ages dark was an
almost complete lack of scientific progress. A “media blitz” during the Dark
Ages put forth the prevailing view that all the great questions had been an-
swered: the philosophical ones by the Church, and the secular ones by the
ancient authorities, such as Ptolemy and Aristotle. What questions arose that
could not be answered were viewed not as puzzles to be solved, but as myster-
ies into which it would be blasphemous to delve.
One of the central problems was the geometry of the Universe. Since
Christ had lived on Earth, it was obvious that the Earth was the center of
the Universe. In the geocentric theory, the Sun, Moon, and planets revolved
in circular orbits around the Earth, and the stars belonged to a great fixed
sphere.
One obvious difficulty with this theory was that, from time to time,
Mars, Jupiter, and Saturn would reverse their direction of motion in the
night sky. This obvious discrepancy had to be patched up, and so the geocen-
tric theory was modified by Ptolemy, who introduced the theory of epicycles,
which assumed that these planets described circular loops within their basic
circular orbits, somewhat akin to a person on the edge of a merry-go-round
walking around in a small circle near the edge. The beautiful basic geocentric
theory, which used simple circles to describe the motions of the planets, was
now somewhat untidy.
It was easy to understand why, if the Almighty had placed the Earth at
the center of the Universe, He had designed the Sun, Moon, and planets
to rotate around it in perfect circles. Why the Almighty felt it necessary to
use epicycles bothered some curious individuals, especially those who had
been exposed to the philosophical principle known as Occam’s razor, which
asserts that the simplest explanation is usually the correct one. In 1514,
Nicolaus Copernicus produced a small handwritten volume, entitled Little
Commentary, in which the author was not named—but Copernicus sent it
out to some of his friends. In it he stated seven axioms, three of which were
pivotal in the heliocentric theory. These were that the center of the Universe
was near the Sun, the annual cycle of the seasons was produced by the Earth
revolving around the Sun, and that the epicycles were an artifact of viewing
these motions from the Earth. In 1543, near the end of his life, he decided
to publish these conjectures.
This theory was greeted with a literal firestorm of opposition from the
Roman Catholic Church—its adherents sometimes being burned at the
stake. Nonetheless, it made some important converts, one of whom was
Tycho Brahe, a Danish nobleman who may be said to be the founder of
observational astronomy. Brahe devoted much of his life to the problem
of accurate astronomical measurements. He persuaded Frederick II, the king
of Denmark, to underwrite the construction of an astronomical observatory,
thus beginning a tradition of obtaining government grants for the study of
pure science.
One other convert to the Copernican theory was Johannes Kepler,
whose belief in the theory arose from a curious mixture of science, religion,
astrology, and numerology. In order to accurately determine the orbits of the
planets, Kepler spent more than twenty years refining Brahe’s measurements.
Kepler labored diligently to fit this data into a geometric scheme in which the
circular orbits were determined by inscribing spheres inside the five Platonic
solids (tetrahedron, cube, octahedron, dodecahedron, and icosahedron).
Try as he might, Kepler could not get the data to fit his hypothesis. He
then made one of the great scientific decisions of all time. Rather than try
to hammer data into a hypothesis, he abandoned his hypothesis to see if he
could find one that fit the data. The result was Kepler’s three laws of plan-
etary motion, the first of which was that the orbit of a planet is an ellipse with
the Sun at one of the foci. This result was one of the pivotal checks on the
correctness of Newton’s law of gravitation, from which all of Kepler’s laws
of planetary motion could be deduced.
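Kepler's third law makes the payoff concrete: the square of a planet's orbital period is proportional to the cube of its average distance from the Sun. A minimal sketch in Python, using standard modern values (periods in Earth years, distances in astronomical units), shows that the ratio is essentially the same for every planet:

```python
# Kepler's third law: T^2 / a^3 is (very nearly) the same constant for
# every planet. With T in Earth years and a in astronomical units (AU),
# that constant is 1 for bodies orbiting the Sun.
planets = {
    "Mercury": (0.241, 0.387),
    "Venus":   (0.615, 0.723),
    "Earth":   (1.000, 1.000),
    "Mars":    (1.881, 1.524),
    "Jupiter": (11.86, 5.203),
    "Saturn":  (29.46, 9.537),
}

for name, (T, a) in planets.items():
    print(f"{name:8s}  T^2/a^3 = {T**2 / a**3:.3f}")  # all ~1.000
```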
Copernicus, Brahe, and Kepler came from entirely different back-
grounds. Copernicus was actually a junior executive within the hierarchy of
the Church, holding a position that today might be referred to as a deputy
comptroller. His epic work might never have been published had he not
been strongly encouraged to do so by Georg Rheticus, a young professor of
mathematics and astronomy who lived with Copernicus for two years, and
urged him to share his thoughts with the world.
Brahe was a playboy who literally partied himself to death. At a party
at which he had imbibed a considerable quantity of wine, he followed the
practice of the times and did not leave the table until his host did. As a result
of the failure to relieve the pressure on his bladder, he became unable to
urinate, and the resulting buildup of toxins within his system caused his
death some eleven days later. Let it not be said that there is nothing practical
to be learned from a history of science.
Kepler was a professor of mathematics who accepted court appointments
in mathematics and astronomy, which would enable him to collect addi-
tional orbital data. One of Kepler’s notable achievements was successfully
defending his mother on a charge of witchcraft—the charge was dismissed
on a technicality because the prosecution failed to follow the appropriate
legal procedures with regard to torture. Sadly, echoes of this barbaric practice
can still be found in a number of legal systems in the Western world. Kepler
could also lay claim to the title of father of science fiction, as he wrote a book
called The Dream in which he imagined voyages to worlds other than Earth.
investigation. This point of view is responsible for many of the most impor-
tant advances of Western civilization.
Like many brilliant and creative individuals, Newton was beset with psy-
chological problems. He was undoubtedly paranoid—fearful of criticism, he
refused to publish many of his discoveries. His articles on calculus, a supreme
mathematical tool, only came to light when the German philosopher Gott-
fried Leibniz, who independently invented calculus more than ten years after
Newton had done so, announced his discoveries. The seeds of the Principia,
which contained his ideas on universal gravitation, remained in his desk for
twenty years. Only the impassioned pleas of his good friend, astronomer
Edmond Halley (of Halley’s Comet fame), persuaded Newton to publish it,
and Halley himself had to finance the initial printing. History repeated itself;
recall that Copernicus might not have published his heliocentric theory had
not a good friend urged him to do so.
Back in 1999, Time magazine nominated Albert Einstein as its Man of
the Century. I think they missed an opportunity to nominate a Man of the
Millennium—and if they had, I would have started a write-in campaign for
Newton.
I’ve always loved what the poet Wordsworth wrote, on seeing a bust of
Newton: “a mind for ever / Voyaging through strange seas of Thought, alone.”
We can be pretty sure that even before recorded history, the objects that
appeared in the sky fascinated those who observed them. The Greeks were
certainly among the first to make charts of the location of the stars, although
there is evidence of other cultures also making such records. In doing so,
the Greeks observed that certain luminous objects, which they called planets
(from the Greek word for wanderer), moved rapidly among the stars. Five
planets—Mercury, Venus, Mars, Jupiter, and Saturn—were visible to the
naked eye.
The invention of the telescope accelerated the growth of observational
astronomy. Two of the finest telescope makers, and consequently two of the
finest astronomers, were the Englishman William Herschel and his sister
Caroline. William was a superb mirror grinder, and while he was at work, Caro-
line (who would become the first great woman astronomer) read aloud to
him and fed him so that he could continue working. In 1781, he came across
an object in the sky that he had never seen. Because it formed a visible disc
in the telescope, rather than the mere point that a star would make, Herschel
at first concluded that he had discovered a comet. Subsequent observations
showed that the disc had a sharp edge like a planet, rather than the fuzzy edge
characteristic of comets. When he had obtained enough data to calculate
the orbit, he found that it was nearly circular, like a planet, rather than the
elongated ellipse of a comet. The conclusion was inescapable: he had indeed
discovered a planet, which was later named Uranus.
Its orbit was carefully determined according to Newton’s law of
gravitation. After several decades, though, discrepancies began to
appear between where the planet was supposed to be, and where it actually
was. These discrepancies were noted by the Astronomer Royal, Sir George
Airy, who believed that they were due to imperfections in Newton’s law. As
a result, when he received a paper in 1845 by a Cambridge undergraduate
named John Couch Adams concerning the orbit of Uranus, he paid it no
attention.
History was to show that this was a mammoth error on the part of Airy.
Adams, who was compelled to tutor at Cambridge in order to earn money
for his tuition, had spent his vacation working on a radical theory: the orbit
of Uranus was deviating from the calculated path because of the influence
of an undiscovered planet. Adams had worked out the mass and location
that this undiscovered planet must have had in order to cause the observed
changes.
Adams was not alone. The Frenchman Urbain Le Verrier, working on
the same hypothesis, also deduced the mass and location of the planet. Le
Verrier, however, had luck on his side. Unlike Adams, he was an established
astronomer. When Johann Galle of the Berlin Observatory sent him some
preprints, Le Verrier wrote back to thank him and suggest that he look at
a particular region of the sky. Le Verrier’s luck continued, as Galle had just
been sent new and improved maps of the area in which Le Verrier was inter-
ested. As a result, Galle became the first individual to see the planet Neptune.
The discovery of Neptune was a theoretical tour de force, and estab-
lished beyond any possible doubt the validity of Newton’s law of gravitation
(although not much doubt existed at the time). Both Le Verrier and Adams
went on to have distinguished careers as astronomers, but the discovery of
Neptune was the high point for each.
The discovery of Neptune was not Le Verrier’s first attempt at finding
an unknown planet. Before tackling the problem of the discrepancies in the
orbit of Uranus, he had noticed subtle anomalies in the orbit of Mercury,
and attempted to account for them the same way, by postulating a planet
he referred to as Vulcan (probably because it was even closer to the sun than
Mercury, and would therefore have been hotter than Vulcan’s mythical
forge). No such planet was ever found, and the problems with anomalies
in Mercury’s orbit nagged astronomers until early in the twentieth century.
The fault was not with Le Verrier’s calculation, but with Newton’s law of
gravitation—as a young German-born physicist was to show in the twentieth
century.
In 1905, Albert Einstein had probably the most astounding year any scientist
has ever experienced. Upon finishing his university studies, he found himself unable
to find an appropriate job in the academic world, so he took a job with the
Swiss Patent Office in Berne. During the day, he was a civil servant, scru-
tinizing patent applications for such ordinary devices as an improved gun,
and a new use for alternating current. During the evening, he would some-
times stop at the nearby Cafe Bollwerk for a cup of coffee and conversation.
Somehow, amid the responsibilities of a nine-to-five job, he managed to
write three papers. Two of these papers led directly to Nobel Prizes; the third
paper was merely brilliant.
Einstein is best known for the theory of relativity, which culminated in a
brilliant restructuring of how gravity operates. When Einstein first started thinking about the
ideas that were to lead to the theory of relativity, he performed a famous
thought experiment. Suppose that, at the exact moment that a clock struck
noon, you were able to ride away from the clock on a beam of light. Because
light itself carries the information from the clock about what time the clock
shows, you would forever see the clock as showing noon. Time stands still if
you happen to be traveling at the speed of light. The corollary is that time
itself is not absolute, but depends on the observer.
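The thought experiment has a precise counterpart in special relativity: a clock moving at speed v runs slow by the factor sqrt(1 − v²/c²), which shrinks toward zero as v approaches the speed of light. A minimal sketch:

```python
import math

C = 299_792_458.0  # speed of light, meters per second

def ticking_rate(v: float) -> float:
    """Fraction of a second a moving clock ticks per second of a
    stationary observer's time: sqrt(1 - v^2/c^2)."""
    return math.sqrt(1.0 - (v / C) ** 2)

for fraction in (0.1, 0.5, 0.9, 0.99, 0.9999):
    print(f"v = {fraction:6.4f}c -> moving clock runs at "
          f"{ticking_rate(fraction * C):.4f} of normal speed")
# As v approaches c the rate approaches 0: the clock face seen from the
# light beam never advances past noon.
```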
The relativity of time appeared in Einstein’s 1905 paper, “On the Elec-
trodynamics of Moving Bodies,” which lay at the core of the theory of special
relativity. But Einstein was to go further. A decade of work culminated in
the theory of general relativity, Einstein’s reformulation of the Newtonian
Universe.
In Newton’s Universe, space and time are independent and absolute
quantities, which can be measured by any observer. At any given moment,
the Universe is a snapshot of a spatial stage, and the masses are the actors
moving about this stage in relation to one another. The Universe according
to Newton is an unfolding motion picture.
In Einstein’s Universe, space and time are interlinked to form a four-
dimensional geometrical structure called spacetime. The shape of spacetime
determines how objects move; conversely, the objects themselves determine
the shape of spacetime. The Universe according to Einstein is a geometrical
entity in which space, time, and objects are all indissolubly related to one
another. The differences between the Universes of Newton and Einstein are
not apparent in everyday life, manifesting themselves mostly when objects
travel at very high speed or are exceptionally massive. The first demonstra-
tion of the correctness of Einstein’s reformulation was made in 1919, when
Sir Arthur Eddington led an expedition to Africa to view a total eclipse of the
Sun. Only during a total eclipse would it be possible to measure how much
the Sun’s gravity bent the light of stars passing near it, a deflection for
which Einstein’s theory predicted twice the value that Newton’s did.
(Einstein’s theory had already accounted for the long-standing discrepancy
in the orbit of Mercury that Le Verrier’s hypothetical Vulcan had failed to
explain.) The eclipse measurements matched Einstein’s prediction, and since
then his theory has passed every test with flying colors.
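The deflection in question is small but calculable: general relativity predicts that a ray of starlight grazing the Sun's edge is bent by an angle of 4GM/(c²R), about 1.75 seconds of arc. A quick check of the arithmetic, using standard solar constants:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg
R_sun = 6.957e8      # solar radius, m

# General relativity: deflection of a light ray grazing the Sun's limb.
deflection_rad = 4 * G * M_sun / (c**2 * R_sun)
arcsec = math.degrees(deflection_rad) * 3600

print(f"Predicted deflection: {arcsec:.2f} arcseconds")  # ~1.75
# A purely Newtonian calculation gives exactly half this value, which is
# what made the 1919 eclipse measurement a decisive test.
```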
Einstein was unquestionably the most famous scientist who ever lived,
and arguably the most brilliant. Unlike the moody and paranoid Newton
and the gloomy and pessimistic Darwin, he was a warm and charming
humanitarian. Unlike many celebrities, he was aware of both his strengths
and limitations. As one of the leading spokesmen of the Zionist cause, and
certainly the most famous, he was the first to be offered the presidency of
Israel when that nation came into being. He turned it down, saying that he
had no great understanding of human problems.
Einstein was well known for his proclivity to phrase statements about
the Universe by ascribing various points of view to God. When he was asked
in 1919 how he would have felt if the measurements made during the total
eclipse had not confirmed his predictions, he replied that he would have
criticized God for a bad job in designing the Universe. Einstein’s view of
the probabilistically based subject of quantum mechanics is certainly best
summarized in his well-known quote that “God does not play dice with the
Universe.” Finally, his good friend Niels Bohr, with whom he used to take
long walks and discuss physics, grew rather tired of these pronouncements,
and told him, “Stop telling God what to do.” It took someone of the genius
of Bohr to achieve a put-down of Einstein. Nonetheless, the feeling that the
scientific community has toward Einstein may have been best expressed by
Jacob Bronowski in The Ascent of Man: “Einstein was a man who could ask
immensely simple questions. And what his life showed, and his work, is that
when the answers are simple too, then you hear God thinking.”
Stars
THE PERIOD-LUMINOSITY CURVE
OF THE CEPHEID VARIABLES
Among the questions that have undoubtedly been asked at all times and in
all cultures is, “How big is the Universe?”
By the beginning of the nineteenth century, the construction of tele-
scopes had improved to the point where it was actually possible to detect
parallax, which is a type of relative motion of nearby objects against a fixed
background. You can experience parallax for yourself if you hold a finger in
front of your nose, and then look at the finger with one eye closed, then the
other—the background shifts relative to your finger. Using this technique,
Friedrich Bessel concluded in 1838 that the star 61 Cygni was
more than six light-years from Earth. This discovery enlarged the Universe
substantially, as even so great an authority as Newton had estimated that the
stars were no more than two light-years from Earth.
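The arithmetic of parallax is a single line: a star showing a parallax of p seconds of arc lies 1/p parsecs away, where a parsec is about 3.26 light-years. A minimal sketch, using Bessel's measured value for 61 Cygni:

```python
LY_PER_PARSEC = 3.262  # light-years in one parsec

def distance_light_years(parallax_arcsec: float) -> float:
    """Distance to a star from its parallax angle.

    A star whose apparent position shifts by p arcseconds against the
    background stars (as Earth moves across its orbit) is 1/p parsecs away.
    """
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# Bessel's 1838 measurement for 61 Cygni was about 0.314 arcseconds:
print(f"61 Cygni: {distance_light_years(0.314):.1f} light-years")  # ~10.4
```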
Throughout the remainder of the nineteenth century, the measurement
of parallax was the leading-edge technique for determining distances to the
stars. Telescopes became even more powerful, and smaller parallaxes could
be detected. This pushed the threshold of the furthest measurable stars to
several hundred light-years. Because the distance to most stars could not
be measured, astronomers naturally suspected that the Universe was much
larger, but how much larger was still anybody’s guess.
In the first two decades of the twentieth century, several different events
combined to make possible more accurate measurements of the size of the
Universe. The first was the definition by Ejnar Hertzsprung, a Danish as-
tronomer, of the concept of absolute magnitude of a star. Previously, the
magnitude of a star (now called the apparent magnitude) was a measure of
how bright the star appeared. This is a function of the star’s intrinsic bright-
ness and the distance of the star from Earth. Hertzsprung suggested that
one could compute how bright a star would appear if it were at a standard
distance. As a result, there was a simple equation connecting three numbers:
absolute magnitude (Hertzsprung’s number), apparent magnitude, and dis-
tance from Earth.
The distance of a star could be computed if both the apparent magni-
tude and the absolute magnitude of a star were known. Astronomers had
been measuring apparent magnitude for years, but the difficulty lay in com-
puting the absolute magnitude of a star. When astronomers looked at a dim
star, there was no way to tell if they were looking at a dim nearby star, or an
extremely bright one dimmed by distance.
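That simple equation is the distance modulus, m − M = 5 log10(d) − 5, where m is the apparent magnitude, M the absolute magnitude, and d the distance in parsecs (the standard distance being 10 parsecs). Given any two of the three quantities, the third follows, as in this minimal sketch:

```python
def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Solve the distance modulus m - M = 5*log10(d) - 5 for d (parsecs)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# A star that *appears* at magnitude 10 but *would* shine at magnitude 0
# from the standard distance of 10 parsecs must be 1,000 parsecs away:
print(distance_parsecs(10.0, 0.0))  # 1000.0
```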
Henrietta Swan Leavitt was an astronomer working at the Harvard College
Observatory. She was particularly interested in a class of stars called Cepheid variables.
These stars, originally discovered in the constellation of Cepheus, brightened
and dimmed in an extremely regular fashion. Leavitt’s great discovery was
that there was a mathematical relationship between the absolute magnitude
of a Cepheid variable and the length of the period of that variable—how
long it took to go from brightest to dimmest and back to brightest again.
These periods were obviously easy to measure; one simply timed them. The
distances of several Cepheid variables had actually been computed by then,
and so it was possible to use these measurements to calibrate the Cepheid
yardstick. Several years later Cepheid variables were discovered in collections
of stars that would later be known as galaxies, and for the first time it was
realized that the Universe was at least millions of light-years in diameter.
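Putting the pieces together: the period, which anyone can time, gives the absolute magnitude through Leavitt's relation, and the distance modulus then gives the distance. The sketch below uses one later calibration of the period-luminosity law; the particular coefficients are illustrative modern values, not Leavitt's original numbers:

```python
import math

def cepheid_absolute_magnitude(period_days: float) -> float:
    """One modern calibration of Leavitt's period-luminosity law
    (illustrative coefficients; the exact values are refined over time)."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_light_years(apparent_mag: float, period_days: float) -> float:
    M = cepheid_absolute_magnitude(period_days)
    parsecs = 10 ** ((apparent_mag - M + 5) / 5)   # distance modulus
    return parsecs * 3.262

# A Cepheid pulsing with a 10-day period but appearing at magnitude 20
# lies millions of light-years away -- galaxy territory:
print(f"{distance_light_years(20.0, 10.0):,.0f} light-years")  # ~2 million
```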
The Cepheid variable yardstick is still the only one whose accuracy is
acknowledged by the astronomical community as a whole. One cannot use
this technique for distant galaxies, though, as it is impossible to make out
individual Cepheid variables in such galaxies. One of the latest proposals is
to update Leavitt’s period-luminosity law for Cepheid variables by trying
to find a different observable type of star for which there is a correlation
between brightness and distance. The current candidate is the Type Ia super-
nova, which is visible over huge distances. Astronomers are currently trying
to work out a law for Type Ia supernovas analogous to the period-luminosity
law for Cepheid variables. If such a law exists, it would bring a realistic de-
termination of the size of the Universe within reach.
Leavitt’s life serves as a good indicator of the second-class status (if that)
held by women during her lifetime. She eventually graduated from what is
currently known as Radcliffe College with a certificate stating that, had she
been a man, she would have received a Bachelor of Arts degree. Harlow
Shapley, who was one of the leading astronomers of the times, wrote to
her superior at Harvard Observatory, “Her discovery of the relation of
period to brightness is destined to be one of the most significant results of
stellar astronomy, I believe.”
Leavitt died in 1921, but the news of her passing did not reach a number of
prestigious scientists, including Gösta Mittag-Leffler, who wrote a letter to her
in 1925 expressing his intention to nominate her for the Nobel Prize in Physics.
It’s hard to believe that the death of someone whose work was held in such
esteem would have gone unnoticed—and, in keeping with her graduation
certificate, that death probably would not have gone unnoticed had Leavitt
been a man.
How does the Sun supply the light and heat necessary for life on Earth? With
the discovery of the laws of thermodynamics, it was quickly seen that chemi-
cal burning, such as takes place in coal, was far too inefficient. In 1854, the
German physicist Hermann von Helmholtz considered a subtler mechanism
for producing heat: gravitation. The kinetic energy of particles falling toward
the center of the Sun could be converted to radiation in accordance with the
laws of thermodynamics. This would power the Sun for 25 million years.
Unfortunately, geologists had supplied convincing evidence that the
Earth was at least hundreds of millions of years old, and so gravitational
conversion of mechanical energy to heat radiation was clearly not the answer.
The problem would remain unsolved until the start of the twentieth century,
when additional sources of energy were discovered within the heart of the
atom itself. Einstein’s famous formula E = mc² demonstrated that the Sun
clearly had more than enough mass to generate energy for billions of years,
provided that a mechanism to convert mass to energy with sufficient efficiency
could be found.
The individual who figured out the conversion technique was Hans
Bethe, a German physicist who escaped from Nazi Germany and ended
up at Cornell University. Bethe was familiar with nuclear processes, and he
had also read Arthur Eddington’s conclusions that the temperatures in the
interiors of stars had to be on the order of tens of millions of degrees.
Using these results, Bethe was able to postulate a process whose result was the
squeezing together of hydrogen nuclei to form helium. The helium resulting
from this “fusion” process weighed less than the hydrogen that formed it,
and Bethe was able to show that the missing mass was converted to energy in
accordance with Einstein’s formula. The Sun was a giant furnace, converting
4,200,000 tons of mass to energy every second. Because of the huge size of
the Sun, the Sun could continue to radiate heat and light for billions of years.
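The figure of 4,200,000 tons per second follows directly from E = mc²: divide the Sun's power output by the square of the speed of light. A quick check of the arithmetic:

```python
L_SUN = 3.8e26      # the Sun's luminosity in watts (joules per second)
C = 3.0e8           # speed of light, meters per second

# E = m c^2, so the mass converted each second is m = E / c^2.
kg_per_second = L_SUN / C**2
metric_tons = kg_per_second / 1000.0

print(f"{metric_tons:,.0f} tons of mass converted per second")  # ~4.2 million
```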
This process, which goes on in all stars, results in a delicate balance be-
tween the star’s radiation pressure, which makes the star expand, and its in-
ternal gravitation, which makes the star contract. At approximately the same
time, the Indian astronomer Subrahmanyan Chandrasekhar determined
that the outcome of this battle depended on the initial size of the star. The
very small stars gradually exhaust their fuel and cool to a dull red. In larger
stars, with masses up to about 1.4 times the Sun’s mass, the rate
of burning is faster and gravitational contraction is stronger. Such stars end
their lives as white dwarfs—extremely hot, but very tiny.
In the ensuing years, a more detailed study of the fusion mechanism in
even larger stars has been developed. After a star has burned its hydrogen to
helium, it contracts, and this contraction heats up the central core further.
This added heat enables helium to be fused to carbon. When the helium
has been consumed, the star contracts further, becoming hot enough to fuse
carbon to oxygen. And so it continues, with added contraction enabling oxy-
gen to be fused to neon, then silicon, sulfur and, finally, iron. These events
occur at an ever-faster rate. When the core of the star becomes iron, it can
no longer fuse. The long battle between radiation pressure and gravitational
contraction is won by the latter, and the star collapses and rebounds in one
of the most dramatic events in the Universe—a supernova explosion.
The lives of the stars are long, and man has only been observing the
Universe intensively for four hundred years. However, there are so many
stars in the Universe that, given the observing power of today’s telescopes,
sooner or later a really interesting event will be observed. “Sooner” came in
1987 with the discovery of Supernova 1987A, the first nearby supernova to
be observed in more than three hundred years. All the important predictions
of the theory were upheld. The theory of the lives of stars is of profound
importance to us, for it postulates that all the heavier elements, from the
calcium in our bones to the iron in our blood, are formed in supernovas. We
are, in a very real sense, intimately connected to the Universe: our very lives
are possible because of the violent deaths of stars.
As anyone in the advertising business can attest, good packaging can make
any product more attractive, even a product like an arcane physics concept.
The term “black hole” was coined by John Wheeler to describe a situation
that at first blush seems unbelievable. After an extremely heavy star becomes
a supernova, it leaves behind a remnant that has so much mass packed in
so small an area that there is no effective barrier to the force of gravitational
collapse. This possibility, first conceived by the English astronomer John
Michell in 1783, was first described in full detail by J. Robert Oppenheimer
before he was called away from theoretical physics to head the Manhattan
Project, the top-secret World War II project to develop the first atomic
bomb. Like the Energizer Bunny, the supernova remnant just keeps going
and going and going—until the gravitational force is so strong that not even
light can escape. Then it is gone—from the Universe.
Initially the idea of a black hole attracted a good deal of interest in the
astrophysics community, but when no one could supply a candidate, that
interest dwindled. Then, in 1963, the astronomer Maarten Schmidt made
a discovery that was to incite new interest in the possibility of black holes.
Schmidt was studying an astronomical object known as 3C 273 with the
huge Mount Palomar telescope. 3C 273 looked like a star, but had a spec-
trum unlike that of any known star. In a burst of insight, Schmidt realized
that it would be possible to attach meaning to the spectral lines of 3C 273
if one assumed that 3C 273 possessed an extremely high red shift. The only
way objects possess a high red shift is if they are receding at an enormous
velocity. In view of the Hubble relationship between recession velocity and
distance, this meant that 3C 273 was an astounding 2 billion light-years
away. The only mechanism that astronomers could imagine that would make
an object that far away appear that bright was a black hole with the mass of
billions of suns, continually gobbling matter and converting it into radiation.
Other objects similar to 3C 273 were discovered and given the name “quasi-
stellar radio sources,” from which the term “quasar” is derived.
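The chain of reasoning from red shift to distance takes only a few lines of arithmetic. The sketch below treats the recession velocity as the red shift times the speed of light and assumes a Hubble constant of 70 km/s per megaparsec, ignoring relativistic and cosmological refinements:

```python
C_KM_S = 299_792.5        # speed of light, km/s
H0 = 70.0                 # Hubble constant, km/s per megaparsec (assumed)
LY_PER_MPC = 3.262e6      # light-years per megaparsec

z = 0.158                 # measured red shift of 3C 273

velocity = z * C_KM_S            # recession velocity (non-relativistic approx.)
distance_mpc = velocity / H0     # Hubble's law: v = H0 * d
distance_ly = distance_mpc * LY_PER_MPC

print(f"3C 273: ~{distance_ly / 1e9:.1f} billion light-years")  # ~2.2
```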
Four years later, British astronomer Anthony Hewish assigned his
graduate student Jocelyn Bell to study quasars. She detected an extremely
unusual radio signal from one of them, a pulse with metronomic regularity.
Initially, Hewish could not conceive of a natural mechanism to account for
the regularity of the signals, and was considering the possibility that he and
Bell had stumbled upon a signal beacon from an extraterrestrial life-form.
Not entirely in jest, they referred to the object as LGM-1, where LGM stood
for “little green men.”
After several more months, four more such objects had been discovered.
Hewish was able to abandon the “little green men” theory when, on further
reflection, he realized that the signal could be a radio pulse from a rapidly
spinning neutron star. Neutron stars are the residue of a supernova explosion
whose parent star was not quite massive enough to collapse into
a black hole. The collapsed star is so dense that the electrons and protons of
its atoms are forced together, canceling each other’s electrical charge and leaving only
electrically neutral neutrons. The spinning star acts like a cosmic lighthouse,
with the radio beam regularly flashing past Earth, and this was the signal that
Bell had detected. The term “pulsar” is used to describe the spinning neutron
star which emits radio pulses.
It should be observed that while there is much inferential evidence for
the existence of black holes, the riddle of the quasars has not yet been un-
raveled to the satisfaction of all astronomers. In 1974, Russell Hulse and
Joseph Taylor discovered a pulsar in orbit around another neutron star.
Einstein’s theory of relativity predicted that such a system should radiate
gravitational waves, and the energy loss should cause the orbit to shrink
slowly. Careful timing of the pulsar’s signals confirmed this, killing two
birds at once: it demonstrated the existence of both neutron stars and
gravitational waves.
Oppenheimer’s brilliance as a physicist was overshadowed—if that is the
correct word—by his experience in heading the Manhattan Project, which
some believe would not have succeeded with any other physicist in the posi-
tion Oppenheimer occupied. Oppenheimer was not only brilliant, but by
all accounts immensely charismatic—and his involvement in the Manhattan
Project persuaded others to sign on. After the conclusion of the war, his in-
fluence faded as a result of concerns about his association with members of
the Communist Party.
The Universe
THE STRUCTURE AND DIMENSIONS
OF THE MILKY WAY GALAXY
Not until the twentieth century did we realize how large the Universe really
is. We have also made the fascinating discovery that there is a simple correla-
tion between the size of the Universe and the age of the Universe.
Although the telescope was first invented and used for astronomical
purposes in the seventeenth century, observational astronomy only became
a popular pursuit in the eighteenth century with the invention of improved
lenses. One of those who became interested in this field was Charles Messier,
a Frenchman who was the first person to spot Halley’s comet when it re-
turned, as Halley had predicted, in 1758. This inspired him to spend his life
searching for comets.
Shapley’s work also continued the process, which had begun with Co-
pernicus, of demoting the Earth from the central position in the Universe.
Not only was the Earth not the center of the solar system, the Sun was not
even in the center of the Milky Way galaxy. The Milky Way galaxy has been
shown to have a spiral structure and the Sun is in one of the arms, about
two-thirds of the way from the center to the edge.
And that’s a good thing. According to current theory, the galactic habit-
able zone—that portion of a galaxy where intelligent life is most likely to
form—is some distance from the center of the galaxy, which contains the
greatest density of supernovae and other energetic cosmic events, the radia-
tion from which is capable of sterilizing planets some distance away. It may
be more exciting to be where the action is in the center of a galaxy, but
supernovae are—as they said of war in the 1960s—harmful to children and
other living things.
Until the twentieth century, any discussion of the eventual fate of the Uni-
verse was conducted by philosophers and theologians, as the question could
not even be accurately framed in a scientific setting. However, when Edwin
Hubble discovered that the galaxies were all receding from one another, this
opened up the question to scientific debate.
If the galaxies are all flying away from one another, there are only three
possibilities. The first is that the expansion remains unchecked, and that
sooner or later each galaxy is alone in the Cosmos, unable to receive signals
from any other galaxy. The second possibility is that there is enough mass in
the Universe to reverse the expansion, and that the galaxies will all eventually
collide with one another. The Universe, born in a big bang, will end in a big
crunch. Third, there might be just enough matter to slow down the expan-
sion to zero, but not enough to cause a big crunch. Scientists have calculated
that the amount of matter needed to cause this last scenario is about three
atoms of hydrogen for every cubic meter of space. This amount of matter is
called the critical density.
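The critical density follows from the Hubble constant through the formula rho = 3H²/8πG. A minimal sketch, assuming a Hubble constant of 70 km/s per megaparsec; the answer scales with the square of that assumption, and the figure of roughly three atoms corresponds to the lower values of the Hubble constant once in common use:

```python
import math

G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
M_HYDROGEN = 1.67e-27         # mass of a hydrogen atom, kg
KM_PER_MPC = 3.086e19         # kilometers in one megaparsec

H0 = 70.0 / KM_PER_MPC        # Hubble constant converted to 1/s

rho_critical = 3 * H0**2 / (8 * math.pi * G)   # kg per cubic meter
atoms_per_m3 = rho_critical / M_HYDROGEN

print(f"critical density ~ {rho_critical:.2e} kg/m^3 "
      f"(~{atoms_per_m3:.1f} hydrogen atoms per cubic meter)")  # ~5.5
```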
If the only matter in the Universe is what we see through our telescopes,
then the density of the Universe is only about 2 percent of the critical den-
sity, and the Universe would expand forever. However, there is a lot more
matter out there that we cannot see through our telescopes.
The existence of this unseen matter, generally called “dark matter,” was
discovered by Vera Rubin, an astronomer who had earned her doctorate
working for George Gamow, one of the authors of the big bang theory.
Rubin decided to measure the speed at which various galaxies rotated. She
observed that galaxies rotated much more rapidly than they would have if
the only mass in these galaxies were the luminous mass we can see through
the telescopes. The observed rotation of these galaxies could only be explained
if large quantities of dark matter supplied the extra gravity needed to hold
the rapidly rotating stars in their orbits.
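The inference rests on elementary Newtonian mechanics: holding a star in a circular orbit of radius r at speed v requires a mass of v²r/G inside the orbit. Since measured rotation speeds stay high far beyond the visible edge of a galaxy, the implied mass keeps growing where the light does not. A minimal sketch with representative numbers:

```python
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg
M_PER_KPC = 3.086e19     # meters per kiloparsec

def enclosed_mass_suns(v_km_s: float, r_kpc: float) -> float:
    """Mass (in solar masses) needed inside radius r to keep a star
    on a circular orbit at speed v: M = v^2 * r / G."""
    v = v_km_s * 1000.0
    r = r_kpc * M_PER_KPC
    return v**2 * r / G / M_SUN

# A flat rotation curve: ~220 km/s at both 10 and 50 kiloparsecs.
# The required mass grows fivefold, though little extra light is seen.
print(f"{enclosed_mass_suns(220, 10):.2e} suns inside 10 kpc")
print(f"{enclosed_mass_suns(220, 50):.2e} suns inside 50 kpc")
```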
Was there enough dark matter in the galaxies, and outside of them, to
cause the Universe to contract in a big crunch? The answer to that question
is still to be determined, and can only be answered by observation. However,
a recent theoretical development suggests that the actual density of the Uni-
verse is the critical density.
Currently, the measured density of the Universe is about 10 percent of
the critical density. At the time of the big bang, the actual density of the
Universe had to be either the exact critical density, above it, or below it. Had
the actual density been just a tiny bit above or below it, the 15 billion years
or so of galactic expansion would have caused the current measured density
to deviate tremendously from the critical density. Scientists were faced with a
huge credibility problem: why was the initial density of the Universe exactly
(to about sixty decimal places!) the critical density? This is known as the
“flatness problem.”
Just as the question of the origin of life on our own planet is one of the most
important questions that science has yet to answer, so is the existence of life
elsewhere in the Universe. Despite the legions of reports of alien abductions
and visits by flying saucers, there is not a single confirmed shred of evidence
pointing to the existence of life beyond the limits of the Earth’s atmosphere.
Our Moon is totally dead. Life would have to be able to evolve and survive
at a temperature of nearly 500°C in an atmosphere of carbon dioxide and
sulfuric acid on Venus, and the Viking landings on Mars gave hints of
intriguing chemistry but no sign of biology.
Scientists feel reasonably sure that, in order for life to exist, it must
evolve on a planet. However, for decades after the detection of the planet
Pluto by Clyde Tombaugh in 1930, no new planet was observed in our solar
system—and Pluto has sadly since been demoted from planetary status. The detection of Pluto was the
culmination of a search lasting many years, and all attempts to find another
planet in our own solar system have failed. To find a planet circling another
star, and consequently thousands of times further away from us than Pluto,
was until quite recently an impossible task.
Not only is a planet physically so small that it would be virtually im-
possible to see at so great a distance, it would also be hidden by the glare of
the star it is orbiting. Despite the improvements in technology, for decades
scientists felt that the optical detection of a planet would be virtually impos-
sible. The only possibility, many felt, would be the reception of a signal
from another life-form. Thus was born SETI, the Search for Extraterrestrial
Intelligence.
However, there was another possible way to detect the existence of a
planet. Although most people think that the planets in the solar system orbit
the Sun, in reality the planets orbit the center of mass of the solar system,
which lies close to the Sun, sometimes within it and sometimes just outside
its surface. The Sun itself also orbits this center of mass, and this produces
detectable wobbles in the Sun’s motion. Perhaps these wobbles could be
detected in other stars.
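The size of the effect is set by the mass ratio: the star circles the common center of mass at a distance equal to the planet's orbital distance times the planet-to-star mass ratio. For the Sun and Jupiter alone, the sketch below (using standard values) puts that point just outside the solar surface:

```python
AU_KM = 1.496e8          # kilometers in one astronomical unit
R_SUN_KM = 6.96e5        # solar radius in kilometers

M_JUPITER_OVER_M_SUN = 1.0 / 1047.0   # Jupiter's mass / the Sun's mass
A_JUPITER_AU = 5.203                  # Jupiter's orbital distance

# Distance from the Sun's center to the Sun-Jupiter center of mass:
wobble_km = A_JUPITER_AU * AU_KM * M_JUPITER_OVER_M_SUN

print(f"Sun's wobble radius: {wobble_km:,.0f} km "
      f"({wobble_km / R_SUN_KM:.2f} solar radii)")  # ~1.07 solar radii
```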
The generally accepted theory of the evolution of the solar system is that
the planets are the result of the gravitational collapse of a cloud of material
circling the Sun. In 1984, astronomers Bradford Smith and Richard Terrile
obtained photographs of a cloud of material circling the star Beta Pictoris.
Even though no planets were found, this was felt to be an extremely hopeful
sign, as it indicated that the processes that produced the solar system could
occur elsewhere.
Meanwhile, other scientists were still trying to detect wobbles in other
stars by using an extremely sensitive technique known as interferometry. In
1992, astronomer Alexander Wolszczan detected the first planets outside
the solar system, not through a stellar wobble but through tiny variations in
the timing of a pulsar’s radio pulses. Scientists initially had a hard time crediting Wolszczan’s
discovery, because the presumed planet was circling a pulsar. According to
conventional theory, this planet should not have existed. A pulsar is created
when a massive star explodes as a supernova, leaving a rapidly rotating neu-
tron star. Why didn’t the supernova explosion destroy the planet? As of now,
no one knows—maybe the planet was captured from another nearby star, or
maybe our theories of how pulsars are created are in error.
The Wolszczan planet could not possibly serve as a source for extrater-
restrial life. For that, it is felt that a star much like the Sun would be the best
bet. However, in the quarter-century since Wolszczan’s discovery, the exo-
planet business has boomed, thanks both to improved wobble detection and
the addition of other weapons, such as transit detection, to the arsenal. There
are now more than four thousand known exoplanets, including several that
orbit sunlike stars in a zone that is habitable for life as we know it. Sometime
in the next few years, we will actually have technology that will enable us to
see the surface of some of these planets, and maybe detect signatures of life.
CHAPTER 2
The Earth
We have been as curious about the Earth, which lies under our feet, as we
have been about the planets and stars that lie above our heads and have been
out of reach for most of human history. Surprisingly, it has been almost as
difficult to piece together an accurate picture of the Earth as it has been to
piece together an accurate picture of the heavens.
If one looks at a sixteenth-century map of the surface of the Earth, it
gets a lot right—especially with regard to Europe, Africa, and a large part
of Asia. North America and South America are depicted less accurately, the
Arctic still less accurately, and Australia and the Antarctic receive almost no
mention. Although a reasonable amount was known about the surface of the
Earth, almost nothing was known of Earth’s history—how it came to be,
how old it was, and how it was structured.
Five hundred years later, we have reliable answers to those questions.
What we don’t have is a reliable answer to what is going to happen to the
Earth in the near term. We do know that several billion years from now, the
Sun will expand to swallow the Earth, but a much more pressing question is
whether what we are currently doing will alter the habitability of the Earth
in the next century or so.
The nearest exoplanet is Proxima Centauri b, about 4.2 light-years from
Earth. Fortunately, it is located in the habitable zone, that region of space
surrounding a star where liquid water, and conceivably life, could exist. We may
even be able to get a look at Proxima Centauri b sometime in the next few
decades—but unless we actually develop some method of traveling through
space at velocities close to the speed of light, it will take millennia—or
longer—to get there.
Throughout human history, the Earth has been our home. It will be
our home for the foreseeable future. It behooves us to understand how it
functions and how we affect its functioning—because humanity isn’t going
anywhere soon.
Measurements
THE FIRST ACCURATE MEASUREMENT
OF THE SIZE OF THE EARTH
Geology
THE THEORY OF UNIFORMITARIANISM
There is only one requirement for becoming a scientist, and that is the urge
to satisfy a deep curiosity concerning the true nature of the world around us.
However, there is no requirement on the preliminaries that must be fulfilled
in order to become a scientist. James Hutton entered science after flirting
with one career and embracing another.
It is rare that a person receives a medical degree but never practices or
conducts medical research. However, after graduating from medical school,
Hutton became an agricultural chemist. Sensing that there were financial
opportunities in the budding chemical industry, Hutton established a factory
for the manufacture of ammonium chloride. He did so well at this that he
was able to retire at age forty-two to pursue his chief interest, the study of
the geological structures of his native Scotland.
This was the period of the Industrial Revolution, and it was becom-
ing clear that the study of geology had important economic consequences.
Correct location of canals and railways depended on knowledge of geologi-
cal conditions, to say nothing of the clues geology could yield concerning
potential mineral deposits. When Hutton began his work, the world’s most
respected geologist was Abraham Werner, a German who believed that the
Earth had originally been covered with water in which minerals had been
dissolved. Over time, solids precipitated out of the water to form the various
layers of rocks that covered the Earth. This theory came to be called neptun-
ism, after Neptune, the Roman god of the sea.
Werner was one of the first to call attention to the possibility that it
had taken the Earth a long time to reach its current state. At the time of
Hutton’s investigations, conservative points of view once again dominated
the intellectual landscape. As noted in the previous section, during the mid-
seventeenth century, Bishop James Ussher had made a detailed study of the
time periods in the Bible and had come to the conclusion that the Universe
had been created in 4004 BCE. This date was unquestioningly accepted by
many religious and secular authorities in Hutton’s time, and to challenge it
ran risks, although the sanctions imposed were not as drastic as those man-
dated by the Catholic Inquisition a century and a half earlier on Giordano
Bruno and Galileo.
Through long study, Hutton became convinced of two fundamental
ideas concerning geology. The first was that the processes that shaped the
Earth continued to operate even today, and they were processes that ran
slowly and at a uniform rate. As a result, Hutton’s theories were described by
the word “uniformitarianism.” Hutton was also convinced that the mecha-
nism driving the changes was the internal heat of the Earth. As Werner’s
theory was called neptunism, Hutton’s was called plutonism, in reference to
the deity controlling the heated nether regions.
Hutton published his conclusions in 1785, in a book entitled Theory of
the Earth. Although he reached several erroneous conclusions, among which
was the idea that the Earth had neither beginning nor foreseeable end, many
of his ideas form the basis of modern geology. Geological processes are cur-
rently seen as being driven by two different operating systems, the uniform
ones described by Hutton, and the catastrophic ones such as meteor im-
pacts, which are presently thought to be responsible for the extinction of the
dinosaurs.
Hutton is universally regarded as the father of modern geology, but if
the notes on which he was working had come to light earlier, he might have
achieved even greater fame. In 1947, scholars examined a Hutton manuscript
that had been undiscovered until then. In it, he outlined some of the basic
ideas on evolution by natural selection that were to occur to both Charles
Darwin and Alfred Wallace half a century after Hutton had suggested them.
Besides the types of motion they induce, the P and S waves have other,
different characteristics. P waves are much faster, and always are the first to
arrive after an earthquake. A P wave can travel through both solid and liquid,
but shearing is not possible in liquids, and so S waves only travel through
solid rock. Two other important properties of waves are reflection and
refraction—the refraction of light waves is responsible for the fact that a
straw in a glass of water seems bent at the interface between water and air.
Oldham showed that the different speeds of the P and S waves, combined
with their reflective and refractive properties, could be used as diagnostic
tools to probe the interior of the Earth.
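The simplest such diagnostic is locating an earthquake: the P wave arrives first, and the lag before the S wave arrives fixes the distance to the source. A minimal sketch, assuming typical crustal speeds of about 6 km/s for P waves and 3.5 km/s for S waves (real values vary with rock type and depth):

```python
V_P = 6.0    # typical crustal P-wave speed, km/s (assumed)
V_S = 3.5    # typical crustal S-wave speed, km/s (assumed)

def distance_km(s_minus_p_seconds: float) -> float:
    """Distance to an earthquake from the S-P arrival-time lag.

    Over a distance d, the P wave takes d/V_P and the S wave d/V_S,
    so the lag is d * (1/V_S - 1/V_P); solve for d.
    """
    return s_minus_p_seconds / (1.0 / V_S - 1.0 / V_P)

# An S wave arriving 30 seconds after the P wave puts the quake ~250 km away:
print(f"{distance_km(30.0):.0f} km")
```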
Andrija Mohorovičić studied records from a 1909 earthquake in Croatia
that produced a second set of waves that mirrored the first set. He concluded
that the second set occurred when the first set bounced off a discontinuity
that marked the dividing line between the surface of the Earth (known as
the crust) and another distinct layer of material. This next layer is known
as the mantle. The dividing line is known as the Mohorovičić discontinuity
(or Moho).
It had long been suspected that at the center of the Earth there exists a
dense metallic core, simply because the density of the Earth was known to be
greater than the density of rock. Oldham was able to analyze the waves from
numerous earthquakes to construct a simplified model of the Earth with a
metallic core. In the next few decades, the sophistication of seismic detectors
improved, and the database of earthquake records expanded significantly.
These advances made it possible for Inge Lehmann, a Danish geologist, to
demonstrate that the Earth’s core actually consisted of two distinct layers,
an outer liquid layer and an inner solid core. As a result of these efforts, the
structure of the Earth is now known. Even though there are local variations,
this consists basically of a crust anywhere from 10 to 40 kilometers thick, a
mantle of hot, mostly solid rock that extends down another 2,800 kilometers, a
liquid outer core some 2,200 kilometers thick, and a solid metallic inner core
with a radius of roughly 1,200 kilometers.
A truly great earthquake has the power to shake the entire Earth and set
it “ringing” like a massive spherical bell. Subtle information can be obtained
from the various “tones” with which the Earth rings. The analysis is
tomographic, similar in principle to the CAT scans used in hospitals. It was
later discovered that the Sun, too, rings like a bell, and the techniques
employed in analyzing the Earth are used in the new science of
helioseismology to investigate the structure of the Sun.
When an artist's life is tragic, as in the case of Mozart or Van Gogh,
it sometimes makes its way into the popular culture in the form of plays or
movies. When a scientist’s life is tragic, it does not seem to attract the same
attention.
Alfred Wegener was a respected German meteorologist in the first two
decades of the twentieth century. Wegener’s interests extended beyond me-
teorology, and like others before him, he was intrigued by the apparent close
fit between the west coast of Africa and the east coast of South America. Un-
like the others, he did not confine his investigations to the shape of the two
continents, but examined the geologic and fossil records of both continents.
It appeared to Wegener that similar rock strata and fossils could be found
on both continents. As a result, he proposed a theory of “continental drift.”
In this theory, all the continents had at one time formed a single land mass,
which had later fractured into the various continents. Over hundreds of mil-
lions of years, the continents had drifted apart.
There was a major difficulty with this theory. At the time Wegener
propounded it, no mechanism was known that would enable the continents
to drift apart. As for the similar fossil records in Africa and South America,
the geologists of the time proposed the existence of land bridges between
the continents, which had subsequently sunk beneath the sea. Wegener, the
geologists of the time suggested, should stick to meteorology.
Wegener had long been interested in Greenland, and had made three
successful expeditions there. His fourth expedition was a mission of mercy,
an attempt to bring food to a group of researchers who were running low
on supplies. When he arrived, there were not enough supplies remaining to
enable everyone to ride out the winter, so Wegener and a colleague took dog
sleds to try to make it to another camp. They never made it.
Ten years later, the world was plunged into the Second World War, and
it became important to obtain maps of the ocean floor. One person who
worked on this problem was Harry Hess, an American geologist who actu-
ally attained the rank of rear admiral in the U.S. Naval Reserve. In the early
1960s, F. J. Vine and D. H. Matthews made a startling discovery concern-
ing the structure of the ocean floor. It had been known since 1929 that the
Earth's magnetic field reverses its polarity every few hundred
thousand years. Vine and Matthews discovered evidence that the direction
of the Earth’s magnetic field was recorded in the rocks on the ocean floor in
adjacent parallel strips. The youngest strips are next to a sub-oceanic valley
with mountains on either side, which is known as a rift valley. The further
one travels from the rift, the older the magnetized strips of rock become.
Hess proposed that new ocean floor was formed by volcanic action at the
rift, and that the creation of new floor wedged the old strips further apart.
This meant that new ocean floor was being continually created, and since
the Earth was not getting larger, the old surface must be destroyed. Hess’s
theory was that the older portion of the surface would be destroyed by being
submerged and later melted by the intense heat of the Earth’s interior.
This theory was refined during the 1960s, as it was discovered that the
Earth’s surface consists of approximately a dozen large plates, which are
continually moving and colliding with one another. As the plates collide,
mountains are formed and one plate is submerged under the other, with the
destroyed plate surface being replaced by volcanic action at the rift valleys.
This theory, known as plate tectonics, not only explained why earthquakes
occurred at particular locations (where the plates were colliding), but also
provided the mechanism for Wegener’s continental drift—the continents
drift on the moving plates. Like Van Gogh and Mozart, Wegener had been
vindicated by a subsequent generation.
Although Hess was an acknowledged expert on the Earth’s oceans, he
also lent his expertise to NASA as it prepared for the first landing on the
moon. Hess died in August 1969, barely a month after the successful mission
of Apollo 11.
One of the most important discoveries in the earth sciences has been the
complicated interrelationship between the oceans and the atmosphere. It is
a story of exploration, of data gathering, of mathematical analysis, and of
physical modeling, a pattern that recurs throughout much of science.
All the oceans of the world are interlinked, but they are not just stag-
nant pools of water. In the middle of the eighteenth century, the Board of
Customs in Boston complained that mail packet ships from England took
two weeks longer to make the transatlantic crossing than did Rhode Island
merchant ships. Benjamin Franklin asked a Nantucket sea captain if he
could find an explanation, and the captain told him that the American ships
avoided the Gulf Stream on the westward leg, but the British ships paid no
attention to it. Since the Gulf Stream moved at three miles per hour relative
to the surrounding water, the British ships were effectively trying to swim
upstream.
Franklin was the first to draw a chart of the Gulf Stream, the greatest of
the ocean currents. The explorer Alexander von Humboldt discovered an-
other major ocean current off the coast of Peru. Now called the Humboldt
Current, its behavior has a profound effect on world climate. At irregular
intervals, the normally cold Humboldt Current is deflected away from the
coast by a rush of warm water surging down from the equator. This is disas-
trous for the local economy, as the anchovy harvest on which it depends is
greatly reduced by warm water. More importantly, this El Niño condition,
as it is called, has a profound effect on the world’s climate, often producing
floods in the United States and drought in Africa.
The first great advance in understanding the physics of the ocean came
from an analysis performed in 1835 by a French physicist, Gaspard de Corio-
lis. He showed that the rotation of the Earth deflected moving air and water
eastward when moving away from the equator, and westward when moving
toward it. This deflection, which was analyzed mathematically some twenty
years later by William Ferrel, is known as the Coriolis force, and it has a
strong influence on the formation of both wind and water vortices. In the
northern hemisphere, cyclonic storms circulate counterclockwise, while they
circulate clockwise in the southern hemisphere. (Contrary to popular belief,
the Coriolis force is far too weak to govern the direction in which water
swirls down a household drain.)
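The strength of the Coriolis deflection is easy to estimate. The sketch below computes the Coriolis parameter, two times the Earth's rotation rate times the sine of the latitude, and the sideways acceleration it produces on air moving at an illustrative 10 meters per second. The very smallness of the numbers shows why the effect governs ocean currents and weather systems, which act over days, but not a sink draining in seconds:

```python
import math

# Coriolis parameter f = 2 * Omega * sin(latitude) gives the strength
# of the rotational deflection at each latitude.
OMEGA = 7.2921e-5  # Earth's rotation rate, radians per second

def coriolis_parameter(lat_degrees):
    return 2 * OMEGA * math.sin(math.radians(lat_degrees))

# Sideways acceleration on air or water moving at 10 m/s (illustrative).
speed = 10.0
for lat in (0, 15, 45, 90):
    accel = coriolis_parameter(lat) * speed
    print(f"latitude {lat:2d} deg: deflection ~ {accel:.2e} m/s^2")
```

At 45 degrees latitude the deflecting acceleration is about a thousandth of a meter per second squared, negligible over seconds but decisive over days of steady flow.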
The twentieth century saw an intensive investigation of the relationship
between the atmosphere and the ocean. Because of the importance of atmo-
spheric and oceanic behavior on the Scandinavian countries, many of the
major contributors in this area have been Scandinavians. Three generations
of the Bjerknes family—Carl, Vilhelm, and Jacob—have devoted their lives
to the study of the oceans and the atmosphere. The fronts that appear on the
daily weather map you see on TV use symbols devised by Jacob Bjerknes.
Vilhelm, the second of the three generations, not only corrected errors
in his father’s analyses, but was an extremely inspirational teacher. Among
his students were his son (naturally), and two of the twentieth century’s fore-
most scientists in this area. Vagn Ekman studied the effect that the Coriolis
force had on the top layer of the ocean; this layer is now known as the Ekman
layer. Another student, Carl-Gustaf Rossby, is responsible for discovering
the jet streams, those fast-moving currents of air which influence not only
the daily weather, but also the time necessary for airplane travel. In a sense,
the jet stream plays a role in the atmosphere similar to the one the Gulf
Stream plays in the ocean—affecting not only weather but transportation.
One of Rossby’s contributions to meteorology was that he was among
the first to apply computers to the problem of weather forecasting. Com-
puterized weather forecasting has improved substantially over the last half-
century; the seven-day forecasts of today are as accurate as the two-day
forecasts of the early 1970s. Computers are also currently being used to ana-
lyze and forecast future climate changes. A disturbing discovery is that the
coupled interplay of oceanic and atmospheric currents has resulted in bizarre
climatic shifts in the past, with the world being suddenly jolted into either
glacial or tropical conditions in extremely short periods of time. The next
great climatic shift may not come from the greenhouse effect, but from con-
ditions deep below the surface of the ocean. And warming and cooling seem
to be interlinked. Recent studies have shown that there is a strong correlation
between the warming of the Arctic and the severe winters in North America,
and a proposed cause is the shifting of a current deep in the Atlantic Ocean.
the middle of valleys, which obviously had been transported to their present
locations. From several scientists and non-scientists he heard the conjecture
that the boulders had been brought from high atop the mountains to the
valleys by means of glaciers that had long since receded. Agassiz decided to
investigate for himself.
The alternative source of movement for these boulders was the “Great
Flood” described in the Bible. Raging rivers were known to have the ability
to move large boulders, and no one had yet amassed evidence to demonstrate
that glaciers could also do the job. Agassiz found that glaciers generally ter-
minated in rocks and boulders, and these rocks bore scratches and grooves
similar to those found by Perraudin. In 1839, Agassiz found a cabin that had
been built on a glacier in 1827, and had moved a mile down the glacier from
the original site. He then pounded a straight line of stakes deep into the ice;
within two years the line had not only moved but had bowed into a U shape.
This showed that the ice in the center moved faster than the ice on the sides,
which was slowed by friction with the surrounding mountains.
Agassiz was now persuaded that there had indeed been an ice age. He
found evidence that glaciers once existed in the British Isles. As a result of his
years of research, the existence of an ice age was finally established.
Modern science has uncovered the fact that ice ages have been a recur-
ring phenomenon; evidence exists that there were ice ages hundreds of mil-
lions of years ago. Now that we know that ice ages are a part of our history,
the question arises: what causes them? One of the most ambitious efforts
in this direction is the theory of the Serbian physicist Milutin Milan-
kovitch, who published three papers between 1912 and 1914. In these he
hypothesized that there were two astronomical cycles that played a major
role in determining Earth’s climate—the 41,000-year cycle of the inclina-
tion of the Earth’s axis, and the 22,000-year oscillation of the Earth-Sun
distance. Milankovitch’s theory has generated a good deal of interest in the
meteorological community. Even though his original formulation seems to
have been disproved, every so often a scientist finds another cycle that, when
combined with the ones cited by Milankovitch, does an increasingly good
job of predicting climate conditions.
Another fascinating question is: when is the next ice age? In an era when
we are continually reminded of the possibility of overheating due to the
greenhouse effect, it would be ironic if we were next to suffer through a trial
by ice rather than a trial by fire. Recent computer simulations have raised
the disturbing possibility that the meandering of deep, cold, ocean currents
is chaotic in nature, and a dislocation of these currents could flip the Earth
into an ice age in less than a century.
It is only within the past few decades that ecology, a word coined by the Ger-
man philosopher and biologist Ernst Haeckel to describe the interrelation-
ship between living things and the environment, has become a recognized
scientific subject. However, it was the Swedish chemist Svante Arrhenius
who was responsible for the first discovery concerning the possible effects of
the activities of man on the environment.
Arrhenius was a top-notch chemist whose theory of the behavior of ions
resulted in his winning one of the first Nobel Prizes. In 1896, Arrhenius
noted that the gas carbon dioxide had the property of allowing the high-
frequency sunlight that the Earth received by day to pass through it, while
absorbing the low-frequency infrared light that the Earth reradiated as heat by
night. This meant that a buildup of carbon dioxide in the atmosphere, which
Arrhenius noted was taking place because of increased industrialization,
could be accompanied by an increase in heat. This was the first discussion of
the greenhouse effect. Later in the twentieth century it would be shown that
the planet Venus had succumbed to a runaway version of the greenhouse ef-
fect, resulting in an atmosphere of carbon dioxide at crushing pressures and
a planetary temperature of 475°C.
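The logic of Arrhenius's argument can be illustrated with a toy "zero-dimensional" energy-balance model, in which the sunlight the Earth absorbs is set equal to the infrared it emits. The effective emissivity used below is an illustrative stand-in for the infrared-trapping action of greenhouse gases, not a figure Arrhenius himself used:

```python
# Toy zero-dimensional energy balance: absorbed sunlight = emitted infrared.
#   S/4 * (1 - albedo) = eps * sigma * T**4
# A lower effective emissivity eps (more infrared absorbed by gases such
# as carbon dioxide) forces a warmer surface temperature T.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0        # sunlight arriving at Earth, W/m^2
ALBEDO = 0.3      # fraction of sunlight reflected away (approximate)

def surface_temp(eps):
    absorbed = (S / 4) * (1 - ALBEDO)
    return (absorbed / (eps * SIGMA)) ** 0.25

print(f"no greenhouse (eps = 1.00): {surface_temp(1.00) - 273.15:6.1f} C")
print(f"greenhouse    (eps = 0.61): {surface_temp(0.61) - 273.15:6.1f} C")
# The assumed eps = 0.61 is chosen to land near the observed 15 C.
```

Without any greenhouse trapping the model yields a frozen world of about minus 18 degrees Celsius; a modest amount of infrared absorption lifts it to the habitable temperatures we actually enjoy.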
Ecology as a science was neglected during the first half of the twentieth
century. The key development in its emergence occurred as the result of a
correspondence between a naturalist author and a friend who owned a bird
sanctuary. Rachel Carson was an aquatic biologist with the U.S. Bureau of
Fisheries who had abandoned her youthful interest in writing to major in
zoology in college. While with the bureau, she prepared a series of radio
broadcasts on underwater life, and eventually published her first book, Under
the Sea-Wind, in 1941. Her second book, The Sea Around Us, published ten years
later, was an instant classic and a huge financial success.
The financial freedom she achieved enabled her to observe and write
about nature. A friend of Carson who owned a private bird sanctuary wrote
to Carson to describe the appalling effects of DDT spraying on the birds
within the sanctuary. DDT was an insecticide that had proved tremendously
beneficial in wiping out mosquito populations and greatly reducing the oc-
currence of malaria, but its effect on wildlife had not been thoroughly in-
vestigated. Carson’s study of the problem resulted in the book Silent Spring,
unarguably the single most influential book ever written on environmental
problems. Published in 1962, by the end of the year legislators had intro-
duced over 40 bills concerning pesticide regulation, and the environmental
movement as we know it today was born.
Carson died in 1964, not living long enough to see the banning of DDT
in the United States in 1972. Environmental studies are now part of the
curriculum of many colleges and universities. A heightened awareness of the
impact of man on the environment has caused a major change in the actions
of business and industry, which must now file environmental impact reports
before major construction projects are authorized.
A measure of the increased consciousness concerning environmental
problems can be seen by looking at the history of chlorofluorocarbons
(CFCs), chemicals commonly used in refrigeration. In 1974, Mario Molina
and F. Sherwood Rowland warned that CFCs might be contributing to the
destruction of the ozone layer that protects the Earth from ultraviolet radia-
tion. This report was at first taken lightly. In the 1980s, satellites detected
the growth of a hole in the ozone layer over the Antarctic. As a result, the
Montreal Protocol of 1987 committed its signatories to phase out CFCs entirely by
2000. In 1995, Molina and Rowland received the Nobel Prize in chemistry
for their work.
more closely, and so he took a shortcut. Rather than start all over again, he
took the output from the computer for a day midway in the sequence, and
used that output as the initial conditions. For a while, the computer results
duplicated the previous run, just as one would expect. Then, very slowly,
discrepancies began to appear. After some time, the results of the second run
bore no relation to the results of the first run.
It took Lorenz some time to realize what had happened. The computer
had computed and stored data to six-digit accuracy, such as .318297, but
typed them out only to three-digit accuracy, in this case .318. When Lo-
renz re-entered the numbers, he had entered only three digits. As a result,
the computer was presented with an initial condition of .318, rather than
.318297. This minuscule difference in initial conditions would have major
effects later in the computer run.
The fact that minuscule initial differences can have subsequent profound
consequences is now known as the “butterfly effect.” The name comes from
the notion that whether or not a butterfly flaps its wings in Hawaii can de-
termine whether or not there is a tornado in Kansas three weeks later.
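Anyone with a computer can now repeat Lorenz's discovery in miniature. The sketch below integrates the three equations of Lorenz's famous convection model twice, with starting points differing by one part in a million, and prints how far apart the two runs drift; the step-by-step integration method and parameter values are the standard illustrative ones, not Lorenz's original setup:

```python
# Lorenz's convection model, integrated twice from starting points that
# differ by one part in a million (simple Euler steps, small dt).
def step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 20.0)
b = (1.000001, 1.0, 20.0)  # the "rounded" initial condition
dt, steps = 0.001, 40000

for i in range(1, steps + 1):
    a, b = step(a, dt), step(b, dt)
    if i % 10000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {i * dt:4.0f}: separation = {gap:.6f}")
```

The two trajectories track each other for a while, then the microscopic difference amplifies until the runs bear no resemblance to one another, exactly as Lorenz's printouts did.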
The butterfly effect was to be the first example of a host of processes that
are now known as chaotic phenomena. Prior to their discovery, a process
would be described as either deterministic or random. A simple example of
deterministic phenomena would be the orbits of the planets, which are so
reliable that it is possible to predict eclipses centuries in advance. A flip of a
coin, or the decay of a radioactive atom, is an example of a random process.
Chaos is the study of those phenomena that appear on the surface to be pre-
dictable, but whose predictability turns out to be intrinsically limited.
Sometimes a discovery in science triggers a re-examination of many
other areas. Just as Newton’s mechanics spurred the search for mathematical
laws in areas encompassing all branches of human knowledge, Lorenz’s work
has led to the discovery of chaos in areas as diverse as the fibrillation of the
heart during a heart attack, and the behavior of stock markets during finan-
cial crashes. The realization that some phenomena are chaotic has expanded
the way we describe the Universe.
It has also exacerbated the fear that at some stage, we may inadvertently
load on the straw that breaks the camel’s back of the climate system. We do
not know whether what we are doing will precipitate an ice age, or a runaway
greenhouse effect like the one on Venus. But we had better pay attention to
the warnings the simulations give us, as one day we may awake to the fact that
those simulations have indeed been the distant early warning (DEW) line for
an ominous reality.
CHAPTER 3
Chemistry
Organizing Principles
Isaac Newton, my candidate for the most influential scientist in history, is
best known for his contributions to math and physics. However, he dedi-
cated a number of years to an attempt to do for alchemy what he had done
for physics, and accomplished nothing of note.
In devising his theories of mechanics and gravitation, Newton had a lot
to work with. Tycho Brahe and Johannes Kepler had provided valuable data
on the orbits of the planets, and Galileo had come up with his law of falling
bodies. But there existed no data on subjects related to chemistry that would
have helped him, as the rules that formed the basic organizing principles of
chemistry were not to be discovered until nearly a century after Newton had
done his seminal work in physics.
burning, the substance released its phlogiston into the air. Substances that
burned especially well were rich in phlogiston; substances that did not burn
contained no phlogiston.
On the surface, this was certainly a plausible theory. It explained the fact
that some substances burned better than others, and it also explained why,
after burning, a substance was no longer capable of combustion. However,
all attempts to isolate phlogiston met with defeat.
Not only that, there were experiments whose results were at odds with
the phlogiston theory. When mercury or tin was burned, the weight of the
resultant material after burning was greater than the weight of the material
before burning. Phlogiston theory predicted that the substances, having lost
their phlogiston, would be lighter after burning.
Faced with the results of these experiments, the proponents of phlogis-
ton theory did what many scientists since have done—attempted to modify
the theory to fit the observed data. Phlogiston was hypothesized to be a
substance without weight, or possibly even with negative weight, and so under the right
circumstances, a substance could gain weight by losing phlogiston.
By the latter portion of the eighteenth century, air had been shown to be
a mixture of various gases, one of which (oxygen) had been shown not only
to enhance combustion, but to be necessary for respiration. Enter Antoine
Lavoisier, a French lawyer and tax collector turned chemist. He performed
an experiment that was both simple and elegant. First he heated mercury
in the presence of oxygen, carefully weighing the amount of oxygen and
mercury before, and the amount of material (mercuric oxide) and oxygen
after. Then he heated the mercuric oxide to the point where the oxygen
was released, again carefully measuring the amount of material with which
he started and with which he finished. From the results of this experiment,
he was able to conclude not only that combustion consisted of combining
with oxygen, but that in a chemical reaction—even though the substances
involved may change—the total quantity of reactants doesn't change. With
one experiment, Lavoisier had not only demolished the phlogiston theory,
but established one of the most fundamental laws of chemistry, that of con-
servation of mass.
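Lavoisier's bookkeeping can be checked with modern atomic weights. A minimal sketch for the mercury experiment, in which two atoms of mercury combine with one molecule of oxygen:

```python
# Mass bookkeeping for Lavoisier-style calcination of mercury:
#   2 Hg + O2 -> 2 HgO
# using modern molar masses in grams per mole.
M_HG, M_O = 200.59, 16.00

reactants = 2 * M_HG + 2 * M_O   # two mercury atoms plus one O2 molecule
products = 2 * (M_HG + M_O)      # two units of mercuric oxide
print(f"reactants: {reactants:.2f} g   products: {products:.2f} g")
assert abs(reactants - products) < 1e-9  # nothing gained, nothing lost
```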
The idea of conservation, although not in the quantitative form in which
Lavoisier stated it, was broached more than two millennia before by the
Greek philosopher Epicurus, who believed that “the totality of things was
always such as it is now, and always will be.” The Persian scientist Nasir al-Din
al-Tusi nailed it more precisely when he wrote, “A body of matter cannot
disappear completely. It only changes its form, condition, composition, color
and other properties and turns into a different complex or elementary mat-
ter.” But it was Lavoisier who actually established it quantitatively.
Lavoisier had the good fortune to be married to a woman who was of
substantial help to his work, but had the bad fortune to be living in France
during the time of the French Revolution. His work as a tax collector made
him a natural target of the political frenzy that was sweeping the nation, and
he was condemned to death by guillotine. When it was argued that Lavoisier
was a great scientist, the reply from the presiding judge was, “The Republic
has no need of scientists.” Sadly, the last few years have seen substantial evi-
dence that we are living in an era—and a republic—in which many hold a
view similar to that of the presiding judge.
Commenting on Lavoisier's death, the brilliant mathematician and physi-
cist Joseph-Louis Lagrange declared, “It took but a moment to sever that
head, though a hundred years perhaps will be unable to replace it.”
In 1961, the brilliant physicist Richard Feynman began the basic physics
course at Caltech with the following words: “If, in some cataclysm, all of
scientific knowledge were to be destroyed, and only one sentence passed on
to the next generation of creatures, what statement would contain the most
information in the fewest words? I believe it is the atomic hypothesis . . . that
all things are made of atoms—little particles that move around in perpetual
motion.”
The idea that all things are made of atoms, the smallest particles in the
Universe to retain their identity, goes back to the Greek philosophers. But
it is one thing to speculate on the ultimate constituents of matter, and quite
another thing to develop a workable theory that not only explained, but pre-
dicted. At the outset of the nineteenth century, it was known that substances
such as hydrogen and oxygen were elements, and that elements were capable
of combining into compounds. Water, for instance, was a substance made by
combining hydrogen and oxygen, and the same quantities of hydrogen and
oxygen always produced the identical quantity of water. What mechanism
could account for this?
John Dalton, a Quaker schoolteacher living in Manchester, England,
spent the summer of 1803 pursuing an extension of the Greek theory of
atoms. Unlike the Greeks, whose atoms were philosophical constructs, Dal-
ton’s atoms possessed a tangible physical property: weight. As Dalton wrote:
“An enquiry into the relative weights of the ultimate particles is, as far as I
know, entirely new. I have lately been prosecuting this enquiry with remark-
able success.” Dalton realized that, if one were to hypothesize that each ele-
ment consisted of identical atoms, all having exactly the same weight, this
would account for the manner in which the elements combined to produce
compounds.
Science has always been a highly conservative and hardheaded endeavor,
and new ideas tend to be treated with reserve bordering on skepticism.
However, so brilliant was Dalton’s atomic theory, and so strong its predic-
tive power, that it was accepted virtually instantaneously. During the course
of the next hundred and fifty years, the physical properties of atoms were
determined with ever-increasing accuracy, even though it wasn’t until the
1980s that atoms were first actually seen. Without Dalton’s atomic theory,
chemistry would be reduced to the hit-and-miss hodgepodge of mixing and
heating that characterized its predecessor, alchemy, and we would all be sig-
nificantly the poorer for it.
Dalton was a man of exceptionally regular habits. Every Thursday
he would take a walk through the English countryside to play bowls (no
bowling alleys existed in the nineteenth century), and every day for almost
sixty years he would meticulously record the temperature, rainfall, and air
pressure. During his lifetime he recorded more than 200,000 meteorological
measurements, a database probably unparalleled for the era, and one that was
never put to any noteworthy use. Or was it? Perhaps a lifetime of accumulat-
ing and reflecting upon meteorological data helped create the mindset that
enabled Dalton to devise the atomic theory.
AVOGADRO’S HYPOTHESIS
The list of great scientists includes many individuals who did not start out
to be scientists, but who embarked initially upon another career, such as law
(one suspects that the list of great lawyers who started out as scientists is an
extremely short one). A case in point is Amedeo Avogadro, who received a
doctorate in law, practiced for three years, and then became a scientist.
This transformation took place roughly around the turn of the nine-
teenth century, when much of the scientific world was turning its attention
to Dalton’s atomic theory. Chemistry had burgeoned during the latter por-
tion of the eighteenth century, and there was a concerted effort to explain
every experimental result in terms of the atomic theory.
One such result was Gay-Lussac’s law of combining volumes. A care-
ful experimenter, Joseph Louis Gay-Lussac had discovered that if two gases
the molecules are separated by substantial distances (substantial, that is, rela-
tive to the size of the molecules themselves). So convinced by this reasoning
was Dalton that he suggested Gay-Lussac’s experiments must be in error, and
that Gay-Lussac should redo them.
Should you decide to cut up some beef, potatoes, carrots, and onions for
dinner, and cook them together, you know precisely what you will get—beef
stew. Moreover, you probably have a pretty fair idea of how it will taste. The
situation was nowhere near as simple for chemists in the middle of the nineteenth
century.
By that time, the world’s chemists had discovered sixty-three elements,
the basic ingredients in the cosmic cookbook. The rules of the cosmic cook-
book, however, remained maddeningly elusive. For example, when sodium, a
lightweight fizzy metal, was “cooked” (chemically combined) with chlorine,
a poisonous yellow-green gas, the result was common table salt, sodium chlo-
ride, a compound that was neither metallic nor gassy, poisonous nor fizzy.
Until the rules of the cosmic cookbook could be discovered, the potential of
chemistry would be limited to hit-or-miss activity.
One of the fundamental discoveries of science is that many phenomena
in the natural world can be organized into a pattern. Dmitri Mendeleev, a
Russian chemist, decided to try to organize the known elements into a pat-
tern. To do so, he first arranged these elements in increasing order of atomic
weight, the same physical property that had attracted the attention of John
Dalton when he devised the atomic theory. He then imposed another level
of order by grouping the elements according to secondary properties such as
metallicity and chemical reactivity—the ease with which elements combined
with other elements.
The result of Mendeleev’s deliberations was the periodic table of the ele-
ments, a tabular arrangement of the elements in both rows and columns. In
essence, each column was characterized by a specific chemical property such
as alkali metal or chemically nonreactive gas. The atomic weights increased
from left to right in each row, and from top to bottom in each column.
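A toy version of Mendeleev's procedure, using a handful of elements and modern atomic weights for illustration, shows how periodicity emerges from a simple sort:

```python
# A toy version of Mendeleev's procedure: order elements by atomic
# weight and watch a chemical property recur at regular intervals.
elements = [
    ("Li", 6.9, "alkali metal"), ("Be", 9.0, "alkaline earth"),
    ("F", 19.0, "halogen"), ("Na", 23.0, "alkali metal"),
    ("Mg", 24.3, "alkaline earth"), ("Cl", 35.5, "halogen"),
    ("K", 39.1, "alkali metal"), ("Ca", 40.1, "alkaline earth"),
]

for symbol, weight, family in sorted(elements, key=lambda e: e[1]):
    print(f"{symbol:2s} {weight:5.1f}  {family}")
# Each family reappears periodically as the weights climb -- the
# pattern that gave the periodic table its name.
```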
When Mendeleev began his work, not all the elements were known.
As a result, there were occasional gaps in the periodic table—places where
Mendeleev would have expected an element with a particular atomic weight
and chemical properties to be, but no such element was known to exist. With
supreme confidence, Mendeleev predicted the future discovery of three such
have been created that have had a deleterious effect. But overall, we have
produced, are producing, and will continue to produce better living through
chemistry.
Electrochemistry
Humphry Davy may have been the Horatio Alger of science, a story of rags
to riches. He was born into a family burdened with debt, and as he did not enjoy school
he dropped out at the age of seventeen to become a pharmacist’s apprentice.
As so often happens, the value of an education becomes apparent only after
one leaves school, and so Davy began an extensive course of self-education.
His interests were initially diverse, as are the interests of many seventeen-
year-olds, but they became focused after he read a book on chemistry by the
great French scientist Antoine Lavoisier. At that time there wasn’t much of a
theoretical basis for chemistry, which probably bothered Davy not at all, as
he was the quintessential experimenter.
His favorite guinea pig was himself. After he became the superintendent
of an institution for the study of therapeutic uses of various gases, Davy
would think nothing of inhaling the products of his experiment to test their
effect. Fortunately, he never inhaled any cyanide, but he nearly suffocated
twice, once when he tried to breathe hydrogen, and once when he tried to
breathe carbon dioxide. However, one of these experiments paid very large
dividends. Davy discovered that the gas nitrous oxide made him feel giddy
and intoxicated, and also reduced the sensation of physical pain. His obser-
vations on this subject were initially ignored, but decades later nitrous oxide
became the first chemical anesthetic. It is still used today.
Davy was probably the first scientist to popularize science. When he was
hired to lecture for the Royal Institution, his lectures and demonstrations were
so interesting and well-presented that he soon became a darling of London’s
high society. As an experimenter, Davy was brilliant rather than meticulous.
He would get interested in a topic, and then experiment until boredom set
in, after which he would switch to another topic.
After learning of Volta’s development of the electric battery, Davy built
powerful batteries, and in 1805 developed arc lighting, in which a strong
current forces an electric arc to bridge the gap between electrodes. Like his
discovery of the anesthetic powers of nitrous oxide, arc lighting had to wait
decades before a practical use was discovered.
However, Davy’s most noteworthy contributions were in the field of
electrochemistry. It had been discovered that an electric current could be
used to break up water into hydrogen and oxygen. At the time, substances
such as potash, lime, and magnesia were suspected of being metallic com-
pounds, but no one was able to demonstrate this. Davy used his powerful
batteries to pass an electric current through molten potash. This liberated
globules of a previously unknown element: potassium. On seeing the ap-
pearance of the shiny metallic drops, Davy danced around his laboratory in
glee. Within a week, he had isolated metallic sodium from soda (it is easy to
see where Davy found the names for his discoveries). Soon he had discovered
the elements magnesium, barium, strontium, and calcium as well.
Davy’s experiments showed the importance of electrochemistry to the
scientific world. However, perhaps Davy’s greatest contribution to science
was his hiring of Michael Faraday as his assistant. Where Davy was im-
petuous, Faraday was meticulous. Davy discovered elements, but Faraday
discovered laws. Among these were Faraday’s laws of electrolysis, which
demonstrated that there is a quantitative relationship between electricity and
chemistry. These laws would later prove to be very important in demonstrat-
ing that electricity was a stream of particles, a discovery that initiated atomic
physics.
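Faraday's first law of electrolysis makes that electricity-chemistry relationship quantitative: the mass liberated at an electrode is proportional to the charge passed. A brief sketch, using the modern value of the Faraday constant and copper as an illustrative example:

```python
# Faraday's first law of electrolysis: mass deposited is proportional
# to the electric charge passed.  m = (Q / F) * (M / z)
F = 96485.0  # Faraday constant: coulombs per mole of electrons

def mass_deposited(amps, seconds, molar_mass, z):
    """Grams of metal deposited; z = electrons transferred per ion."""
    charge = amps * seconds
    return (charge / F) * (molar_mass / z)

# One ampere for one hour through copper sulfate (Cu2+ ions, z = 2).
print(f"{mass_deposited(1.0, 3600, 63.55, 2):.3f} g of copper")
```

One ampere flowing for an hour deposits just over a gram of copper, and doubling either the current or the time doubles the deposit.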
Even as Davy was receiving the plaudits of the scientific world, he could
sense that his assistant Faraday would eventually supersede him. Davy be-
came jealous of the acclaim that Faraday was receiving, and when Faraday
was nominated for membership in the Royal Society, there was only one
negative vote—Davy’s. It is conceivable that this action was the result of the
systemic poisoning Davy had suffered in his early days as an experimenter,
when he would inhale or taste any compound. His health began to deterio-
rate substantially when he was only in his thirties. He suffered a stroke at the
relatively young age of forty-nine, and died two years later.
to synthesize natural organic compounds, but create new ones never seen in
nature. The plastics industry that has so revolutionized our lives is unques-
tionably the result of Perkin’s accidental discovery.
Perkin clearly possessed the Midas touch when it came to chemistry.
After ten years or so in the dye business, he was wealthy enough to retire and
pursue his first love, research. One of his first achievements was the synthesis
of coumarin, which is responsible for the pleasant smell of new-mown hay.
This represented the beginning of the synthetic perfume industry that, like
synthetic dyes, has annual revenues in the billions of dollars.
CHEMICAL BONDS
By the middle of the nineteenth century, chemists had confirmed the cor-
rectness of Dalton’s theory that every compound was constructed using a
fixed ratio of elements. Water was written H2O (as it still is), but the mecha-
nism by which two atoms of hydrogen combined with one atom of oxygen
to produce water was a mystery.
The first person to make a noticeable dent in this problem was the Ger-
man chemist Friedrich Kekulé. At the time, chemists had formulated a rough
idea of the valence, or combining power, of each element. Kekulé came up
with the idea that each molecule had a structural pattern, which could be
represented by lines joining atoms. For example, ethyl alcohol, which had the
chemical formula C2H6O, could be shown as CH3-CH2-OH, with each dash
representing a bond joining two atoms.
One of the unwritten rules that many scientific graduate students learn is
that it doesn’t pay to be too brilliant when you are writing a thesis. There
is a very good reason for this—in order to get to be a player in the science
game, you first have to be accepted by the scientific community. Getting a
doctorate requires doing something that the science community regards as
good science, and many brilliant ideas require a lot of time before they are
considered good science.
Svante Arrhenius had been a child prodigy. When the time came for him
to write his thesis, he chose as his subject electrolytes, substances that were
capable of conducting electricity when dissolved. Arrhenius proposed that
when a molecule of an electrolyte actually dissolved, it separated into charged
particles called ions, which enabled the current to flow. At the time, chemists
adhered to Dalton’s picture of the atom as indivisible, and so Arrhenius’s
theory was rejected by the majority of the scientific community. His thesis,
however, was given the lowest possible passing grade, possibly on the grounds
that even though it seemed obviously erroneous, it was undoubtedly brilliant.
When a phenomenon is not completely understood, scientists may have
a majority opinion, but there is usually a skeptical minority. Arrhenius sent
copies of his thesis to several of the leading chemists, one of whom was Wil-
helm Ostwald. Ostwald was convinced of the validity of Arrhenius's ideas and
helped to spread them, even as Arrhenius was continuing to gather evidence
to support his views.
Gradually, the chemists became increasingly convinced of Arrhenius’s
ideas, but the key development in establishing them occurred when J. J.
Thomson identified the electron, a subatomic particle, and when Henri Bec-
querel showed that radioactivity involved the breakdown of atoms.
Many of the ideas of both Arrhenius and Ostwald lay at the junction
of both physics and chemistry; indeed, the two scientists practically created
the subject of physical chemistry. An excellent example of the importance of
physical chemistry can be seen in the process of catalysis. Arrhenius realized
that chemical reactions usually proceed more rapidly when the reacting sub-
stances are heated. We see this every day in the kitchen; it takes a shorter time
to cook a roast at a high temperature than at a low one. Arrhenius suggested
that molecules needed to be supplied a certain amount of energy, the “energy
of activation,” in order to participate in a chemical reaction.
Ostwald, meanwhile, was busily applying Arrhenius’s ideas on ioniza-
tion to a different aspect of catalysis. Some chemical reactions, such as the
production of sugar from starch, are catalyzed by the presence of an acid.
Ostwald realized that this type of catalysis involved lowering the energy of
activation of the reacting substances.
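Both effects, heating and catalysis, are captured by what is now called the Arrhenius equation, in which the reaction rate depends exponentially on the ratio of activation energy to temperature. A sketch with illustrative numbers (the activation energies below are invented for the example):

```python
import math

# Arrhenius equation: rate constant k = A * exp(-Ea / (R * T)).
# Raising T (heating) or lowering Ea (catalysis) both increase k.
R = 8.314  # gas constant, J/(mol K)

def rate_constant(Ea, T, A=1.0):
    return A * math.exp(-Ea / (R * T))

Ea = 80000.0  # illustrative activation energy, J/mol
k_room = rate_constant(Ea, 298)
k_warm = rate_constant(Ea, 308)      # ten degrees hotter
k_cat = rate_constant(60000.0, 298)  # catalyst lowers Ea

print(f"10 degrees warmer: {k_warm / k_room:.1f}x faster")
print(f"catalyzed (Ea 80 -> 60 kJ/mol): {k_cat / k_room:.0f}x faster")
```

For these numbers, a ten-degree rise roughly triples the rate, the source of the kitchen rule of thumb, while the catalyst speeds the reaction by a factor of thousands without any heating at all.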
Ostwald’s observations on catalysis immediately found application in
industry. Ostwald himself helped devise a procedure using platinum as a
catalyst in making nitric acid more efficiently. Because nitric acid is im-
portant in the manufacture of high explosives, Ostwald’s process enabled
Germany to produce explosives during World War I without importing raw
materials. The contribution of Ostwald to lengthening World War I must be
counterbalanced by the fact that Ostwald’s theories of catalysis were later to
help explain the activity of enzymes, and so Ostwald indirectly contributed
to the development of biochemistry and genetic engineering.
It is rather paradoxical that many of the contributions of Arrhenius and
Ostwald could be explained by the atomic theory of Dalton, yet Ostwald
himself did not believe in that theory until he was more than fifty years old!
Perhaps this explains the fact that Ostwald was such a strong early supporter
of Arrhenius's ideas on ionization: to anyone who believed in indivisible
atoms, ions posed a serious problem, but since Ostwald didn't believe in the
atomic theory, he could accept Arrhenius's ideas without difficulty.
The theory of evolution has been under constant siege from the moment it
was first propounded, and attempts have been made in many states either to
have it removed entirely from the curriculum, or to denigrate its status. One
strategy has been to require that creationism, an alternate view of the Uni-
verse in which everything was created by a supreme being, must be taught as
well as evolution, placing the two theories on an equal footing.
In order to consider a suit by creationists to require the teaching of
creationism, a judge undertook the reasonable task of trying to discover
exactly what constitutes a scientific theory. He finally settled on a definition
of scientific theory from the philosopher Karl Popper: a scientific theory is
one that can be falsified. Experiments can be performed, or evidence can be
found, which demonstrate that the theory is false. Under that definition,
creationism is not a scientific theory because it is impossible to perform an
experiment or find evidence that will invalidate the fundamental hypothesis
of creationism.
Georg Stahl, a German who was a contemporary of Isaac Newton, was
a pioneer in the fields of chemistry and biology. He observed, he experi-
mented, he theorized, and he came up with two theories that were to have a
profound effect on the development of chemistry. As we have seen, his phlo-
giston theory on the nature of combustion was shown to be false by Antoine
Lavoisier in the latter portion of the eighteenth century. His other theory,
vitalism, held on substantially longer.
Vitalism can be summarized by saying that there are two sets of laws: one
governing inanimate objects and one governing living things. At the time
it was propounded, vitalism represented an attempt to retain some of the
mystic wonder with which religions viewed the phenomenon of life, while
at the same time incorporating aspects of the newly emerging sciences. With
the possible exception of perspiration, which consists primarily of the simple
chemicals water and salt, nearly every chemical associated with life could not,
it seemed, be produced outside a living organism.
In vino, veritas—in wine, there is truth. The discovery of the truth of the
almost-miraculous procedure that turns grape juice into wine is a story that
reaches far back into history, and climaxes with the birth of biochemistry in
the nineteenth century.
Wine has been around for almost ten thousand years, and its manufac-
ture amazed the ancients. In fact, so astounding was the transformation of
grape juice to wine that, in the Middle Ages, it effected a transformation in
the chemical theory of the time, which held that the world was composed of
four elements: earth, air, fire, and water. The process by which grape juice
became wine must have involved a quinta essencia, a fifth element, which
reflected and shaped the unique form of living matter. This fifth element
uniquely characterized the life that possessed it, and our word “quintessen-
tial” reflects this characterization.
Fermentation, the process by which grape juice becomes wine, con-
tinued to be studied by many of the great scientists of the eighteenth and
nineteenth centuries. The brilliant French chemist Lavoisier showed that
the addition of a small amount of yeast to a sugar solution resulted in the
production of alcohol, thus classifying fermentation as a chemical reaction.
Since this reaction did not occur without the yeast, it was clear that the yeast
played a vital role in the process.
But what kind of a role was yeast playing? The German chemist Justus
von Liebig, one of the founders of organic chemistry, held that the role of the
yeast was to emit vibrations that accelerated the chemical reactions. At ap-
proximately the same time, the German biologist Theodor Schwann and the
French inventor Charles Cagniard de la Tour discovered that yeast was actu-
ally a living entity. This led to the biological theory of fermentation, in which
sugar was ingested by the yeast, and alcohol and carbon dioxide excreted.
Louis Pasteur, one of the undoubted titans of science, had helped rescue
a tottering French wine industry by his discovery that bacteria could spoil
wine. Pasteur’s investigations resulted in the realization that yeast was com-
posed of living cells, and that the transformation of sugar into alcohol and
carbon dioxide was an integral part of their life. However, Pasteur believed
that this transformation depended upon the living condition of the yeast.
This was a somewhat unusual conclusion to reach for the man who had ef-
fectively destroyed the vitalistic theory of spontaneous generation—that life
could simply arise from nonliving matter.
The ultimate resolution of the question came in 1897. Two brothers,
Hans and Eduard Buchner, through a series of carefully designed and con-
structed experiments, managed to isolate zymase, the enzyme responsible
for transforming sugar into alcohol. Crushing the yeast cells and pressing
them through filter paper, they obtained an extract that, although clearly
not alive, was able to
convert sugar to alcohol. This experiment led eventually to the realization
that the processes of life were essentially chemical in nature, and that the
role of enzymes, of which zymase was one, was to accelerate the chemical
transformations that enabled living cells to transform raw materials into us-
able products.
Different sciences arise in different ways. Some are the result of a single
dazzling insight, such as Mendel’s formulation of the concept of the gene.
Others are the result of long years of observations and theorizing, false trails
and dead ends. Once the correct path is determined, however, the advances
often come with dazzling swiftness.
Although zymase was not the first enzyme to be discovered, it was the
first whose actions were observed in vitro (literally, “in glass”), without the
necessity of the participation of a living entity. A few years after the isolation
of zymase, Franz Hofmeister formulated the central dogma of biochemistry,
that all cellular processes would be shown to be controlled by enzymes.
Hans Buchner died in 1902, and was thus unable to share in the Nobel
Prize awarded to Eduard in 1907. Eduard continued a distinguished career
as a professor of chemistry until, in 1917, at the age of fifty-seven, he volun-
teered for a second tour of duty in World War I. Captain Eduard Buchner
was wounded on the Eastern Front, and died two days later.
It was apparent to the chemists of the late eighteenth century that the proper-
ties of a substance depended upon its molecular composition. What was not
immediately apparent was that the properties of a substance also depended
upon the molecular architecture.
If one thinks of a house, for example, it is obvious that a house built of
brick will have different properties from a house built of wood. However,
two houses may be built from precisely the same number of bricks and be
radically different—the architecture can create very dissimilar buildings. One
may be light and airy, the other somber and foreboding.
Hints that the same idea might permeate chemistry began to crop up
early in the nineteenth century. One of the first to get an insight into this
was Louis Pasteur.
Pasteur’s first accomplishment as a scientist was to show that crystals
of tartaric acid consisted of two distinct types. One type of crystal rotated
the plane of polarized light to the right, and the other rotated it to the left.
News of this discovery reached Jean-Baptiste Biot, one of
France’s greatest scientists. Biot had spent considerable time investigating the
CHAPTER 4
Matter
Complicated stuff is made of simple stuff, and one of the great quests of
science has been to understand what the simple stuff is, and how it works.
Chemistry is concerned with how the simple stuff fits together to form com-
plicated stuff, but physics is concerned with what the simple stuff is.
Phases of Matter
There are three fundamental phases of matter—solid, liquid, and gas. Al-
though the Greeks didn’t explicitly state it in this fashion, they believed the
world was constructed of four basic elements—earth, air, fire, and water.
The three fundamental phases are represented here—earth is solid, water
is liquid, and air is gaseous. It would be millennia before the atomic theory
revealed precisely what made water a solid (as ice), a liquid (as water), and a
gas (as steam).
A fourth phase of matter—plasma—was discovered in the late nine-
teenth century, but one could make a pretty good argument that the Greeks
had at least a hint of this, because fire—which generally results from heat
energy being applied to a combustible material—is in some sense analogous
to plasma, a gas of ionized particles that responds to, and can generate,
magnetic fields. A flame, too, contains ionized particles.
Water, being the most common substance available—at least, available
in all three phases of matter—was a natural candidate for investigation. One
could see that as winter came, water in rivers and lakes transformed into ice,
and back to water again with the coming of spring. Similarly, heating water
causes it to become steam, which becomes water again when it condenses on
tion was to measure the relation between the pressure applied to a gas and the
volume that it occupied. Using a J-shaped tube, Boyle demonstrated that the
volume of the gas was inversely related to the pressure. Double the pressure
and the volume halved; triple the pressure and the volume was reduced to a
third of the original volume. This relation is known as Boyle’s law.
A century later, the French balloonist Jacques Charles was also interested
in gases, although from a more practical point of view—he was, after all, a
balloonist, and balloons are filled with hot air. Charles was the first balloonist
to ascend to an altitude of more than 3,000 meters (about 10,000 feet). He
was able to accomplish this feat because he had read of Cavendish’s discov-
ery of the much lighter hydrogen, and he realized that the lifting power of
hydrogen would be far greater than that of air.
Charles’s chief contribution was to show the effect of temperature on the
volume of a gas. Again, this is perhaps not surprising in view of the balloon-
ist’s interest in heated gases. He discovered that different gases all expanded
the same fraction of their initial volume when the temperature was raised by
a given amount. For each degree Celsius that the temperature rose above 0°C,
the volume increased by 1/273; for each degree the temperature fell below
0°C, the volume decreased by 1/273. In retrospect, this can be seen as fore-
shadowing the concept of absolute zero—if you keep reducing the volume
by 1/273 for every degree the temperature is lowered, lower it 273 degrees
and the volume is zero, so you can’t lower it any more. And indeed, absolute
zero is almost exactly 273 degrees below 0°C.
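Both gas laws are easy to tabulate. The sketch below applies Boyle's law at fixed temperature and the historical form of Charles's law at fixed pressure, extrapolating the latter down to the vanishing volume 273 degrees below zero (the one-liter starting sample is an illustrative choice):

```python
# Boyle's law at fixed temperature: P * V stays constant.
# Charles's law at fixed pressure: V = V0 * (1 + t / 273), t in Celsius.
V0 = 1.000  # liters of gas at 0 C and 1 atmosphere (illustrative sample)

for P in (1.0, 2.0, 3.0):
    V = V0 / P
    print(f"Boyle:   P = {P:.0f} atm -> V = {V:.3f} L (P*V = {P * V:.3f})")

for t in (100, 0, -100, -273):
    V = V0 * (1 + t / 273)
    print(f"Charles: t = {t:4d} C -> V = {V:.3f} L")
# At t = -273 the predicted volume reaches zero: absolute zero.
```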
Charles did not actually publish this result, but it was later discovered
(and published) by Joseph Gay-Lussac, who was by a curious coincidence
a fellow balloonist! Ballooning was not only a passion with Charles, and
the source of his scientific reputation, it also saved his life. He had the bad
luck to be in the Tuileries when it was invaded by an angry mob during the
French Revolution, but he had the presence of mind to recount some of his
ballooning anecdotes to the bloodthirsty mob that accosted him. He must
have been an exceptional storyteller, or had some truly fascinating anecdotes,
as they let him go.
carbon, iron, and oxygen, and that all the stuff on Earth was made up of
these. But what about the heavens?
Meteors had been observed as far back as prehistory, but they were felt
to be connected with the atmosphere—the word “meteor” is derived from
the Greek word for “atmospheric.” It wasn’t until the great meteor shower of
November 1833 that it was realized that the origin of meteors was extrater-
restrial. Prior to that, in 1807 Professor Benjamin Silliman of Yale University
had performed a chemical analysis of a meteorite and shown that it contained
iron, so it was known that at least some of what made up the heavens was
made of the same stuff that made up the Earth. But the big breakthrough
occurred half a century later.
SPECTROSCOPY
With the atomic theory firmly in place by the end of the nineteenth century,
it was thought that the ultimate constituents of matter had been discovered.
The prevailing view of what an atom would prove to be, if indeed it were
ever possible to actually see one, was that it would be a small, hard, feature-
less sphere. Meanwhile, the nature of electromagnetic energy occupied the
attention of several physicists.
James Clerk Maxwell had shown that electromagnetism could be regarded as
waves, but in the late 1870s, William Crookes discovered that the rays emit-
ted by the cathode of a vacuum tube could be deflected by a magnetic field.
This convinced Crookes that these rays were actually particles carrying an
electric charge. Two decades later, J. J. Thomson was able to show that these
rays could also be affected by an electric field, and were therefore definitely
particles. By careful experimentation, Thomson was able to go even further,
measuring the charge-to-mass ratio of the particles. From this he deduced
that the particles, soon to be known as electrons, were extremely small, hav-
ing approximately 1/1837 the mass of a hydrogen atom. Since the hydrogen
atom was the smallest possible atom, electrons were substantially smaller
than atoms. The field of subatomic physics had dawned.
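Thomson's ratio can be checked against modern numbers in a line or two of arithmetic:

```python
# Thomson's conclusion, checked against the modern electron mass.
m_hydrogen = 1.6735e-27            # kg, mass of a hydrogen atom
print(f"m_H / 1837 = {m_hydrogen / 1837:.3e} kg")
print("modern value:  9.109e-31 kg")
```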
One of Thomson’s assistants was Ernest Rutherford, who had lived the
early portion of his life on a potato farm in New Zealand. Rutherford re-
ceived news of the offer of a scholarship at Cambridge while digging potatoes
on his father’s farm. Flinging aside his shovel, he declared, “That’s the last
potato I’ll dig,” and set sail for England.
Rutherford was initially interested (as was practically everyone else) in
the emissions given off by radioactive material. He embarked upon the study
of how these rays bounced off thin sheets of metal. He fired alpha particles,
which had a positive electric charge, at a sheet of gold only two thousand
atoms thick. When some of the alpha particles bounced almost straight back,
Rutherford realized that there had to be a concentrated region of positive
charge somewhere in the atom, as it would take a positive charge to repel the
positively charged alpha particles. However, since the vast majority of the
alpha particles passed straight through the sheet of gold without being de-
flected at all, Rutherford realized the atoms must be mostly empty space. He
therefore proposed a model of the atom in which the positive charges (pro-
tons) were packed tightly into a tiny nucleus at the center, surrounded by
electrons in the outer layers.
One of Rutherford’s assistants was Niels Bohr, a young Danish scientist
who had migrated to Rutherford after working with Thomson. The atomic
model proposed by Rutherford had a number of deficiencies, one of which
was its inability to explain why each atom had its telltale fingerprint of spec-
tral lines. Using the newly developed quantum theory, Bohr made a radical assumption: the electrons circled the nucleus like planets circling the sun, but quantum theory required that their orbits could only occur at certain specific distances from the nucleus—"in-between" orbits simply were not allowed. Bohr's theory explained the spectral lines occurring in the hydrogen atom, and Bohr's picture of the atom is, with some modifications, basically the one held today.
In the early twentieth century, the best way to assure yourself of being
on the short list for a Nobel Prize was to be one of Thomson’s assistants.
Thomson himself received a Nobel Prize, as did Rutherford, Bohr, and five
other Thomson assistants. Rutherford and Bohr also did yeoman service in
helping Jewish scientists escape from Nazi Germany. When Denmark fell
to the Germans, Bohr helped most of the Danish Jews escape Hitler’s death
camps. In 1943, Bohr himself fled Denmark to Sweden, then flew in a tiny
plane to England, nearly dying en route from lack of oxygen. From there he
went to the United States, where he was one of the physicists at Los Alamos
who helped to develop the atom bomb.
As the nineteenth century came to a close, physicists around the world were
beginning to feel their time had come and gone. The eminent physicist
Philipp von Jolly advised his students to pursue other careers, feeling that
the future of physics would consist of the rather mundane task of measuring
the physical constants of the Universe (such as the speed of light) to ever-
increasing levels of accuracy.
Still, there were minor problems that remained unresolved. One of the unsettled
questions concerned how an object radiates. When iron is heated on a forge,
it first glows a dull red, then a brighter red, and then white; in other words,
the color changes in a consistent way with increasing temperature. Classical
physics was having a hard time accounting for this. In fact, the prevailing
Rayleigh–Jeans theory predicted that an idealized object called a blackbody would emit ever more energy at ever-shorter wavelengths, an infinite amount in total. The shortest wavelengths of light lie in the ultraviolet; the failure of the Rayleigh–Jeans theory to predict a finite energy for a radiating blackbody came to be known as the "ultraviolet catastrophe."
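In modern notation, the Rayleigh–Jeans prediction for the intensity of radiation emitted at wavelength λ by a blackbody at absolute temperature T is

\[
B_\lambda(T) \;=\; \frac{2ckT}{\lambda^{4}},
\]

where c is the speed of light and k is Boltzmann's constant. Because the wavelength appears raised to the fourth power in the denominator, the predicted emission grows without limit as the wavelength shrinks toward zero. That divergence is the catastrophe.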
The Rayleigh–Jeans theory operated under a very commonsense premise—that energy could be radiated in any amount whatsoever. An analogy would
be to consider the speed of a car; the car should be able to travel at all veloci-
ties up to its theoretical limit. If a car cannot go faster than 100 miles per
hour, for instance, it should be able to move at 30 miles per hour, or 40 miles
per hour, or 56.4281 miles per hour.
One day in 1900 the German physicist Max Planck, one of those advised
by von Jolly to consider another major when he entered the university, made
a bizarre assumption in an attempt to escape the ultraviolet catastrophe. In-
stead of assuming that energy could be radiated in any amount, he assumed that energy at a given frequency could only be emitted in whole-number multiples of a basic unit, the quantum. Continuing the analogy with the speed of the car, Planck's hypothesis would be something like allowing only speeds that were multiples of 5, such as 25 miles per hour or 40 miles per hour. He was able to show almost immediately that this
counterintuitive hypothesis resolved the dilemma, and the radiation curves
he obtained matched the ones recorded by experiment. That day, while
walking with his young son after lunch, he said, “I have had a conception
today as revolutionary and as great as the kind of thought that Newton had.”
His colleagues did not immediately think so. Planck was a respected
physicist, but the idea of the quantum—energy existing only at certain
levels—was at first not taken seriously. It was viewed as a kind of mathemati-
cal trickery that resolved the ultraviolet catastrophe, but did so by using rules
that the real world did not obey. Planck’s idea languished for five years, until
Einstein used it in 1905 to explain the photoelectric effect. Eight years later,
Niels Bohr used it to explain the spectrum of the hydrogen atom. By 1918 Planck had won a Nobel Prize, and within two more decades quantum mechanics
had become one of the most fundamental theories of the real world, explain-
ing the behavior of the world of the atom and making possible many of the
high-tech industries of today.
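In modern terms, Planck's bizarre assumption can be stated in a single line: radiation of frequency ν can only be emitted in energy packets of size

\[
E \;=\; h\nu, \qquad h \approx 6.63\times10^{-34}\ \text{joule-seconds},
\]

where h is now called Planck's constant. The blackbody curve that follows from this assumption, Planck's law,

\[
B_\lambda(T) \;=\; \frac{2hc^{2}}{\lambda^{5}}\;\frac{1}{e^{hc/\lambda kT}-1},
\]

agrees with the Rayleigh–Jeans formula at long wavelengths but remains finite, as experiment demands, at short ones.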
With the coming of the Nazis, German science suffered severely. Many
of the leading scientists were either Jewish or had Jewish relatives, and fled
the country. Many others reacted with abhorrence to the Nazi regime, and
also departed. Planck, although deploring the Nazis, decided to stay in
Germany. It was to be a tragic decision. In 1945, Planck’s younger son was
executed for his part in the July 1944 plot, the unsuccessful attempt by several members of the German armed forces to assassinate Hitler.
When John Dalton first proposed his atomic theory, the atoms he envisioned
were immutable, and the atoms of any particular element were identical in
shape, size, and mass. As the twentieth century began to unfold, and scien-
tists became able to discern the structure of the atom, the validity of these
hypotheses became open to doubt.
The first of Dalton’s atomic characteristics to tumble was the idea that
atoms were immutable. Interestingly enough, the hypothesis of immutabil-
ity was one of the most noteworthy differences between Dalton’s atomic
theory and the ideas of the old alchemists. The alchemists felt that the ele-
ments could be transformed into one another, and a great deal of effort was
expended in a futile search for the “Philosopher’s Stone,” whose touch would
change lead into gold. In 1902, Ernest Rutherford and Frederick Soddy
conducted a series of experiments with the newly discovered radioactive
materials. They showed that radioactive elements were subject to spontane-
ous decay; radioactivity consisted of an emission of particles whose absence
actually transformed the radioactive element into a different element. With
remarkable insight, Rutherford and Soddy suggested that this transformation
took place at a subatomic level. Their discovery necessitated a revision in
the atomic theory, and was eventually to lead to Rutherford’s views on the
existence of an atomic nucleus.
Soddy spent the most productive portion of his career working on
phenomena associated with radioactive decay. He helped discover that lead
was the end product of all radioactive decay series. He also devised a law to
explain radioactive decay, called the radioactive displacement law.
At the time, scientists had noted that two different types of particles
could be emitted during radioactive decay, which they had named alpha
and beta particles. Soddy observed that when an alpha particle was emitted,
the atomic weight of the element from which the particle was emitted decreased by four and the nuclear charge decreased by two. With the aid of hindsight, we can
see that this is explained if an alpha particle consists of two protons and two
neutrons, but the neutron would not be discovered for almost twenty years.
When a beta particle was emitted, the atomic weight did not change, but
the nuclear charge increased by one. It would later be understood that this occurs because a neutron in the nucleus transforms into a proton and an electron, the beta particle itself, leaving the atomic weight essentially unchanged while raising the charge by one.
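In modern nuclear notation, which did not yet exist in Soddy's day, the displacement law is easy to illustrate. An alpha decay such as

\[
{}^{238}_{92}\mathrm{U} \;\longrightarrow\; {}^{234}_{90}\mathrm{Th} \;+\; {}^{4}_{2}\mathrm{He}
\]

lowers the atomic weight by four and the nuclear charge by two, while a beta decay such as

\[
{}^{234}_{90}\mathrm{Th} \;\longrightarrow\; {}^{234}_{91}\mathrm{Pa} \;+\; e^{-}
\]

leaves the atomic weight essentially unchanged and raises the charge by one.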
X-RAY CRYSTALLOGRAPHY
New Age adherents are only the latest of a long line of people to be fascinated
by crystals. The beauty and symmetry of many forms of crystals, to say noth-
ing of their rarity, have caused them to be highly valued for thousands of
years, as well as being suspected of having mystical properties.
The scientific investigation of crystals can be traced to Nicolaus Steno, a
seventeenth-century Danish scientist who was one of the first to suggest that
fossils were the petrified remains of long-dead animals. Steno observed that
when a crystal broke, it did so not in random pieces, but in straight planes
that always met at characteristic angles. This observation would later become
known as the first law of crystallography.
ATOMIC NUMBERS
Long before the concept of a “Dream Team” had come into existence, the
great experimental physicist Ernest Rutherford had assembled his version
of one. Rutherford, his coworkers, and students represented the cream of
the early-twentieth-century crop of physicists, winning no fewer than seven
Nobel Prizes.
None of those Nobel Prizes went to Henry Moseley, whom many feel
might have been the greatest member of the Dream Team. Moseley joined
Rutherford at Manchester University in 1910, after having excelled as a stu-
dent at Eton and Oxford. After getting his feet wet studying beta emission
from radium, Moseley then began an X-ray study of the elements.
X-rays had been discovered less than twenty years earlier by Wilhelm
Roentgen, but already scientists were taking advantage of their extremely
short wavelength to probe the structure of matter. Two pioneers in this area
were the father-and-son team of William Henry Bragg and William Law-
rence Bragg. The Braggs had recently discovered that when elements were
excited by X-rays, the spectrum contained several bright lines that served as
a signature of the element.
Any work done with elements was performed against the background
of Mendeleev’s periodic table of the elements. However, there were still
problems with the exact structure of the periodic table. When Mendeleev
originally compiled the table, he arranged the elements in increasing order
of atomic weight. Although there was no way for Mendeleev to know it, ar-
ranging elements in this order was to introduce both errors and complexities
into his periodic table. Because the atomic weight of an element depended on the isotopic composition of the particular sample studied, ordering by weight occasionally placed elements out of their natural sequence. Moseley's X-ray measurements revealed the correct organizing principle: the frequencies of an element's characteristic lines are governed by its atomic number, the positive charge on its nucleus.
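Moseley's result, now known as Moseley's law, takes a strikingly simple form: the square root of the frequency ν of an element's characteristic X-ray line is proportional to its atomic number Z, less a small screening constant σ,

\[
\sqrt{\nu} \;=\; k\,(Z - \sigma),
\]

where k and σ are fixed for a given line. Plotting the square root of the frequency against the atomic number gives a straight line, immediately exposing any element that is out of order or missing altogether.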
ANTIMATTER
Paul Dirac's relativistic equation for the electron had a startling consequence: it predicted a particle with the electron's mass but the opposite electric charge. Such a particle had never been detected, and Dirac's results were greeted
with skepticism. However, events shortly vindicated Dirac. Carl Ander-
son was an American physicist who was studying cosmic rays, immensely
energetic particles generated in interstellar space. These particles were so
energetic that they could not be studied with ordinary cloud chambers. An-
derson inserted a lead plate to slow down the particles, thus making them
accessible to study. One day Anderson was examining the track of a particle
that appeared to be identical to an electron, but where an electron curved
in one direction in response to a magnetic field, Anderson’s particle curved
in the opposite direction. Anderson’s particle was the first antimatter to be
discovered, and it was indeed the particle that Dirac's equations had predicted; it would be named the positron.
When matter and antimatter meet, they annihilate each other in a burst
of energy. One of the prevailing mysteries of physics is why the Universe is
mostly made of ordinary matter. Why don’t entire galaxies made of antimat-
ter exist, with antistars, antiplanets, and perhaps antipeople? In 1995 scien-
tists created antihydrogen, an atom with an antielectron orbiting around an
antiproton. The atom lasted for only a few millionths of a second before it
was annihilated in a collision with ordinary matter.
NUCLEAR FISSION
Leo Szilard, who had been born in Hungary of Jewish parents, was one of the
first to see the writing on the wall. A brilliant physicist whose work had taken
him to a position on the faculty of the University of Berlin, he recognized
that there was no future for him in Germany after Hitler came to power, and
went to England. In 1934, while walking the streets of London, he invented
the concept of the chain reaction. His original idea involved the fission of the
metal beryllium into helium atoms, and although he could not demonstrate
the process, he could describe it, which he did, in the process taking out a patent on the procedure. As he could see the military potential inherent in
the idea, he kept the patent secret.
Back in Germany, Lise Meitner felt relatively safe even though she was
Jewish, as she was an Austrian national. She stayed in Germany in order to
continue her research collaboration with Otto Hahn, with whom she was
to work for more than thirty years. One of the earliest women to pursue
a career in science, she had been the victim of antifeminist prejudices, as
the director of the laboratory in which she worked initially refused to let
her in the laboratory when men were working. She and Hahn were deeply
involved in the process of studying the behavior of uranium under neutron
bombardment when Hitler annexed Austria. Despite the fact that Meitner
had served as a nurse in the Austrian Army in World War I, she knew that
Germany was now extremely dangerous for her. Dutch scientists enabled her
to come to the Netherlands without a visa, and Niels Bohr then helped her
obtain a position in Sweden.
Fritz Strassmann replaced Meitner as Hahn’s partner, and in early 1939
they published a paper describing the results of their experiments with ura-
nium. Initially, they suspected that the bombardment had created the radioactive element radium, which is chemically similar to barium. As a result, they added barium to the bombarded uranium, expecting any radium present to separate out along with it. However, the radioactivity could not be separated from the barium at all; chemically, the radioactive product was barium. When they published the results of their work, they carefully avoided the suggestion that the uranium atom had fissioned, with the lighter barium as a result.
Reading their results in Stockholm, and of course being familiar with
much of the work, Meitner immediately reached the conclusion that the
uranium atom had indeed fissioned, and was the first to publish a report
concerning the possibility. Szilard, now in the United States, recognized that
uranium fissioning into barium made a much more realistic candidate for a
chain reaction than his proposed beryllium-helium reaction. He immediately
contacted two other expatriate Hungarian physicists, Eugene Wigner and
Edward Teller. The military potential that had led Szilard to keep his patent
secret was now grimly apparent. The three physicists visited Albert Einstein,
perhaps the only scientist in the world with the power to influence policy,
and persuaded Einstein to write a carefully worded letter to President Frank-
lin Roosevelt apprising him of the situation.
Roosevelt was sufficiently impressed that he took the initial steps that
were to culminate in the Manhattan Project, the multiyear, two-billion-
dollar effort that would produce the first atomic bomb.
Despite the plethora of scientific talent at America’s disposal, all those
involved realized that their German counterparts included not only Hahn
but Werner Heisenberg, undoubtedly one of the most brilliant physicists in
the world. Although the actual story is still not completely known, none of
the German scientists who worked on the German atom bomb project were
Nazi sympathizers, and as a result the project never received top priority
from Hitler. Hahn and Heisenberg were taken into custody by American
forces at the end of the war in Europe, and it was while being interned in
England that they were notified of the bombing of Hiroshima. Hahn felt
personally responsible, and for a while considered suicide. Like many (but
by no means all) of the scientists associated with nuclear fission, he became an outspoken opponent of nuclear weapons.
Scientists tend to view the world either visually or symbolically, and there
have been brilliant scientists of each type. However, as physics probed ever-
deeper into the subatomic world in the first few decades of the twentieth
century, it became harder and harder to visualize the phenomena that were
occurring. As a result, some physicists, of which Werner Heisenberg was
one, preferred to attempt to treat the subatomic world through symbolic
representation alone.
Heisenberg had the good fortune of being one of Niels Bohr’s assistants,
and as a result was thoroughly familiar with Bohr’s “solar system” model
of the atom. However, Bohr’s model was running into certain theoretical
difficulties, and several physicists were trying to resolve them. One was
Erwin Schrödinger, whose solution entailed treating the subatomic world
as consisting of waves, rather than particles. Heisenberg adopted a differ-
ent approach. He devised a mathematical system consisting of quantities
known as matrices, which could be manipulated in such a fashion as to
generate known experimental results. Both Schrödinger’s and Heisenberg’s
approaches worked, in the sense that they accounted for more phenomena
than Bohr’s atomic model. In fact, the two theories were later shown to be
equivalent, generating the same results using different ideas.
In 1927, Heisenberg was to make the discovery that would not only win
him a Nobel Prize, but would forever change the philosophical landscape. In
the late eighteenth century the French mathematician Pierre-Simon Laplace made
a statement that would characterize scientific determinism. Laplace stated
that, if one knew the position and momentum of every object in the Uni-
verse, one could calculate exactly where every object would be at all future
times. Heisenberg’s uncertainty principle states that it is impossible to know
both exactly where anything is and exactly where it is going at the same moment. These
difficulties do not really manifest themselves in the macroscopic world—if
someone throws a snowball at you, you can usually extrapolate the future
position of the snowball and maneuver to get out of the way. On the other
hand, if both you and the snowball are the size of electrons, you’re going to
have a problem figuring out which way to move, because you will not know
where the snowball will go.
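In modern notation, Heisenberg's principle puts a hard lower limit on the product of the uncertainty Δx in a particle's position and the uncertainty Δp in its momentum:

\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2},
\qquad
\hbar = \frac{h}{2\pi} \approx 1.05\times10^{-34}\ \text{joule-seconds}.
\]

Because ħ is so minuscule, the limit is invisible for snowballs but inescapable for electrons.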
Heisenberg’s uncertainty principle is sometimes erroneously inter-
preted as an inability on the part of fallible humans to measure phenomena
sufficiently accurately. Rather, it is a statement about the limitations of
knowledge, and is a direct consequence of the quantum-mechanical view of
the world. As a fundamental part of quantum mechanics, the uncertainty principle forced physicists to abandon the comfortable picture of atoms as tiny, well-defined objects; in Heisenberg's own words, atoms are not things.
If atoms are not things, what are they? Seventy-five years after Heisenberg’s
revelation, physicists—and philosophers—are still struggling with this
question.
The description of Werner Heisenberg given above undoubtedly pres-
ents a picture of an intellectual struggling with deep and profound questions.
It would be hard to reconcile this image with the title of a song by the Roll-
ing Stones: “Street-Fighting Man.” Yet, at the end of World War I, Werner
Heisenberg was indeed a street-fighting man, engaging in pitched battles
with Communists in the streets of Munich after the collapse of the German
government following the war. Perhaps this can be regarded as a youthful
indiscretion, as Heisenberg was only a teenager at the time.
One such classic example was Dalton’s atomic theory. As a result of the
assumption that matter consisted of atoms, it was easy for chemists to work
out the bookkeeping of chemical reactions. A chemist could write down the
equations of a chemical reaction and predict in advance which substances,
and in what quantity, would be produced. However, for more than a century
after Dalton, individual atoms could not conclusively be shown to exist. To
some scientists, Dalton’s theory might have been simply a convenient math-
ematical description, a formalization that would tell you what would happen
without necessarily telling you the way things actually were.
The rise of quantum mechanics in the first quarter of the twentieth
century produced a more sophisticated version of this problem. The math-
ematical formulation of quantum mechanics viewed electrons as probability
waves rather than actual things. To some, this was merely a convenient
mathematical fiction. After all, a bunch of electrons constituted an electric
current, which ran motors, and the electrons in the outer shells of atoms
reacted chemically to form real substances such as water. How could they
not be real themselves?
Albert Einstein championed the “reality” point of view, and to illustrate
the problem, he and physicists Boris Podolsky and Nathan Rosen devised a
thought experiment, which has come to be known as the EPR experiment.
According to the laws of physics, it is possible for two photons (call them A
and B) to be emitted so that the total spin (a quantum-mechanical property)
of the two photons is known. Quantum mechanics dictated that the spin of
a photon is not determined until it is measured, and that the act of measuring
this photon is part of the input that determines the result of the measure-
ment. Consequently, once the spin of photon A has been determined, the
spin of photon B can be calculated.
Einstein, Podolsky, and Rosen objected to this. Before the measure-
ments, neither spin is known. Suppose two groups of experimenters, light-
years apart, set out to measure the spins of these photons. If the spin of
photon A is measured, and seconds later the spin of photon B is measured,
quantum mechanics predicted that photon B would “know” the result of the
measurement on photon A, even though there would not be enough time
for a signal from photon A to reach photon B and tell photon B what its
spin should be!
According to Einstein, this left two choices. One could accept the so-
called Copenhagen interpretation of quantum mechanics, due primarily to
Niels Bohr, that photon B knows what happened to photon A even without
a signal passing between them. Alternatively, one could believe that there is
a deeper reality, manifested in some physical property as yet unfound and
unmeasured, which would supply the solution to the above dilemma without
having to believe in photons possessing faster-than-light intuition. Einstein
died holding firmly to this latter view—the physical property as yet unfound
and unmeasured would be called a “hidden variable.”
In 1964, the Northern Irish physicist John Bell showed that if hidden variables
existed, then experiments could be performed in which mathematical rela-
tionships known as “Bell inequalities” would manifest themselves. Within
a few decades, hardware would be constructed enabling these experiments
to be performed. Since then, sophisticated experiments with ultrafast lasers,
which were not possible during Einstein’s lifetime, have all but shut the door
on Einstein’s belief. Recent versions of what are called “quantum eraser”
experiments have left only the narrowest of loopholes for Einstein’s views to
squeeze through. Maybe it really is true that the Universe is not only stranger
than we imagine—it is stranger than we are capable of imagining.
QUARKS
One of the great triumphs of science is the Standard Model of physics, which
has taken centuries to formulate. It tells us what the components of matter
are, and the forces that enable matter to interact with other matter. Work,
in physics, is a measure of how much of an interaction has taken place (a force acting through a distance), and
energy is the capacity for doing work. A good measure of technological and
scientific progress is how much work we are able to do, and how well we are
able to harness and store energy to do that work.
According to a famous story, while vacationing in the Alps, the physicist William Thomson, who later became known as Lord Kelvin, came upon a man carrying an enormous thermometer, a woman waiting nearby, and engaged the pair in conversation. The man was James Prescott Joule, the woman his wife, and they
were in the Alps on their honeymoon. Joule had devoted a substantial por-
tion of his life to establishing the fact that, when water fell 778 feet, its tem-
perature rose 1 degree Fahrenheit. Britain, however, is notoriously deficient
in waterfalls, and now that Joule was in the Alps, he certainly did not intend
to let a little thing like a honeymoon stand between him and scientific truth.
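Joule's waterfall figure is easy to check with modern constants. Water falling a height h converts an amount of gravitational energy gh per kilogram into heat, and it takes about 4,186 joules to warm a kilogram of water by one degree Celsius, so

\[
\Delta T \;=\; \frac{g\,h}{c_{\text{water}}}
\;=\; \frac{(9.8\ \text{m/s}^{2})(778\ \text{ft}\times 0.305\ \text{m/ft})}{4186\ \text{J/(kg}\cdot{}^{\circ}\text{C)}}
\;\approx\; 0.56\,{}^{\circ}\text{C}
\;\approx\; 1\,{}^{\circ}\text{F}.
\]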
A new viewpoint had been arising in physics during the early portion
of the nineteenth century: the idea that all forms of energy were convertible
into one another. Mechanical energy, chemical energy, and heat energy were
not different entities, but different manifestations of the phenomenon of
energy. James Joule, a brewer by trade, devoted himself to the establishment
of the equivalence between mechanical work and heat energy. These experi-
ments involved very small temperature differences and were not spectacular,
and Joule’s results were originally rejected by journals and the Royal Soci-
ety. He finally managed to get them published in a Manchester newspaper,
which might have published them because Joule’s brother was the paper’s
music critic. Joule’s results led to the first law of thermodynamics, which
states that energy cannot be created nor destroyed, but only changed from
one form to another.
Some twenty years before Joule, a French military engineer named
Nicolas Carnot had been interested in improving the efficiency of steam
engines. The steam engine developed by James Watt was efficient, as steam
engines went, but nonetheless wasted about 95 percent of the heat used
in running the engine. Carnot investigated this phenomenon and discovered
a truly unexpected result: it would be impossible to devise a perfectly efficient
engine, and the maximum efficiency was a simple mathematical expression
of the temperatures involved in running the engine. This was Carnot’s only
publication, and it remained buried until it was resurrected a quarter of a
century later by William Thomson (Lord Kelvin), just one year after his
chance meeting with Joule in the Swiss Alps.
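Carnot's simple mathematical expression survives, in modern form, as a bound on the efficiency η of any heat engine running between a hot reservoir and a cold one, measured on the absolute temperature scale:

\[
\eta \;\le\; 1 - \frac{T_{\text{cold}}}{T_{\text{hot}}}.
\]

A steam engine whose boiler runs at 373 kelvins (the boiling point of water) and which exhausts to surroundings at 293 kelvins can therefore convert at most about 21 percent of its heat into work, no matter how cleverly it is built.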
Carnot’s work was the foundation of the second law of thermodynamics.
This law exists in several forms, one of which is Carnot’s statement concern-
ing the maximum theoretical efficiency of engines. Another formulation of
the second law, due to Rudolf Clausius, can be understood in terms of a
natural direction for thermodynamic processes: a cube of ice placed in a glass
of hot water will melt and lower the temperature of the water, but a glass of
warm water will never spontaneously separate into hot water and ice.
The Austrian physicist Ludwig Boltzmann discovered an altogether dif-
ferent formulation of the second law of thermodynamics in terms of prob-
ability: systems are more likely to proceed from ordered to disordered states
(which explains why a clean room tends to get dirty, but dirty rooms do not
tend to become clean). The first and second laws of thermodynamics seem to
appear in so many diverse environments that they have become part of our
collective “understanding” of life: the first law says you can’t win, and the
second law says you can’t even hope to break even.
Carnot, Joule, and Boltzmann came at thermodynamics from three
different directions: the practical (Carnot), the experimental (Joule), and
the theoretical (Boltzmann). They were linked not only by their interest in
thermodynamics, but by difficult situations bordering on the tragic. Carnot
died of cholera when he was only thirty-six years old. Joule suffered from
poor health and a childhood spinal injury all his life and, though the son of
a wealthy brewer, became impoverished in his later years. Boltzmann was
bipolar, and suffered from depression so severe that despite a rich circle of
family, friends, and admiring students (among whom was Lise Meitner, who
helped discover nuclear fission), he committed suicide because he feared his
theories would never be accepted. Sadly and ironically, his work was recog-
nized and acclaimed shortly after his death.
There are basically two types of discoveries in science. Some discoveries are
carefully planned, the results of well-designed experiments or lengthy obser-
vation of a particular phenomenon. Some discoveries, however, happen al-
most entirely by chance. Perhaps it is not surprising that most of the amusing
anecdotes surrounding a discovery relate to those that happened by chance.
When André-Marie Ampère learned of Oersted's chance discovery that an electric current deflects a nearby compass needle, within a week he had formulated the right-hand rule known to all physics students.
Ampère was the first to attempt a mathematical treatment of electrical and
magnetic phenomena, which would reach fruition with Maxwell’s analysis
some forty years later.
Georg Ohm was affected by the French Revolution only in the most
indirect fashion. Raised in a poor family by a father who was a mechanic,
Ohm’s highest aspiration was to receive an appointment to teach in a uni-
versity. After the French Revolution, the French mathematician Joseph Fourier
had accompanied Napoleon on an expedition to Egypt, where he had de-
vised a novel theory on the nature of heat flow. Ohm decided to apply Fou-
rier’s ideas on heat flow to the flow of electric current in a wire. He was able
to show that the amount of current was directly proportional to the potential
difference and inversely proportional to the amount of resistance in the wire.
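In its familiar modern form, with I the current, V the potential difference, and R the resistance, Ohm's law reads

\[
I \;=\; \frac{V}{R}.
\]

A 12-volt battery driving a circuit with 6 ohms of resistance, for example, produces a current of 2 amperes; double the resistance and the current halves.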
This relationship, known as Ohm’s law, was not initially recognized as
an achievement that would merit a university position. As a result, Ohm was
forced to endure both professional disappointment and financial hardship.
Gradually, his work began to be more appreciated outside his native Ger-
many, and he was made a member of the Royal Society in England. Eventu-
ally, Ohm’s law was recognized (even in Germany) as one of the key results
in the theory of electricity, and he became a professor at the University
of Munich.
Coulomb, Ohm, and Ampère each received one of the highest forms of
praise that the scientific community can bestow, having units of measure-
ment named in their honor. Electrical charge is measured in coulombs,
resistance in ohms, and current in amperes.
There is a curious link connecting Coulomb and Ohm—Henry Cav-
endish, one of the most eccentric scientists who ever lived. Cavendish com-
pletely avoided women, even to the extent of ordering his female servants to
stay out of his way or else they would be fired. He never left his house except
to attend meetings of the Royal Society. He never changed his style of dress,
which was extremely old-fashioned for that era (although everything from
that era seems old-fashioned to us). There is only one portrait of Cavendish
extant, and it looks as if he were wearing clothes that were a century out of
date. Cavendish was a brilliant scientist, but his notebooks and manuscripts
remained hidden for nearly a century after his death. When they were discov-
ered, it was found that he had anticipated both Coulomb’s law and Ohm’s
law, the latter by almost fifty years.
Faraday also possessed a keen intuition regarding the electrical and mag-
netic forces he was studying. Many of the great scientific advances are made
possible by new ways of conceptualizing phenomena. Faraday visualized
electricity and magnetism as consisting of lines of force permeating space,
with stronger forces creating a greater concentration of lines in a particular
region. This method of visualizing electricity and magnetism led to the idea
of a field, a type of mathematical description that occupies a central position
in physics.
Michael Faraday’s parents apprenticed him to a bookbinder in London,
so that he might learn a trade. This turned out to be an ideal situation for
Faraday, as it gave him plenty of opportunity to read, especially about the
subject that most appealed to him, science. In 1812 the famous chemist, Sir
Humphry Davy, gave a series of lectures on chemistry for the general public.
Faraday attended these lectures, and took copious notes. He then wrote the
equivalent of a fan letter to Davy, who was so impressed that he gave Faraday
a job as his assistant. Though lacking the formal education possessed by most
scientists, Faraday was quick to make his mark in both chemistry and phys-
ics. He was made a member of the Royal Society, and when Davy retired,
Faraday was given Davy’s professorship.
At the age of forty-eight, Faraday joined a long line of scientists, includ-
ing both Davy and Isaac Newton, who had suffered a nervous breakdown.
Some of these nervous breakdowns were undoubtedly psychological in na-
ture, such as Boltzmann’s. Quite possibly, though, Faraday’s may have been
brought on by exposure to toxic chemicals, as the chemists of the day had no
idea of the hazards of the chemicals with which they were in daily contact.
SUPERCONDUCTIVITY
Light
Light is not just a phenomenon; it is a metaphor. “Let there be light,” ac-
cording to the Bible, was the first thing God did after creating the heavens
and the earth. Light is seen as good, as contrasted with the evil represented
by darkness.
Probably no question in science has created more controversy over a lon-
ger period of time than the nature of light. Greek and medieval philosophers
alike puzzled over it, alternating between theories that light was a substance
and that light was a wave, a vibration in a surrounding medium.
The debate found two great physicists in the seventeenth century to cham-
pion each viewpoint—perhaps the first of the really great scientific debates—
as it takes great concepts and great scientists to make for a really epic scientific
debate. Isaac Newton, when he wasn’t busy with mathematics, mechanics, or
gravitation, found time to invent the science of optics. Newton believed that
light consisted of tiny particles, as that would explain the nature of reflection
that was even more efficient at turning silver chloride black. This invisible radiation is now known as ultraviolet.
Decades later, Maxwell explained that visible, infrared, and ultraviolet light were all examples of electromagnetic radiation. In
theory, electromagnetic waves of any length should be detectable. Heinrich
Hertz had constructed an electrical circuit that generated an oscillating spark
across a gap between two metal balls. Maxwell’s equations predicted that
such an oscillating spark should produce electromagnetic waves. Using a
simple detector consisting of a loop of wire terminating in two small metal
balls separated by an air gap, Hertz was able to pick up the wave by moving
around the room; at various points a spark would jump across the gap in the
detector loop at the moment another spark was generated in the primary
circuit.
Hertz calculated that the wavelength of the electromagnetic radiation
generated by the spark was 66 centimeters, a million times the wavelength
of visible light. This experiment served to confirm Maxwell’s theories, but
it was to have an even more far-reaching effect. Within fifteen years, Gug-
lielmo Marconi was able to devise a practical way to communicate with these
“Hertzian waves.” Nowadays, “Hertzian waves” are much better known as
radio waves, and Marconi’s invention was, of course, the radio. You may well
have found yourself slightly annoyed by a modern version of Hertz’s experi-
ment when you accidentally generate a spark while your radio is on, as your
radio will then act as a detector for the spark by emitting static.
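Wavelength and frequency are tied together by the speed of light, so Hertz's measurement also fixed the frequency of his waves:

\[
f \;=\; \frac{c}{\lambda} \;=\; \frac{3\times10^{8}\ \text{m/s}}{0.66\ \text{m}} \;\approx\; 4.5\times10^{8}\ \text{Hz},
\]

about 450 megahertz, squarely within what we now call the radio band.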
William Herschel was unquestionably the outstanding observational as-
tronomer of his day, but of course his astronomy was limited to visible light.
He would have been extremely pleased to know that the infrared radiation
he discovered has been exploited in observational astronomy, as have ultra-
violet radiation and radio waves. In fact, observational astronomy is currently
conducted throughout the entire electromagnetic spectrum. Many of the
great discoveries in astronomy over the past three decades have been made
by telescopes built to observe electromagnetic radiation outside the range of
visible light. WMAP, the Wilkinson Microwave Anisotropy Probe, placed
the age of the Universe at 13.8 billion years and its composition at roughly 5 percent ordinary matter and about 25 percent dark matter, with the remainder the mysterious dark energy. SOFIA, the Stratospheric Observatory for Infrared Astronomy, detected helium hydride, believed to be the first type of molecule formed in the Universe.
The phrase “the Doppler effect” sounds like the title for a thriller or science-
fiction movie. It is a phenomenon familiar to all of us, and it lies at the heart
of a variety of everyday—and not-so-everyday—devices.
Christian Doppler was an Austrian physicist in the first half of the
nineteenth century. It was not a good time to be an Austrian physicist, and
Doppler had a difficult time obtaining an academic position. As many did
at that time, he made plans to immigrate to the United States, the land of
opportunity. Just as he was getting ready to leave, he received an offer of a
professorship in Prague. As a result, Doppler stayed in Europe.
The Doppler effect was originally discovered in conjunction with sound
waves. In the seventeenth century, it had been noticed that sound failed to travel through a vacuum, but would travel through air and water. Such behavior was characteristic of waves, and in that same century Marin Mersenne had computed the speed of sound in air to an accuracy of 10 percent. Newton had actually been the first to attempt a mathematical analysis
of sound, and by the early nineteenth century the behavior of sound waves
as generated in organ pipes or vibrating strings was fairly well understood.
Doppler, along with many others, had observed that the pitch of a
sound, which is the aural perception of its frequency as a wave, varies if the
sound is being generated from a moving source. The sound seems more
highly pitched as the moving source approaches the listener, and more deeply
pitched as it moves away from the listener. This phenomenon can easily be
observed by listening to the whistle of an approaching train or, since trains
are not as common as they used to be, the siren of an approaching police car.
Doppler correctly reasoned that the moving source should impart its
motion to the waves. As the moving source approaches, the wave crests
reach the listener more rapidly, thus increasing the frequency and raising the
pitch. As the moving source departs, the wave crests take longer to reach the
listener, with the opposite effect.
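Doppler's reasoning leads to a simple formula for a source moving toward the listener:

\[
f' \;=\; f\,\frac{v}{v - v_{s}},
\]

where f is the emitted frequency, f′ the frequency heard, v the speed of sound, and v_s the speed of the source. A train whistle moving at 34 meters per second (about 76 miles per hour), with sound traveling at 340 meters per second, is heard at 340/306, or about 1.11 times, its true frequency while approaching, roughly a whole tone higher; as the train recedes, the pitch drops correspondingly.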
Once Doppler had worked out the equations, one of the most charming experiments in the history of science was conducted to test his conclusions; it is usually credited to the Dutch scientist Buys Ballot. Several trumpeters sat on top of a flat car that was being pulled by a locomotive. The trumpeters were instructed to play a particular note, and the train would proceed at a set speed either toward the listeners or away from them. Two days of experiments confirmed Doppler's deductions.
Although the Doppler effect was first applied to sound waves, it can be
used for light waves as well. In everyday life, the Doppler effect is used in
“speed guns,” which can determine not only the speed of a thrown fastball,
but also the speed of a moving car. More importantly—at least from the
standpoint of this book—is that the Doppler effect is responsible for one of
the most profound deductions in the history of science. In the 1920s, Edwin
Hubble observed that the light from the majority of galaxies has been “red
shifted”—that is, the observed frequency of the light waves had decreased.
From this he deduced that most galaxies are moving away from the Earth,
and was able to derive a relationship between their speed of recession and
their distance from us. This has not only helped us determine the size and
age of the Universe, it also was an important step on the road to realizing
that the Universe began in a big bang.
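The relationship Hubble derived is now known as Hubble's law: the recession velocity v of a galaxy is proportional to its distance d,

\[
v \;=\; H_{0}\,d,
\]

where the constant of proportionality, the Hubble constant, is measured today at roughly 70 kilometers per second per megaparsec. Running the expansion backward, the reciprocal of the Hubble constant sets the scale of the age of the Universe.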
As of this writing, the Doppler effect is one of several ways to search for
planets outside the solar system. The gravitational effect of large planets on a
star causes that star to wobble slightly, and the light from that star is Doppler-
shifted by the wobble. This can be picked up by sensitive instruments. As
of this writing, almost five thousand exoplanets have been discovered, nearly
one thousand of them through the use of the Doppler effect.
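The size of the effect shows why the instruments must be so sensitive. A star moving toward or away from us at speed v shifts each wavelength λ of its light by a fraction

\[
\frac{\Delta\lambda}{\lambda} \;=\; \frac{v}{c}.
\]

Jupiter, for example, swings the Sun around at only about 12.5 meters per second, so an alien astronomer watching the Sun would have to detect a fractional wavelength shift of roughly four parts in a hundred million.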
two speeds were identical, implying that the Earth was standing still relative to the ether. This
result was obviously wrong, since it had been known from the time of Galileo
that the Earth moved through the Universe.
This totally unexpected result forced scientists to think again about the
nature of light. Two European physicists, George Fitzgerald and Hendrik
Lorentz, found that they could resolve the problem under the apparently
absurd assumption that objects contracted as they moved more rapidly. This
would explain the Michelson–Morley result, as the measuring device would
contract in the direction the Earth was moving, and thus the speed of light
would appear to be the same no matter how the Earth moved.
In 1905, Albert Einstein published his theory of special relativity, which
explained how the hypothesis offered by Fitzgerald and Lorentz was war-
ranted. Einstein assumed that the laws of physics were the same in any two
systems moving at a constant velocity relative to each other. This forced the
speed of light to be the same in any two such systems. In the same year,
known to physicists as Einstein’s “Miracle Year,” he also explained the photo-
electric effect by assuming that light behaved as a particle. Special relativity
showed that there was no need for the ether to exist as a frame of reference,
and the particle theory of light eliminated the need for a medium through
which light waves would travel. The current thinking of physicists is that
light is both wave and particle; and the resolution of this apparent paradox
is one of the foundations of modern physics.
Michelson’s interferometer has proved to be one of the most important
measuring devices ever invented. Indeed, one of the staples of modern radio
astronomy is the technique of VLBI (Very Long Baseline Interferometry), in
which radio telescopes at opposite ends of the Earth are linked by computer
to resolve the structure of extremely distant objects. Interferometers are
similar to telescopes in that the larger the lens, the more powerful the tool,
and VLBI enables astronomers to create interferometers whose “lenses” are
effectively as large as the Earth itself.
Other Forces
By the last decade of the nineteenth century, physicists had identified two
major forces—gravity and electromagnetism. There were similarities between
the two—both could easily be seen to be manifested in everyday life, and
both obeyed an inverse-square law; objects three times as distant from one
another exerted forces that were only one-ninth (1 divided by 3 squared) as strong.
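The parallel is plain when the two laws are written side by side. Newton's law of gravitation and Coulomb's law both place the square of the distance r in the denominator,

\[
F_{\text{grav}} \;=\; G\,\frac{m_{1}m_{2}}{r^{2}},
\qquad
F_{\text{elec}} \;=\; k\,\frac{q_{1}q_{2}}{r^{2}},
\]

which is why tripling the separation cuts either force to one-ninth of its value.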
When something is passed from father to son for several generations, the
assumption is that what is being passed is either control of the family busi-
ness or the old homestead. In the case of the Becquerels, however, it was the
professorship of applied physics at the Museum of Natural History in Paris.
In 1929, the Swiss physicist Felix Bloch studied these substances, and
formulated a theory describing how electrons populate different energy bands. According to this band theory, under normal circumstances the electrons of a semiconductor's atoms are trapped in what is called the valence band. However, if the semiconductor receives the right amount of energy from an outside source, electrons can jump across a forbidden band (an energy gap where no electron states exist) into the conduction band, and this enables current to pass.
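This picture explains, at least roughly, why a semiconductor is so sensitive to energy input. The fraction of electrons with enough thermal energy to jump a gap of width E_g at absolute temperature T scales as

\[
e^{-E_{g}/2kT},
\]

where k is Boltzmann's constant; for silicon, whose gap is about 1.1 electron volts, even modest heating or illumination changes the supply of conduction electrons dramatically.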
In 1947, three physicists at Bell Telephone Laboratories, John Bardeen,
Walter Brattain, and William Shockley, created the first transistor. The
name “transistor” was derived from the fact that the semiconductor could
be manipulated to transfer current across a resistor. A transistor could not only rectify current (allow it to pass in one direction only); even more im-
portantly, it could amplify current—if a small electrical charge was passed to
the transistor, a larger electrical charge would be emitted. These properties
allowed the much simpler, stabler, and cheaper transistor to perform func-
tions that had previously been done by the more complicated, erratic, and
expensive vacuum tubes.
This is not a book about invention, but the importance of the transis-
tor to science, as well as to everyday life, simply cannot be overestimated. It
not only revolutionized consumer electronics, it made possible many of the
great achievements of science and engineering in the last half of the twentieth
century. The first transistor was an ungainly affair consisting of solid-state
devices soldered to a plate with protruding wires. By the end of the twentieth
century, millions of transistors would be combined in devices called micro-
processors, which are the “brains” behind practically every piece of advanced
electronics produced today.
The importance of the transistor was quickly recognized, as the 1956
Nobel Prize in physics went to Bardeen, Brattain, and Shockley. Bardeen
was later to share a second Nobel Prize for his share in developing the BCS
(Bardeen–Cooper–Schrieffer) theory of superconductivity. Shockley was
to later become infamous for taking the controversial position that racial
differences on IQ tests might be the result of genetic factors, as opposed to
environmental ones.
Life
Varieties of Life
Even before written records, man undoubtedly noticed the other forms of
life that could be seen by the naked eye—plants, insects, animals, fish, and
birds. Life existed in abundant varieties, and an obvious first move was to
categorize the various forms of living organisms.
The classification system devised in the eighteenth century by the Swedish botanist Carl Linnaeus survives to this day, although it has been modified somewhat to accommodate not only the new animal and plant species that
are continually being discovered, but life-forms such as bacteria and viruses
of which Linnaeus was entirely unaware.
Linnaeus lived in an era in which religion, philosophy, and science were
not as separate as they are today, and his life reflects this. He believed that
he had been directed by God to oversee this project, and thought of people
who did not agree with him as heretics. Despite these traits, reminiscent of a
religious zealot, he was known to be a caring and inspirational teacher who
trained many future scientists to continue his work.
If there is one quality associated with science in the public mind, it is doubt-
less intellectual brilliance. A person who is brilliant is sometimes described
as “an Einstein,” and to indicate that you don’t need to be a genius to do
something, we say, “It doesn’t take a rocket scientist to . . .” We know that
the achievements of Newton and Einstein transcend our abilities—even if we
had been given Kepler’s data or the results of the Michelson–Morley experi-
ment, we wouldn’t have come up with the theory of universal gravitation (in
the former case) or the theory of special relativity (in the latter).
So it may be surprising that one of the most significant achievements
in the history of science could have been performed by anyone with good
eyesight who had happened to look in the right place at the right time. It
was actually a Dutch draper, Anton van Leeuwenhoek, who had the curios-
ity to examine a drop of rainwater under the lens of a microscope, and it
was Leeuwenhoek who thus became the first person to observe the world of
bacteria—the invisible zoo.
Leeuwenhoek was an extremely competent observer who thought noth-
ing of observing the same object as many as a hundred times to make sure
that he had assimilated all the details. His skill was such that he had become
a “foreign correspondent” for London’s Royal Society, communicating his
observations along with detailed drawings. Nonetheless, his first description
of the invisible zoo must have been hard for the members of the Royal So-
ciety to credit. “They stop, they stand still as ’twere upon a point, and then
turn themselves around with that swiftness, as we see a top turn around,
the circumference they make being no bigger than a fine grain of sand.”
The Royal Society promptly commissioned Robert Hooke, England’s finest
microscopist, to build a microscope sufficiently powerful so that Leeuwen-
hoek’s findings could be confirmed—as, of course, they were.
It would probably not be a difficult task to write a book entitled Louis Pas-
teur’s Top 20 Contributions to Science. Pasteur appears here yet again, as he
is undoubtedly one of the seminal figures in the history of science. Here
we examine one of Pasteur’s lesser-known contributions, which would have
made the career of almost any other scientist.
Pasteur was one of those rare individuals who view adversity as simply
another challenge. After suffering a stroke in 1868 that almost killed him,
and left him partially paralyzed, he still managed to produce some of his fin-
est achievements.
By this time, the process of respiration was fairly well understood. It was
known that plants use sunlight to manufacture carbohydrates from water and
carbon dioxide, and produce oxygen as a waste product during this process.
Animals breathe the oxygen and consume the carbohydrates, producing car-
bon dioxide as a waste product. This is the great cycle involving plants and
animals, the two primary kingdoms of life. It was felt that all forms of animal
life were oxygen-breathers.
One of Pasteur’s major interests throughout his life was the process of
fermentation. It had been Pasteur who discovered that fermentation was
caused by microorganisms. He had also saved the French wine industry by
recommending that wine be heated in order to sterilize the organisms that
were responsible for souring the wine.
In 1872, Pasteur happened to observe that air inhibited the movements
of the bacteria that were responsible for changing sugar solution into butyric
acid. Pasteur, whose intuition was legendary, immediately realized he had
discovered an extremely interesting phenomenon. Further investigation re-
vealed that oxygen was the inhibiting factor. Pasteur coined the word “anaer-
obic” to describe bacteria that live without air and whose activity air inhibits. Today we know that there are two types of anaerobic organisms: those that merely function poorly in air, and the “obligate anaerobes” that are killed by exposure to oxygen.
We have also found that anaerobic bacteria exist in environments where
one might suspect life does not exist. Anaerobic bacteria can be found in
extremely salty environments, as well as extremely hot ones. Anaerobic
bacteria are often found in hot springs, and some strains can even survive at
temperatures close to that of boiling water.
One of the most interesting discoveries of anaerobic bacteria occurred
in 1977. John Corliss and Robert Ballard, aboard the submersible Alvin,
discovered the first “black smokers” near the Galapagos Islands. These are
deep-sea vents that belch out superheated streams of hydrogen sulfide. This
hydrogen sulfide was the energy source for a strain of anaerobic bacteria, and
the bacteria formed the base of a food chain that involved large, ornately
gilled tube worms and giant clams. This thriving community was located so
deep in the ocean that it was impossible for sunlight to penetrate. The “black
smoker” communities are the first known assemblage of life that does not
rely on energy produced by the Sun, and might conceivably be descendants
of the original life-forms to inhabit Earth. It also prompted many to consider
that similar forms of life might have arisen elsewhere in the Universe.
Whatever caused tobacco mosaic disease, the Dutch microbiologist Martinus Beijerinck found, was small enough to pass through the pores of a filter fine enough to trap any known bacterium. Moreover, whatever
caused the disease was alive, as the infective agent would grow in a healthy
plant and could be passed on to another healthy plant. Beijerinck named the
unseen agent a “filterable virus,” virus being the Latin word for “poison.”
It was later demonstrated that numerous diseases, among which were polio, mumps, chickenpox, influenza, and the common cold, were caused by viruses. However, the nature of viruses remained unknown
until Wendell Stanley, an American biochemist, performed an experiment
whose results were completely unanticipated: he managed to crystallize the
tobacco mosaic virus. This was not only an extraordinary experimental
achievement, but one that raised a question still being debated today: are
viruses alive? One of the criteria for life is that it be able to reproduce. Viruses
can reproduce, but they cannot do so on their own. They must have a host
cell whose reproductive machinery they can commandeer for this purpose.
The development of the electron microscope, which can magnify more
powerfully than can optical microscopes, finally made it possible for scien-
tists to see viruses and confirm Pasteur’s intuition. Recent developments have
made it possible to deduce the structure of viruses. A typical virus consists
simply of a molecule of genetic material, either DNA or RNA, surrounded by a protective protein coat. It is
the smallness and simplicity of viruses that make them so difficult to defeat;
there are only a limited number of strategies available to destroy a virus or
prevent it from reproducing. This explains why, even though science (actu-
ally technology) can put a man on the moon, it can’t cure the common
cold—yet.
Had Wendell Stanley been born forty or fifty years later, he might have
led the New England Patriots to their many Super Bowl victories, rather than
Bill Belichick. As an undergraduate at Earlham College, Stanley played foot-
ball and expected to be a football coach. While visiting the University of Il-
linois, he got involved in a conversation with a chemistry professor, and soon
found himself more interested in chemical equations than football diagrams.
ANIMAL BEHAVIOR
After the Great Fire of 1666 much of London had to be rebuilt, and Hooke, occupied by its rebuilding and other aspects of science, never did
any further work in microscopy.
Possibly because Hooke was such an unpleasant person, history has been
reluctant to credit him with another significant contribution: he anticipated
some aspects of Darwin’s theory of evolution by almost two centuries. He
used his microscope to examine fossils and, noting that no such creatures
were still around, he wrote, “There may have been divers Species of things
wholly destroyed and annihilated, and divers others changed and varied,
for we find that there are some Kinds of Animals and Vegetables peculiar
to certain Places and not to be found elsewhere.” Although speculation is a
useful adjunct to science, science requires proof, and Darwin (and Wallace)
amassed the evidence to confirm Hooke’s ideas.
PHOTOSYNTHESIS
There are two great kingdoms of life on Earth: plants and animals. They exist
in glorious harmony with one another, each quite literally existing by taking
in the other’s dirty laundry. Animals take in oxygen, the waste product of
plants, and produce carbon dioxide via respiration, or breathing. Plants take
in this carbon dioxide and produce both oxygen (which animals breathe) and
starch (which animals eat) via photosynthesis.
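The overall bookkeeping of the two processes can be written as a single chemical equation read in opposite directions; photosynthesis runs it forward, powered by sunlight, and respiration runs it in reverse:

\[
6\,\mathrm{CO_{2}} \;+\; 6\,\mathrm{H_{2}O}
\;\xrightarrow{\ \text{light}\ }\;
\mathrm{C_{6}H_{12}O_{6}} \;+\; 6\,\mathrm{O_{2}}.
\]

Strictly speaking, the sugar on the right is glucose; the starch that plants store is assembled from glucose units.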
By the middle of the twentieth century, it had been established that this was done by chlorophyll inside the chloroplasts, and the American chemist Melvin Calvin had deter-
mined the precise chemical reactions. From the discovery of photosynthesis
to its complete description had taken almost two centuries.
The study of photosynthesis illustrates how critical technology can be
to scientific advance. It took Calvin the better part of a decade to obtain
his results, and he was using radioactive tracers and paper chromatography,
the high-tech tools of his day. Modern technology can perform the same
tasks thousands of times faster. Today, scientists use lasers that deliver light
pulses lasting less than a billionth of a second to determine precisely how
photosynthesis proceeds inside chlorophyll and bacteriorhodopsin, another
light-sensitive molecule.
Every so often science stalls after an obviously great advance. This happened
around 1840, after Schleiden and Schwann had put forth the theory of cells.
The primary difficulty was that cells are composed mostly of transparent
material, which made observation of the crucial details of cell processes ex-
tremely difficult.
However, just as there are times when progress stalls, there are also those
times when a fortuitous discovery in another area turns out to be a catalyst
for extremely rapid advance. The discovery by William Perkin of synthetic
dyes in 1856 turned out to be a boon for cytologists, the scientists
engaged in the investigation of cells.
Perkin, of course, thought of his discovery of synthetic dyes from the
commercial standpoint of making colorfast fabrics. The cytologists, most
notably Paul Ehrlich and Walther Flemming, soon learned that certain dyes
were selectively absorbed by different portions of the cell. This technique
was called staining.
Ehrlich and Flemming each used staining in a different fashion. Ehrlich,
who was a student of Robert Koch, the father of bacteriology, used staining
first in the identification of the germs responsible for different diseases. He
then adapted the chemistry of dyes to the treatment of disease.
Flemming, however, was a biologist, and he used the technique of stain-
ing to investigate processes within the cell. He coined the term “chromatin”
from the Greek word for color, to describe the material within the cell that
absorbed the dye.
Flemming dyed a large number of cells in growing tissue, and it was
inevitable that some of these cells would be caught in different stages of the
cell division cycle. As the process of cell division began, the chromatin orga-
nized itself into short, threadlike rods that Flemming called “chromosomes,”
a Greek term meaning “colored bodies.” Whenever cell division took place,
the appearance of these chromosomes was so characteristic that Flemming
named the process “mitosis,” from the Greek word for thread.
The key feature of mitosis is that the chromosomes double in number in
the original cell. The chromosomes are then pulled apart to opposite ends of
the cell, with half the chromosomes going to each end. The cell then divides
into two cells. Because the chromosomes doubled in the original cell, each
of the two resulting cells now has the same number of chromosomes as the
original cell.
Mitosis ensures that, when cell division takes place, the two resulting
cells are duplicates of each other. However, this is not the procedure that
takes place in sexual reproduction, when the sperm cell and ovum join. The
chromosomes from each cell do not double, but rather one from the sperm
cell intertwines with the corresponding chromosome from the ovum in a
process called “crossing over.” The intertwined pair then splits up. This pro-
cess, discovered by the Belgian cytologist Edouard Van Beneden, is termed
“meiosis.” The key aspect of meiosis is that a chromosome resulting from the
union of sperm cell and ovum receives half of its genes from the sperm cell
and half from the ovum. The discovery of meiosis provided the biological
explanation for Mendel’s laws of genetics, and is the reason why a baby may
have her father’s eyes, but her mother’s nose.
Flemming actually realized that the process of mitosis did not occur in
sexual reproduction, but was unable to document what actually happened.
When Van Beneden worked out the details of meiosis, the two key parts of
the puzzle for the understanding of genetics were present: what happened
in meiosis, and Mendel’s laws. Unfortunately, Mendel had died a few years
previously, and his results had been ignored because statistics, with which
Mendel had worked, was an unfamiliar tool to botanists. It wasn’t until
fifteen years later that Mendel’s work was rediscovered, and its significance
might not have been realized had the explanation of meiosis not existed.
Almost everyone has heard of Charles Darwin, but hardly anyone has heard
of Alfred Wallace. Yet Charles Darwin and Alfred Wallace had essentially
the same great idea, developed from similar experiences. If Wallace were alive
today, he would not be surprised to learn that his name is nowhere near as
widely known as Darwin’s, because Wallace’s life consisted of one piece of
bad luck after another.
Darwin had been fortunate to come from a well-to-do family; his father
was a wealthy doctor and his uncle, Josiah Wedgwood, was the head of the
famous Wedgwood pottery manufacturers. As a result, Darwin could afford
to take a position as unpaid naturalist on the H.M.S. Beagle when it sailed in 1831 on a five-year surveying voyage whose ports of call included the Galapagos Islands off Ecuador. Those years of observation gave Darwin many of the ideas that were later to become part of his
epic book, The Origin of Species.
Alfred Wallace, however, had to earn a living, and so he became a surveyor. He later decided that he would rather support himself doing what he loved, and so he set sail as a naturalist in 1848 for
South America to collect rare species. Four years of observing the profusion
of life in the Amazon valley led Wallace to many of the same ideas that had
occurred to Darwin. Unfortunately, after he had assembled his collection, he
departed for England on a boat carrying a load of resin, a highly flammable
substance. With typical Wallace luck, the boat caught fire, and the results of
his four years of collecting were totally destroyed.
The parallels between their two lives were to continue. Both arrived back
from South America with ideas about how species came about, but neither could initially conceive of a mechanism to generate new species. After
The first dinosaur fossil was discovered in 1822 in England by Mary Ann
Mantell. Twenty years later the British fossil hunter Richard Owen coined
the term “dinosaur” from the Greek words meaning “terrible lizard.” In
1854, Owen prepared an exhibit of dinosaurs for the Crystal Palace in Lon-
don. This exhibit captured the public’s fancy, beginning a love affair that
shows no sign of abating, as the box office receipts for Jurassic Park and its
sequels clearly demonstrate.
The first dinosaurs appeared approximately 225 million years ago, and
were the dominant life-form on Earth for over 100 million years. At the end
of the Cretaceous period, some 65 million years ago, the dinosaurs abruptly
vanished. It was one of the great mysteries of paleontology: what killed the
dinosaurs?
Numerous theories were advanced to explain their disappearance. One
possibility was that the newly evolved mammals, smaller and faster, ate the
eggs of the dinosaurs in such numbers that the dinosaurs perished. A weak-
ness of this theory was that crocodiles, which also lay eggs, survived. Another
possibility was graphically depicted in the Disney movie Fantasia, in which
the dinosaurs died of thirst under a scorching sun. A variant of this was the
explosion of a nearby supernova, which showered the Earth with ultraviolet
radiation that proved lethal to the dinosaurs. It was also known that massive
volcanic eruptions occurred shortly before the extinction of the dinosaurs;
perhaps this was in some way related.
In 1980, Walter Alvarez was examining a site in Italy, where the boundary
between the Cretaceous and Tertiary periods (known as the K-T boundary)
was clearly evident. Walter asked his father, Luis, to help him analyze a
thin clay layer that marked the dividing line between the two periods. Luis
Alvarez was well-positioned to perform this task, as he was a physicist who
had won a Nobel Prize and was at the University of California at Berkeley,
where he had access to the necessary equipment. With the help of Frank
Asaro and Helen Michel, they discovered that the clay had a much greater
concentration of iridium than normal, and this iridium concentration was to
characterize the K-T boundary throughout the world.
The Alvarezes, Asaro, and Michel proposed that a large asteroid or
comet, enriched in iridium, had collided with the Earth just at the end of
the Cretaceous. The dust thrown up by this collision had stayed in the at-
mosphere, blocking the passage of sunlight and preventing plants from pho-
tosynthesizing, thus destroying the crucial first link in the food chain. With
the cessation of plant growth, many species would be forced into extinction.
This attractive theory won widespread support, but much work still had to be done to verify it. One vital link would be the discovery of a large impact crater of the appropriate age, and one such candidate, the Chicxulub crater, has been found straddling the coast of Mexico's Yucatán Peninsula. Even though this theory has not yet been
completely accepted—volcanic eruptions still have their supporters among
paleontologists—it has forced a rethinking of the role played in the history of
the Earth by catastrophes such as meteor collisions. These catastrophes may
well have prompted many of the great mass extinctions that have occurred
since the development of life on Earth.
This theory not only may explain the demise of the dinosaurs, but may,
in some small way, have helped to prevent the extinction of man. After this
theory was proposed, several scientists suggested that a nuclear war might
well produce a similar effect, throwing enough dust and soot into the at-
mosphere to lower the global temperature significantly. This would have an
extremely adverse impact on plant growth. This “nuclear winter” scenario
was (and still is) the subject of serious investigation and debate, and the real
possibility of such an occurrence may well have lowered the potential for a
nuclear war. Now, however, it is the other side of the coin—the possibility of
warmer weather produced by the greenhouse gases accompanying the burn-
ing of fossil fuels—that is of greater concern to the scientific community.
CHAPTER 7
Genetics and DNA
have pink offspring, pink being the "average" of red and white. On return-
ing to the monastery, Mendel pursued a different viewpoint. He hypothesized that each inherited characteristic, such as size or color, was determined by factors (now called "genes") that came in two varieties, which he called dominant and recessive. If an
organism inherited a dominant gene from either parent, then that dominant
gene would be expressed in the organism. Only if the organism inherited
recessive genes from both parents would it display recessive characteristics.
To illustrate this principle, Mendel raised pure-bred tall and short peas,
and then crossed them. The first-generation hybrids were all tall, each having
inherited a dominant tall gene and a recessive short one. When the hybrids
were crossed, roughly three-quarters of the plants were tall, and one-quarter
were short; none were of intermediate height. Mendel constructed a math-
ematical model for this, pointing out that there were four possible second-
generation plants. One would have inherited a tall gene from both parents,
one would have inherited a tall gene from the male parent only, one would
have inherited a tall gene from the female parent only, and one would have
inherited a short gene from both parents. Since the tall gene was dominant,
three-quarters of the second-generation plants should be tall.
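Mendel's mathematical model is easy to check by simulation. The short Python sketch below is an illustration of his reasoning, not his method; "T" stands for the dominant tall gene and "t" for the recessive short one:

    import random

    def offspring(parent1, parent2):
        # Each parent passes on one of its two genes, chosen at random.
        return (random.choice(parent1), random.choice(parent2))

    hybrid = ("T", "t")  # a first-generation hybrid: one tall gene, one short

    trials = 100_000
    tall = sum(1 for _ in range(trials) if "T" in offspring(hybrid, hybrid))
    print(tall / trials)  # hovers around 0.75, Mendel's three-quarters

The four equally likely outcomes (TT, Tt, tT, tt) are exactly the four second-generation possibilities Mendel enumerated, and only the last yields a short plant.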
These brilliantly constructed experiments, along with Mendel’s expla-
nations, were published in the Journal of the Brno Natural History Society,
where they were immediately ignored by the scientific community. Mendel
attempted to publicize them by sending copies of the article to several distin-
guished scientists. He might have continued his efforts along this line, but a
rather unexpected development occurred: he was elected abbot of the mon-
astery. Mendel took his duties seriously and mostly abandoned his research
for the remainder of his life.
Then, in 1900, a remarkable coincidence occurred. Three investigators
(Hugo de Vries, Karl Correns, and Erich von Seysenegg) independently
rediscovered Mendel’s results. They each instituted a thorough search of
the literature, and all three discovered Mendel’s prior work in the obscure
journal in which it was published. In the best scientific tradition, when they
published their results, they all gave credit to Mendel.
Mendel apparently did have some interest in pursuing his work on ge-
netics, and after he was elected abbot of his monastery he tried to extend his
method of experimentation with pea plants to the animal kingdom, choosing
to work with bees. As a result, he developed a hybrid bee, which gave excel-
lent honey. Unfortunately, these bees were extremely ferocious, being much
more prone to sting than the standard honeybee. The bees were subsequently
destroyed. Mendel apparently was not only the father of genetics; he was the
father of “killer bees” as well. One can only wonder if the Africanized “killer
bees” that invaded southern California toward the end of the twentieth cen-
tury would be doing so if Mendel’s work in this area had been adequately
publicized.
The middle of the nineteenth century saw two great revolutions in biology—
Darwin’s theory of evolution, and Mendel’s laws of genetics—although the
existence of the latter was only unearthed at the beginning of the twentieth
century. There was still a great deal of debate concerning the validity of these
theories, and much of it centered on the evolution of new species.
According to Darwin, natural selection was the driving force behind evo-
lution. Natural selection might explain why antelope species became swifter,
as the swifter antelopes would be the ones most likely to escape predators.
Mendelian genetics could explain the proportion of blue-eyed children born
to brown-eyed parents. However, neither theory seemed able to explain how
an entirely new species, such as human beings, arose.
The Dutch biologist Hugo de Vries had shown that in one generation,
large variations in plants could produce an entirely new species. De Vries felt
that these large variations, which were called “mutations,” could explain the
existence of new species of animals. Thomas Hunt Morgan, an American
geneticist who favored de Vries’s theories, decided to test them.
To do so, Morgan worked with fruit flies. These insects had many ad-
vantages from the standpoint of genetics. They were small and they prolifer-
ated extremely rapidly, so many generations could be studied in a short pe-
riod of time. Equally important, the cells of the fruit fly contained only four pairs of chromosomes. At the time that Morgan began his work, the chromosomes
were suspected of carrying genetic information.
However, there was a problem with this theory. Humans have only forty-six chromosomes, yet there are thousands of inherited characteristics. If
the chromosomes did indeed carry genetic information, there must be many
production for the German government during World War I, during which
two of his three sons were killed. Depressed by this, as well as the fact that
he was suffering from cancer, he committed suicide in 1919.
The search for the mechanism behind inheritance resembled the plots of many classic movies of the latter half of the twentieth century. It was Raiders of the Lost Ark, with the prize being the secret of life itself. It was The
Odd Couple, with as unlikely a pair of protagonists as the ones Neil Simon
constructed. It was Rocky, with the challenger going up against the reigning
champ. Throw in a little sex and a brilliant and talented woman doomed
to a tragically early death, and you have the elements of one of the great
dramas—scientific or otherwise—of all time.
The story starts, as previously discussed, with Gregor Mendel, the
Austrian monk who discovered the laws governing inherited characteristics.
Soon after, scientists began the long struggle to discover the mechanism by
which Mendel’s laws were enacted. In 1944, Oswald Avery, Colin MacLeod,
and Maclyn McCarty demonstrated that deoxyribonucleic acid (soon to be
universally known as DNA) was the substance through which inherited char-
acteristics were transmitted.
This was only the first piece of the puzzle. Still to be determined was
precisely how DNA did what it did. By 1951, the chemical composition of
DNA had been ascertained—it consisted of sugars and phosphates, which
are fairly simple compounds, and four bases: adenine, cytosine, guanine, and
thymine. It was also known that, even though different samples of DNA
might have differing amounts of the four bases, the amount of adenine was
always the same as the amount of thymine, and the amount of cytosine was
always the same as the amount of guanine.
Hot on the trail of the structure of the DNA molecule was Linus Paul-
ing, unquestionably the world’s leading biochemist. Almost complete new-
comers to the problem were James Watson, a young American postdoctoral
student recently arrived in England, and Francis Crick, a somewhat older,
British graduate student with a background in mathematics. Although they knew that Pauling was the frontrunner, Crick and Watson
decided to enter the race to decipher the structure of DNA.
They had two unusual allies: Maurice Wilkins and Rosalind Franklin.
These two were trying to work out the structure of DNA by using X-ray
diffraction photographs of DNA crystals, a technique consisting of examin-
ing how X-rays bounced off DNA. While these four were trying to put the
pieces together, Pauling wrote an article in which he claimed to have worked
out the structure of DNA as a triple helix. When Watson and Crick recon-
structed Pauling’s model and showed it to Franklin, she pointed out that it
disagreed with her diffraction data. Since Pauling was backing triple helixes,
Watson decided that his best bet to beat Pauling would be to experiment
with double-helix models.
One day, in a flash of insight, Watson realized that the shape of an
adenine-thymine pair would be the same as the shape of a cytosine-guanine
pair. That would account for the equality between the amounts of adenine
and thymine, and cytosine and guanine. After several false steps, Watson
came up with a double helix model incorporating these features, and Crick
made the calculations to demonstrate the feasibility of the model. Wilkins
and Franklin made X-ray diffraction computations that substantiated the
model. Proteins do most of the work in an organism, and the secret of life—
how the cells know which proteins to produce—had finally been discovered.
After learning of the double-helix model, Pauling visited Cambridge in
the spring of 1953. He acknowledged the error in his thinking that had led
to his construction of an erroneous model, and agreed that the double-helix
model of DNA was undoubtedly correct. The very next year, Linus Pauling
would win the Nobel Prize in Chemistry for previous work done on chemical
bonds. Crick, Watson, and Wilkins would share the Nobel Prize in Physiol-
ogy or Medicine in 1962. Franklin, whom all agreed had made substantial
contributions, had tragically passed away from cancer at the age of thirty-
eight, and was thus ineligible to share in an award that she richly deserved.
When Watson and Crick constructed the double-helix model for DNA in
1953, they also noted that the proposed structure would account for the
process of cell duplication. Each strand of the double helix would unwind,
and each of the approximately 3 billion bases on a strand would seek its
complementary partner (adenines with thymines, cytosines with guanines)
to reconstruct the other strand of the helix. Then, as a cell split into two,
each strand would be able to generate a new molecule of DNA for each of
the daughter cells.
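The pairing rules are mechanical enough to express in a few lines of Python. This sketch is purely illustrative (the eight-base strand is invented), but it shows why a single strand suffices to rebuild the whole molecule:

    # Watson-Crick pairing: adenine with thymine, cytosine with guanine.
    PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

    def complement(strand):
        # Each base dictates its partner, so one strand determines the other.
        return "".join(PAIR[base] for base in strand)

    strand = "ATGCCGTA"
    print(complement(strand))              # TACGGCAT
    print(complement(complement(strand)))  # ATGCCGTA -- the original again

Taking the complement twice returns the original strand, which is precisely why each separated strand can regenerate its partner.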
The mechanism of heredity was now known. What was not known was
how the molecule of DNA directed the essential process of life: the manu-
facturing of proteins. The genes that Mendel had described a century earlier
were now known to have precise locations on a molecule of DNA, and each
gene had a specific function: to manufacture a particular protein. Somehow,
the sequence of bases in DNA directed the manufacture of proteins.
The obvious idea was that the 3-billion-long sequence of bases was a
kind of computer program directing how proteins would be manufactured.
Proteins were manufactured in a portion of the cell called a ribosome. Pro-
teins themselves are chains of simpler chemicals called amino acids; every
protein, from the hemoglobin in our blood to the pigments in our eyes, is
constructed in an assembly-line fashion from a fundamental stock of twenty
different amino acids.
The construction process actually involves more than just DNA itself.
When the two strands of the DNA helix separate, a strand may be used as
a template for the construction of another DNA molecule, but it may also
be used to construct a molecule known as messenger RNA (abbreviated
mRNA). The mRNA molecule is what is actually read by the ribosome, and
differs from the DNA molecule in that the base uracil is used in place of
thymine. As a strand of mRNA is fed to the ribosome, the ribosome sees a long sequence of bases, a section of which might look like . . . UCGAGGUUCA . . . How does the ribosome know what to do with
this sequence of letters?
The simplest idea would be that the ribosome sees a group of letters as
a word, and each word as an instruction to attach a specific amino acid to
the growing protein chain. There were only sixteen (4 × 4) different possible
two-letter words made from the letters A, C, G, and U (AA, AC, AG, AU,
. . . UA, UC, UG, UU). Since there were twenty different amino acids used
in proteins, there would not be sufficient words in the instruction set to spec-
ify all the amino acids. There were sixty-four (4 × 4 × 4) different three-letter
words, which would later be known as codons, that could be made from the
letters A, C, G, and U, so the next step was to see whether a particular codon
caused a specific amino acid to be added to the protein chain.
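The counting argument can be verified directly. A few lines of Python (purely illustrative) enumerate the possible two-letter and three-letter words:

    from itertools import product

    bases = "ACGU"
    two_letter = ["".join(word) for word in product(bases, repeat=2)]
    three_letter = ["".join(word) for word in product(bases, repeat=3)]

    print(len(two_letter))    # 16 -- too few for twenty amino acids
    print(len(three_letter))  # 64 -- more than enough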
Marshall Nirenberg, a biochemist at the National Institutes of Health,
was one of many scientists working on this hypothesis. In 1961, he managed
to construct a section of mRNA consisting only of uracil bases. When this
section of mRNA was read by a ribosome, it added the amino acid phenyl-
alanine to the protein chain. The first codon in the genetic code had been
deciphered: UUU stood for phenylalanine. Nirenberg would win the Nobel
Prize in 1968, and within a few years, the genetic code had been completely
deciphered.
One of the beautiful aspects of science is that information uncovered in
one area can easily have ramifications in other areas. There is no a priori reason
why each species cannot have its own genetic code: UUU could in theory
code for phenylalanine in aardvarks, leucine in roses, and glycine in zebras.
In reality, it does not work out that way: UUU codes for phenylalanine in
all forms of life. This is powerful evidence for the theory of evolution, as the
obvious way for this to occur is that the genetic code originally evolved in
the simplest life-form and was passed from organism to organism even as the
other genetic changes constituting evolution were taking place.
As might be suspected from the acronyms, DNA and RNA are closely re-
lated. The structure of DNA consists of two intertwined sugar-phosphate backbones, the famous double helix, to which are affixed four bases: adenine, cytosine, guanine, and thymine. The struc-
ture of RNA is quite similar, but it uses uracil instead of thymine. The chief
functional difference is that, although DNA may get all the accolades, when
it comes to the actual manufacture of the proteins coded for by the DNA,
RNA does most of the work.
The existence of RNA has been known since the early portion of the
twentieth century, and its importance in genetics long suspected. Indeed,
even as James Watson and Francis Crick were on the verge of deciphering the
structure of the DNA molecule, they were speculating on the role of RNA
in the manufacture of proteins. They conceived of a sequence of operations
that has come to be known as the central dogma of molecular biology. The
DNA in the nucleus of the cell would act as a template for the manufacture
of RNA. The RNA would then go from the nucleus of the cell to the cyto-
plasm, where it would direct the manufacture of proteins. This picture of the
roles of DNA and RNA has been for the most part confirmed.
However, RNA has been discovered to play many other roles in the
drama of life. The simplistic picture that DNA is copied to RNA, which
is mechanically reproduced to create proteins, has had to be altered. The
first major change in this picture was discovered by the French biochemists
François Jacob and Jacques Monod, who discovered that chemical signals
inside the cell determine whether or not the instructions within a gene are
copied into RNA. This showed that the copying process from DNA to RNA
involves editing as well.
A major part of editing, both in a manuscript and in a genetic molecule,
is the removal of unusable material. The unusable segments, called "introns," are cut out of the RNA message. The remaining segments must then be rejoined,
and this is accomplished by another form of RNA; these molecules are called
“spliceosomes.”
It still remains for the proteins to be constructed. This is accomplished
by yet another form of RNA, called transfer RNA, whose job it is to read
the final edited RNA message, fetch the required amino acids from the cyto-
plasm, and string them together to form the appropriate protein.
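The editing pipeline can be caricatured in a few lines of Python. The sequences below are invented for illustration, and real splicing is far more intricate, but the flow is the same: cut out the intron, then read the survivors three bases at a time:

    # A toy pre-mRNA: two exons (kept) separated by one intron (cut out).
    pre_mrna = "AUGGCC" + "GUAAGU" + "UUCUAA"  # exon + intron + exon
    intron = "GUAAGU"

    mature = pre_mrna.replace(intron, "")  # the splicing step

    # The edited message is then read one codon (three bases) at a time.
    codons = [mature[i:i + 3] for i in range(0, len(mature), 3)]
    print(codons)  # ['AUG', 'GCC', 'UUC', 'UAA']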
The threat of AIDS (acquired immunodeficiency syndrome) that
emerged during the last two decades of the twentieth century brought about
intensive study of the human immunodeficiency virus (HIV) that had been
shown to be the cause of the disease. HIV is one of a class of viruses called
retroviruses. A retrovirus is basically a strand of RNA encased in a coat of
protein. Like other viruses, it cannot reproduce on its own, and must com-
mandeer the genetic machinery of a cell to perform this task. In 1970, the
American molecular biologist Howard Temin showed that retroviruses reproduce in a host cell by using an enzyme the virus itself carries, reverse transcriptase, to copy its RNA into DNA. The cell then copies the DNA
into the RNA genetic material for the retrovirus, and also uses the DNA to
produce the protein coat in which the strand of RNA is encased.
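At the level of base pairing, the retrovirus's central trick of copying RNA back into DNA is easy to sketch. The fragment below is invented, and this illustrates only the pairing rules, not the enzyme's chemistry:

    # Reverse transcription pairs each RNA base with its DNA partner.
    RNA_TO_DNA = {"A": "T", "U": "A", "C": "G", "G": "C"}

    def reverse_transcribe(rna):
        return "".join(RNA_TO_DNA[base] for base in rna)

    viral_rna = "AUGGCGUUA"               # an invented fragment
    print(reverse_transcribe(viral_rna))  # TACCGCAAT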
Which came first, the chicken or the egg? In molecular biology, the
question is, which came first, DNA or RNA? Because RNA basically reads
DNA, the prevailing impression early on was that RNA-based life was im-
possible. However, Thomas Cech showed that RNA could act as a catalyst
and initiate changes on itself. This brought about the possibility of an RNA
world, in which a simpler RNA-based life existed prior to DNA. In this
scenario, DNA evolved later, and the more complex central dogma enabled
more complex life to evolve.
GENETIC ENGINEERING
The attempt to alter existing life-forms for the good of man has been going
on almost as long as recorded history. The breeding of horses, dogs, and cattle dates from well before the birth of Christ, and the attempt to produce new variet-
ies of plants or ones that will grow better probably started at the same time
that agriculture was developed.
In a sense, all of the above qualify as genetic engineering, but it is genetic
engineering on a somewhat haphazard level, relying on the mechanism of
chance to produce an improvement, and choice (known as artificial selec-
tion) to use this improvement. When Watson and Crick discovered the
structure of DNA, the genetic material whose instructions are followed by
the cell to produce proteins, it was immediately realized that the ability to
manipulate DNA would greatly increase the power to improve existing life-
forms or create new ones.
In 1968, Werner Arber, a Swiss microbiologist, was investigating a fam-
ily of viruses called bacteriophages. These viruses attack and destroy bacteria (phage comes from the Greek word for "eat"). In the eternal struggle that characterizes natural selection,
some bacteria are able to defend themselves against bacteriophages by pro-
ducing a substance that prevents the growth of the viruses. Arber was able
to show that the substance was an enzyme that actually cut the DNA of the
bacteriophage at a specific location. Arber realized that this substance, which
he called a restriction enzyme, located a specific sequence of molecules in the
DNA strand, and would work nowhere else. These restriction enzymes were
tools for cutting DNA at a precise spot, and are the workhorses of genetic
engineering.
Five years later, Herbert Boyer of the University of California at San Francisco and Stanley Cohen of Stanford University performed the basic experiment that launched the genetic engineering revolution. Working with the common bacterium E. coli, they isolated a plasmid (a circular loop of DNA) containing genes en-
abling the bacteria to resist certain antibiotics. They used restriction enzymes
to cut the DNA, and then inserted sections of other plasmids. This created
a new plasmid, which contained genes from both of the original plasmids.
When this plasmid was reinserted into an E. coli bacterium, the bacterium
could be shown to display the genetic characteristics of both plasmids from
which the engineered plasmid had been produced.
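In outline, the Boyer-Cohen recipe is string surgery, and a toy version fits in a few lines of Python. The plasmid sequence and the foreign "gene" below are invented, and the cutting is simplified: a real restriction enzyme such as EcoRI cuts within its recognition site, leaving "sticky ends," rather than deleting the site as this sketch does:

    SITE = "GAATTC"  # the sequence recognized by the enzyme EcoRI

    def cut(dna):
        # Simplification: split the strand wherever the site occurs.
        return dna.split(SITE)

    plasmid = "CCGGAATTCTTAAGGAATTCGG"  # invented sequence
    fragments = cut(plasmid)
    print(fragments)  # ['CCG', 'TTAAG', 'GG']

    # Splice a foreign gene in between the first two cut ends.
    foreign_gene = "AAATTT"  # hypothetical
    engineered = fragments[0] + SITE + foreign_gene + SITE + fragments[1]
    print(engineered)  # CCGGAATTCAAATTTGAATTCTTAAG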
We are a half-century into the science of genetic engineering, yet it is
already a multibillion-dollar industry with the promise to transform the
world. The gene is the unit of inheritance. Genetic engineering holds the
promise of producing plants that can manufacture their own fertilizer, flower
more frequently, and ripen faster. On the human level, genetic engineering
may make it possible to prevent a large number of so-called genetic diseases,
which are caused by the inability of a gene to perform a desired function.
Of all the developments in this book, this is the one that is most likely to
have a major impact on the ultimate development of the human race. Other
developments will make it possible for us to change the Universe, but genetic
engineering could make it possible for us to alter the evolution of humanity.
In her novel Frankenstein, Mary Shelley gave the world the caricature
of the mad scientist, out to experiment with forces beyond his control. In
general, scientists are much more conscious of the consequences of their ac-
tions than Shelley had suggested. Immediately after the discovery of genetic
engineering, the biochemists met and instituted an extremely stringent set of safety guidelines governing such research.
CHAPTER 8
The Human Body
Our curiosity about the nature of our bodies begins almost immediately after
we are born, and continues throughout our lives. Our bodies are astounding
in so many ways—the conscious ways we can direct them, the systems that
billions of years of evolution make function so well—systems of which we
are largely unaware. What is also amazing is the range of human capability,
such as the abilities of a concert pianist or a topflight athlete. This range
also includes the incredible accomplishments of the human brain—some of
which are in this book.
Our bodies do many things well—but, as many have observed, there is
no physical ability that humans possess that is not exceeded in some other
species. We’re generally faster than turtles, and slower than cheetahs. But no
other species has the brain that enables us to surmount our limitations—and enrich our lives, a concept unknown to any other species.
ANATOMY
Considering how little was actually known about the body and the nature
of disease in ancient Greece, it is amazing how rational some of the ancient
physicians appear by modern standards. It is surprising to learn that the say-
ings, “One man’s meat is another man’s poison,” and “Desperate diseases
require desperate remedies,” are actually attributed to Hippocrates, the father
of medicine.
Hippocrates was in fact better known to the ancients because he founded
a school of medicine, rather than because he was a doctor. Perhaps his
greatest contribution to medicine was not the Hippocratic Oath, but rather
the view that disease was a physical phenomenon, rather than the result of
having incurred the wrath of some deity. However, the next great physician
would not appear until several hundred years after Hippocrates’s death. That
physician was Galen, who climbed the ladder of professional success until he
became court physician to Emperor Marcus Aurelius.
Galen was the greatest anatomist in the ancient world, but he had the
misfortune to live in an era in which human dissection was no longer being
practiced. As a result, Galen would dissect animals, observe what he could,
and generalize to human beings. He was the first to identify many of the
major muscles, and also showed the role of the spinal cord by severing it in
animals and noting the ensuing paralysis.
Galen was certainly the most influential physician of his time. His ex-
tensive writings were carefully preserved throughout the Dark Ages, partly
because his religious views were in line with the Christian thought that
everything in the Universe was designed for a purpose. When the search for
new knowledge ceased shortly after the fall of Rome, Galen represented the
state of medical art, and remained the unquestioned authority in the field for
more than a thousand years.
By the sixteenth century, though, there was a new willingness to doubt
some of the previously unquestioned authorities. One such doubter was
Andreas Vesalius. Born in Brussels and trained partly in Paris, he had relocated to Italy, where
there was a greater spirit of intellectual freedom. One consequence was that
The crucial role of the heart has been intuitively understood since the earliest
days of recorded history. In ancient Egypt, the fate of the soul was thought to
be determined by the weight of the heart. Egyptian priests would weigh the
heart of the dead on a scale against a feather, believing that those who had
hearts that were not “heavy with sin” went on to happiness in the afterlife.
Aristotle, one of the greatest intellectual giants in history, thought that
the heart was the seat of the soul, and attributed mystical powers to it. In the second century CE, Galen, personal physician to the Roman Emperor, advanced the
concept of a circulatory system through which blood flowed from the body
to the heart and back to the body. However, medical research did not occupy
an important place in Roman society, and when Rome collapsed and the
Dark Ages began, medical research was basically “put on hold” throughout
Europe.
As Europe emerged from the Dark Ages, people began displaying a
greater interest in medicine, spurred on by the carnage wrought by the Black
Death. Once again, though, interest in the functions of the human body ran
afoul of the proscriptions of the Church. Human anatomical investigations
and drawings were strictly forbidden—indeed, Leonardo da Vinci had been
forced to steal corpses in order to make his accurate anatomical drawings.
The Flemish anatomist Vesalius had to have his groundbreaking book on hu-
man anatomy printed in Switzerland in order to avoid being condemned,
excommunicated, or worse by the Italian authorities. This fear was well-
founded. When Miguel Servetus published a book containing conjectures
on the role of the heart in pumping blood, he was burned at the stake by the
Spanish Inquisition, with a copy of his book tied to his body.
At the start of the seventeenth century, William Harvey, a well-to-do
Englishman who had studied at Cambridge, went to Padua to study medi-
cine, as Padua had for three centuries housed the finest medical school in
Europe. While Harvey was in Padua, Galileo’s experiments in mechanics and
astronomy were setting a new standard for science. On his return to England,
Harvey decided to apply Galileo’s methods to the study of the heart and the
circulation of the blood.
His principal tool was dissection; in his attempts to understand the heart,
he is said to have dissected over eighty species of animals. Harvey determined
that the heart is a muscle, and that it operated by contraction. He calculated
the rate at which the heart pumped blood, and determined that in one hour
the heart pumped a quantity of blood that was about three times the weight
of a human being. Since it seemed impossible to construct a mechanism that
would destroy and recreate blood at such a rate, the obvious conclusion was
that the blood was being circulated throughout the body.
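Harvey's argument is worth redoing with modern round numbers. Every figure in this Python sketch is an assumption for illustration (roughly 70 milliliters per beat at 72 beats per minute), not Harvey's own data:

    stroke_volume_liters = 0.07        # blood ejected per heartbeat (~70 mL)
    beats_per_minute = 72
    blood_density_kg_per_liter = 1.06  # blood is slightly denser than water

    liters_per_hour = stroke_volume_liters * beats_per_minute * 60
    kg_per_hour = liters_per_hour * blood_density_kg_per_liter
    print(round(kg_per_hour))  # about 320 kg -- several human body weights

No plausible mechanism could create and destroy blood at hundreds of kilograms per hour, so the same blood must be going around and around.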
Harvey further noted that the valves in the arteries and the veins were
one-way valves, and then observed that blood flowed away from the heart
through the arteries, and toward the heart through the veins. Although his
results initially met with substantial opposition from the medical establish-
ment, Harvey himself refused to debate the matter, publishing a book on
the subject and letting the facts speak for themselves. Within a generation,
his results were universally accepted, and the groundwork for the study of
physiology had been established.
excitedly when struck by an electric spark. In a sense, this was not altogether
surprising, because live muscles were known to twitch when subjected to
electricity. Nonetheless, this discovery was to inaugurate many different lines
of research, one of which we have already seen in chapter 5.
One of these lines of research, which had been dormant for two thou-
sand years, was on the interaction of nerves and muscles, and the nature of
the nervous impulse. In 1826 Johannes Müller, a German biologist, was
experimenting with sensory nerves. It was well-known by that time that
light stimulated the optic nerve, and that this stimulus was interpreted by
the brain as visual brightness. Müller discovered that if the optic nerve was
stimulated by electricity, that stimulus would still be interpreted as visual
brightness. He later showed that this was true for any type of nerve; no mat-
ter how it was stimulated, the brain would always interpret it the same way.
Not only did this indicate that nerves functioned through electrical impulses,
it greatly simplified the investigation of the nervous system.
A century later, it would be discovered that the nervous system was not
simply an electrical circuit. Otto Loewi, a German physiologist, was study-
ing the nerves of a frog’s heart. He had discovered that certain chemical
substances were released when the nerve was stimulated. One morning he
awoke at 3:00 a.m. with the idea for a brilliant experiment, which he then
wrote down. The next morning he couldn’t read his own handwriting! That
evening, he again woke up at 3:00 a.m., and corrected his previous error by
immediately going to the laboratory and conducting the experiment, which
was to take the chemical substances that had been released and show that
these had the power to stimulate heart muscle without the intervention of a
nerve signal.
During the previous decade, the British biologist Henry Dale had been
working on fungi, and had isolated a compound called acetylcholine. This
substance had an effect on organs similar to the effects produced by recep-
tion of nerve impulses. When Dale read of Loewi’s experiment, he was able
to show that the substance Loewi had discovered was acetylcholine. Acetyl-
choline was the first known neurotransmitter; neurotransmitters are the class of chemical compounds that inhibit or excite the transmission of nerve impulses.
Neurotransmitters have been shown to be important components of
behavior. In 1972, a team of medical researchers discovered that bipolar dis-
order (often called “manic depression”) is the result of an imbalance between
two types of neurotransmitters. As a result of this discovery, it has been pos-
sible to treat several types of behavioral disorders by chemical means.
For their work on acetylcholine, Loewi and Dale received the Nobel
Prize, which probably saved Loewi’s life! Loewi was Jewish, and when Hitler
invaded Austria, he was arrested. However, the Nazis may have realized that
it would have been bad public relations to execute an eminent scientist, and
Loewi was allowed to leave the country provided he turn over his share of the
Nobel Prize money to the Nazis.
The immune system is one of the great mechanisms of survival. Even before
AIDS and COVID-19, the importance of the immune system was clear. The
immune system is the body’s intricately organized defense against foreign
invasion, and learning how the immune system works, how to strengthen it,
and what its weaknesses are has been and will be critical to the advancement
of medicine.
The basic mechanism of the immune system is that, once it is exposed
to a foreign substance, it learns how to manufacture defenses against it.
Edward Jenner unwittingly exploited this mechanism when he inoculated
people against smallpox by giving them a mild case of cowpox. Once the
germ theory of disease became accepted, progress in the understanding of the
immune system accelerated.
One of the first great developments in understanding immunity was
the result of an experiment in 1890 by Emil von Behring and Shibasaburo
Kitasato. They injected guinea pigs with blood from other guinea pigs known
to be immune to diphtheria, and observed that the injected animals acquired
that immunity. As a result, they concluded that immunity was conferred by
protective substances in the blood, which von Behring called antibodies.
Paul Ehrlich, the originator of chemotherapy, was inclined to chemical
explanations for all biochemical phenomena. He suggested that an anti-
gen, a substance that provoked a reaction from the immune system, had
a specific molecular structure, and that the antibody manufactured by the
immune system fit the antigen much like a key fits a lock. This insightful
theory was later confirmed by Karl Landsteiner, who showed that in order
to combat a specific antigen, the immune system manufactures a specific
antibody. Landsteiner not only confirmed Ehrlich’s theory, but he also used
the antigen-antibody reaction to develop the system of blood typing that we
use today. Landsteiner’s blood typing makes it possible to give blood transfu-
sions without provoking an undesirable response from the immune system.
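The lock-and-key logic behind blood typing reduces to a simple comparison. Here is a minimal sketch of the ABO rules only (it ignores the Rh factor and the other blood-group systems):

    # Each ABO type is defined by the antigens on its red blood cells.
    ANTIGENS = {"A": {"A"}, "B": {"B"}, "AB": {"A", "B"}, "O": set()}

    def compatible(donor, recipient):
        # A transfusion is safe only if the donor's cells carry no antigen
        # that is foreign to the recipient's immune system.
        return ANTIGENS[donor] <= ANTIGENS[recipient]

    print(compatible("O", "AB"))  # True  -- type O is the universal donor
    print(compatible("AB", "O"))  # False -- both antigens would be attacked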
The antibody–antigen reaction is not always beneficial. Allergic reac-
tions occur when the immune system produces antibodies to substances that
are not intrinsically harmful. Skin grafts and organ transplants are frequently
met with a reaction known as rejection; the body tries to destroy the graft or
the transplant. Peter Medawar showed that rejection was an immune system
response to the new material. In studying this phenomenon, Frank Burnet
observed that a developing organism produced antibodies only in response to
antigens that it encountered later in its life, and suggested that the immune
system ignores antigens it encounters early in life. Although Burnet was un-
able to prove this, Medawar was able to do so.
Burnet eventually devised the clonal selection theory to explain immune
response. When a lymphocyte initially encounters an antigen, it multiplies
and produces identical lymphocytes, which manufacture the antibody neces-
sary to counteract that particular antigen. Later research has proved that Paul Ehrlich, possessed of one of the keenest chemical intuitions, was correct: an antibody recognizes an antigen by identifying specific patterns on the surface of the antigen molecule.
The 1980s saw the onset of the AIDS epidemic. AIDS is an insidious
disease caused by HIV (human immunodeficiency virus). Viral diseases are
especially hard to combat because they are often impossible for the immune
system to detect. A virus is simply a strand of genetic material enclosed in
a coat of protein. The coat itself is innocuous; but when the virus gets in-
side a cell, the genetic material of the virus can commandeer the cell’s own
genetic machinery and reproduce the virus. Worse, HIV kills the very cells of the immune system that would fight it. The year 2019 brought with it the COVID-19 pandemic, also a viral disease,
but fortunately one for which it was possible to develop a vaccine. HIV has
many more evasive strategies available to it than the COVID-19 coronavirus,
but as of this writing there is an ongoing trial for an AIDS vaccine based
on the same methodology as the successful mRNA vaccines used against
COVID-19.
The importance of blood to life has been known for thousands of years. To
the Greeks, who formulated the first theory of living organisms, blood was
one of the four humours, along with phlegm, black bile, and yellow bile. It
was obvious to even a child that blood was important, but for thousands of
years, no one knew what function blood performed in an organism.
It was known that blood was not a simple liquid. A container of blood,
left unattended, would separate into a red liquid and a pale yellowish fluid,
separated by a thin layer of white. The first great breakthrough in discovering
the nature of blood could not be made until the seventeenth century, after
the microscope had been invented. Although the Dutch microscopist Jan
Swammerdam had observed red blood cells in frogs as early as 1658, he did
not publish his results. Fortunately for science, Anton van Leeuwenhoek had
observed red blood cells in human blood, and described his results in 1673.
The role that red blood cells played in the body was not discovered for
almost two centuries, when the idea of counting blood cells as a measure of
health was devised by François Magendie. This was a period right after the
vitalistic view of organisms had been overturned, and researchers were quan-
tifying many aspects of the diagnostic procedure.
Along with improved diagnosis came improved analytical methods.
One of the chief new analytical tools of the era was the spectrometer, an
instrument whose impact on both chemistry and physics has been profound.
While astronomers were attaching spectrographs to telescopes to decipher
the composition of distant stars, biochemists were using them to analyze
many of the substances that are found in organisms.
One of the leaders in this area was a German biochemist, Ernst Hoppe-
Seyler. It was Hoppe-Seyler who gave the name "hemoglobin" to the oxygen-carrying pigment of red blood cells, and who demonstrated that the function of red blood cells was to transport oxygen. Hoppe-Seyler crystallized hemoglobin (many proteins can be crystallized), and also was the first to notice the sinister fact that carbon monoxide binds to hemoglobin even more readily than oxygen does. He was also the first to notice chemical similarities between hemoglobin and chlorophyll.
Hemoglobin is an extremely complicated molecule. Its exact structure
could not be determined until 1960, when Max Perutz used X-ray crystal-
lography, high-speed computers, and the ingenious trick of adding a single
heavy atom of gold or mercury to the molecule to help clarify its structure.
Anyone who observes the molecular structure of hemoglobin and chloro-
phyll cannot help but come to the conclusion that, because their forms are
so similar, their functions must be as well.
time the dog was shown food. Eventually, the dog would not only salivate in
response to the food, but in response to the bell. This is known as a condi-
tioned reflex, and conditioned reflexes would play a key role in the behavior-
ist theories of psychology.
Pavlov’s earlier studies came to the attention of William Bayliss and
Ernest Starling, two English physiologists. Pavlov’s earlier results had led
Pavlov to believe that many digestive reactions were controlled by the ner-
vous system. Bayliss and Starling studied how the pancreas began to secrete
its digestive juice when acidic food contents passed from the stomach to the
intestine.
Bayliss and Starling tried to confirm Pavlov’s hypothesis by cutting the
nerves to the pancreas. To their surprise, the pancreas continued to secrete its
digestive juice. Further investigation revealed that the lining of the small in-
testine secreted a substance, which they called secretin, when it was exposed
to stomach acid. It was secretin that stimulated the pancreas to react.
Starling realized that there were other instances of similar behavior, and
coined the word “hormone” (from the Greek, meaning “to rouse to activ-
ity”) to describe a substance released into the blood by one organ to prompt
a response in another organ.
The work of Bayliss and Starling cleared the way for the recognition of
diseases occurring from a hormone deficiency. Several years later another
English physiologist, Edward Sharpey-Schafer, theorized that the pancreas
produced a hormone that lowered the level of glucose in the blood. He
named this hormone insulin, from the Latin word for island, as he believed
it to be produced in the island cells of the pancreas. Within twenty years, the
Canadian team of Frederick Banting and Charles Best devised a procedure
for extracting a crude version of insulin from animals, and regular insulin
treatments are now the standard method of controlling diabetes.
Banting devised the original experiments, and persuaded John Macleod,
a physiology professor at the University of Toronto, to give him some labora-
tory space and find him a coworker. Macleod agreed, gave Banting the nec-
essary space, and found Charles Best to work with him, and then promptly
went off on a summer vacation. Banting and Best completed their work in
1922, and in 1923 Banting was one of two Canadians to share the Nobel Prize in Physiology or Medicine—the other being not Best, but Macleod!
Banting was incensed, and almost refused to accept the Prize unless Best
would share in it. He was unable to achieve this, but when he finally relented
and accepted the Prize, he gave half of his share of the money to Best.
They met, fell in love, and married. Both were interested in the same aspect
of science, and so they decided to work together. Because of the prejudice of
the world of the early twentieth century against female scientists, she found it
difficult to get employment in her chosen field. Nonetheless she persevered,
and eventually managed to work alongside her husband. Their work was of
such high quality that it was deemed worthy of a Nobel Prize.
It certainly sounds like the story of Pierre and Marie Curie, but it is also
the story of Carl and Gerty Cori, whose lives (and names) parallel the Curies
in many important aspects.
All living cells share certain characteristics. One of these is the ability
to metabolize: to take in substances and use them to create both
energy and new substances. One of the most important types of metabolism
involves carbohydrates.
Over the past few decades, there has been a drastic change in the diet of
an athlete prior to an important event. Athletes used to have steak and eggs;
now they have pancakes or pasta. There is a sound reason for this. Pancakes
and pasta are carbohydrates. Not only are they more quickly metabolized
than protein-rich foods such as steak, but about half the carbohydrates are
stored in the liver and muscles in the form of the chemical glycogen. The
remainder is either stored as fat or burnt as fuel.
The biochemist Otto Meyerhof determined that, when a muscle con-
tracts, the glycogen it has stored is converted to lactic acid (lactic acid is the
source of the burning sensation to which the exercise instructor refers when
he or she says, “Feel the burn!”). The lactic acid is then resynthesized into
glucose, forming a cycle that enables the muscle to continue to contract. The
work for which the Coris received the Nobel Prize took place over the course
of many years, and consisted of a detailed analysis of this glycogen-to-lactic-
acid-to-glycogen cycle.
While the Coris were working out what happened to the pancakes and
pasta, the German biochemist Hans Krebs was doing the same thing for
steak and eggs. Proteins are strings of amino acids, and Krebs discovered that
metabolism removes the nitrogen atoms from the amino acids, eliminating
them in the form of urea, the organic chemical first synthesized by Friedrich
Wöhler.
The Coris, born in Czechoslovakia, had immigrated to the United States
because of employment opportunities, but the rise of the Nazis in Germany
compelled Krebs to immigrate to England. Like the Coris, Krebs became
interested in carbohydrate metabolism. The Coris had shown that glycogen
was converted to lactic acid without using oxygen, but released very little
energy. Krebs decided that the remainder of the energy must be generated by
chemical reactions that broke down the lactic acid and used oxygen, releasing
water and carbon dioxide in the process.
This analysis took Krebs over five years to complete. One of the key
intermediate products was citric acid, the same chemical that produces the
slightly sour taste in orange or grapefruit juice. The cycle Krebs discovered is
called the citric acid cycle, also known as the Krebs cycle. Later investigation
revealed that the Krebs cycle is also responsible for the way in which fats are
metabolized. The Krebs cycle is the major source of energy production in nearly all living organisms.
The Curies were the first husband-and-wife team to win a Nobel Prize,
and their daughter, Irene Joliot-Curie, was part of the second such husband-
and-wife team. The Coris were the third, receiving their Nobel Prize in
1947. Since then, the environment for female scientists has improved substantially, and many have won Nobel Prizes—but husband-and-wife teams have remained rare; May-Britt and Edvard Moser, who shared the 2014 prize in medicine, are a notable exception. One reason might be that female scientists are much more numerous
and much more a part of the scientific community than they were in the first
half of the twentieth century, and so have a much wider choice of colleagues.
CHAPTER 9
Disease
Every so often, someone writes a book on science for the ages. The Microbe
Hunters, written by Paul de Kruif and published almost one hundred years
ago, is such a book. It describes the efforts of the early bacteriologists, includ-
ing such legends as Louis Pasteur, as they landed the first real blows in the
fight against disease.
Disease is the common enemy of every man, woman, and child—and it
has only been in the last two or three centuries that we have come to recog-
nize that disease is the result of natural causes rather than our having incurred
the displeasure of the gods. For the last couple of centuries, almost every generation has lived longer and enjoyed better health than the one that preceded it, and that is due in large measure to the efforts of the scientific and
medical communities to understand, treat, and prevent disease.
Understanding Disease
It is impossible to imagine the terror that must have accompanied some of
the great plagues of the past, such as the Black Death, which killed between
75 and 200 million people in Europe, Asia, and Africa between 1346 and
1353. In the fourteenth century, people had no idea what caused it, how to
cure it, or how to prevent it. In 1894, Alexandre Yersin and Shibasaburo
Kitasato independently discovered that the disease was caused by bacteria
carried by fleas living on infected rats. There are few cases of bubonic plague
today, but the antibiotic streptomycin, discovered in 1943 by a team of bio-
chemists headed by Selman Waksman, is an effective treatment. Whenever
a case of bubonic plague is detected today, the public is warned to stay away
from a particular area, which generally harbors plague-infected rodents.
The physician John Snow found that areas receiving water from the Southwark and Vauxhall Company, which drew its water from the sewage-contaminated Thames, suffered roughly nine times as many cholera fatalities as areas supplied by the Lambeth Company, which drew its water upstream.
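Snow's comparison amounts to computing and comparing two rates. The house and death counts below are of the same order as Snow's published figures but are quoted from memory and should be treated as illustrative:

    deaths = {"Southwark and Vauxhall": 1263, "Lambeth": 98}
    houses = {"Southwark and Vauxhall": 40000, "Lambeth": 26000}

    # Deaths per 10,000 houses served by each water company.
    rate = {co: 10000 * deaths[co] / houses[co] for co in deaths}
    print(rate)
    print(rate["Southwark and Vauxhall"] / rate["Lambeth"])  # roughly eight- to ninefold

The argument needs no microscope: whatever causes cholera is coming through one company's water.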
The most dramatic single bit of evidence concerned the Broad Street
pump. Snow, who was familiar with the Soho area of London from his own
practice, kept a complete record of the homes of those who perished of chol-
era. He noticed that more than five hundred fatalities occurred within a few
hundred yards of the Broad Street pump. Snow discovered that a sewer pipe
passed within a few feet of the well. After he managed to persuade the par-
ish authorities to remove the pump handle, the fatalities were substantially
reduced.
Although statistics had been invented more than two centuries earlier,
Snow was the first to realize its potential value in medical applications. The
man who was known as the first anesthesiologist was the founder of epide-
miology as well.
Despite the fact that Pasteur had not yet established the germ theory,
nor had Robert Koch shown that specific diseases were caused by specific or-
ganisms, Snow’s experiences led him to believe that cholera was caused by a
specific germ that lived and multiplied in water. With remarkable insight, he
also recommended numerous public health procedures that are still standard
practice: decontaminating soiled articles of clothing, washing hands, and
boiling cooking utensils to sterilize them. Perhaps he was also the founder
of public health.
The great actors or actresses who spend a career in the theater usually follow
a standard pattern. They start life with small parts and work their way up to
supporting roles. Then comes a period of leading roles, followed by an in-
evitable regression into character parts. In a lifetime in the theater of science,
Louis Pasteur never abandoned the leading role.
Pasteur began his career as a chemist. His first notable achievement was
to demonstrate that different forms of crystals of the same compound were capable of rotating polarized light in different directions. His work on this subject
sparked his interest in microorganisms, and the rest of his extraordinary ca-
reer would be devoted to investigating these creatures.
In several instances, his investigations had profound economic effects. He discovered that the souring of both wine and milk was caused by microorganisms, and he showed that gentle heating, the process now known as pasteurization, could prevent it. Pasteur brought the same single-mindedness to his courtship of Marie Laurent, assuring her that beneath an exterior that might "displease you, there beats a heart full of affection for you." Searching for a comparison with which to convince the young lady of the depth of his feelings, he added, "I, who have been so in love with my crystals." This had the desired effect—Mademoiselle Laurent married him, and became not only his lifelong companion but also, when needed, his laboratory assistant, secretary, and coauthor.
When Robert Koch met Emmy Fraatz, he was a romantic, a recently gradu-
ated doctor who wanted to roam the world, perhaps as a military physician
or a ship’s doctor. But Emmy was extremely practical, and since there would
be little chance to raise a family onboard a ship, she persuaded Koch to
enter private practice. They eventually settled in the small German town of
Wollstein. Emmy recognized that Robert, who had graduated from medical
school with highest distinction, felt the need of challenges beyond what a
small-town medical practice would provide. Germany had become a leader in
many industrial areas, including the production of precision scientific equip-
ment. For Robert’s twenty-eighth birthday, Emmy gave him a microscope.
Although nearly two centuries had passed since Antonie van Leeuwenhoek discovered bacteria, medicine had made little progress. True, no repu-
table scientist believed that diseases were due to evil spirits, as was the case
in Leeuwenhoek’s day, but a fierce battle raged over two opposing points of
view: the “miasmatic” theory, which held that disease arose spontaneously in
individuals because of conditions in the external world, and the “parasitic”
theory, in which disease was believed to be caused by microorganisms.
Perhaps because he was a country doctor, the disease that first com-
manded Koch’s attention was anthrax, a disease contracted primarily by
cattle and sheep. Livestock contracting anthrax died quickly and horribly;
their blood turned a ghastly black. Koch examined this blood under a
microscope, and observed that it contained stick-like organisms that often
collected into long thread-like configurations. Others had conjectured that
these organisms were the cause of anthrax, but it was Koch, in a brilliant
series of experiments, who proved it.
Koch observed that these organisms were never found in healthy ani-
mals, but were always found in animals stricken with anthrax. He took blood
from animals with anthrax and injected it into healthy mice. Within days,
the mice contracted anthrax, and their blood contained the stick-like bacilli.
Treating Disease
Curing disease has been one of the enduring concerns of mankind. All of us
have fallen victim to disease at one time or another, and many apparently
disparate cultures have discovered similarly effective ways of mitigating its
effects. The benefits of chicken soup are extolled on every continent. While
chicken soup may help assuage the miseries of the common cold, more seri-
ous diseases require more aggressive treatment.
The Dutch physician Christiaan Eijkman, studying beriberi in the Dutch East Indies in the 1890s, noticed that his laboratory chickens, which had been fed leftover polished rice from the hospital kitchen, developed a beriberi-like paralysis, and then recovered after a new cook, who felt that chickens did not deserve to be fed specially treated rice, resumed feeding them unpolished rice. Eijkman experimented along these lines, and discovered that diet was the difference: chickens fed on polished rice would develop the disease, but the disease could be cured by switching to unpolished rice.
The diseases for which Lind and Eijkman developed cures were defi-
ciency diseases. Lind was the first doctor to develop a cure for a specific
disease, and Eijkman was the first to discover that the absence of a specific dietary ingredient was responsible for a disease. We are reminded daily of their contributions to our health, for the ingredient responsible for curing scurvy was vitamin C, and for curing beriberi, vitamin B1 (thiamine).
A typical multivitamin tablet now includes almost half the alphabet.
Initially these compounds were called “accessory food factors.” However,
when the chemist Casimir Funk investigated the compound responsible for
curing beriberi, he discovered that it was an amine compound. He jumped to
the conclusion that all of the accessory food factors vital to diet were amine
compounds, and suggested the name “vitamine” to describe them. It was
later discovered that not all such compounds were amines, and the terminal
“e” was dropped.
ANESTHESIA
Of all the sciences, the one that is generally the most appreciated is medicine,
because it most intimately touches our lives. There are very few develop-
ments in science that measurably affect the average human life span, but the
discovery of anesthesia unquestionably belongs to this category. Without
it, many of the operations that almost everyone will undergo at some point
would be impossible.
Throughout history, man has sought relief from the tyranny of pain.
Until quite recently, this relief could be found only in the ingestion of large quantities of alcohol or the consumption of substances such as opium. While these diminish pain, they do it
imperfectly and with many side effects. As a result, they are generally unsuit-
able for medical procedures, although many a tooth has been pulled or bullet
removed after the consumption of significant amounts of alcohol.
The first great step toward the development of anesthesia was Sir Humphry Davy's investigation, around 1800, of nitrous oxide, a gas that had been discovered by Joseph Priestley in 1772. Its first "use" was at parties in
nineteenth-century England known as “frolics.” During these social occa-
sions, Davy’s gas was used to befuddle the participants. The gas was not
only intoxicating, it made people hilarious—and soon came to be known as
“laughing gas.” It was noticed that people under the influence of “laughing
gas” were temporarily insensitive to pain.
Americans were also experimenting along the same lines, although with different chemicals. Their compound of choice was ether, one of the oldest manufactured organic chemicals, first prepared centuries earlier by alchemists who heated ethyl alcohol with sulfuric acid. In 1841, Charles Jackson
of Plymouth, Massachusetts, discovered that ether had an anesthetic effect,
although he made no immediate use of it.
Meanwhile, down in Georgia, the equivalent of the English “frolics”
were taking place, using ether instead of nitrous oxide. During one of these
parties, Crawford Long, a surgeon, realized the potential value of ether in
medical procedures, and in 1842 removed a tumor from a patient’s neck
after first anesthetizing the patient. Although he used the procedure several
times over the next few years, he did not publish his results until 1849.
In the meantime, Jackson had become acquainted with William Mor-
ton, a dentist who became interested in Jackson’s observations concerning
ether. In 1846, in consultation with Jackson, Morton administered ether to
a dental patient and then removed a tooth. Soon afterward he administered ether while a surgeon removed a tumor from another patient's neck, and the results were published. Jackson and Morton applied for a patent on the use of ether as an anesthetic, but spent much of the rest of their lives quarrel-
ling over the credit for discovering anesthesia. Crawford Long later felt that
he should be similarly recognized, and this undoubtedly motivated his 1849
publication of his 1842 operations.
Alcohol, nitrous oxide, and ether may have been the first “recreational”
drugs, but the relationship between such drugs and useful medical com-
pounds continues to this day. Opium led to the development of such useful
opiates as morphine, and the novocaine that is often administered in a den-
tist’s office is very closely related to cocaine. Even cannabis is used in some
instances as a treatment for glaucoma.
One of the most exciting feelings a scientist can experience occurs when
he or she gets an idea that no one has had before. Early in his career, Paul
Ehrlich, who was an inveterate reader, had seen an article about lead poison-
ing in dogs, in which the author had determined that different amounts of
lead accumulated in the tissues of different organs. This explained why lead
had a more toxic effect on some organs than others.
This led Ehrlich to develop the revolutionary idea of chemotherapy: that
a disease could be treated by finding a substance that was toxic to specific
disease-causing organisms. In order to begin work, Ehrlich made a hypo-
thetical connection between ideas he had come across in two other articles.
The first article stated that trypanosomes, which were responsible for diseases
such as African sleeping sickness, and spirochetes, which were responsible
for syphilis, were closely related. The second article showed that atoxyl, an
organic arsenic compound, had a toxic effect on trypanosomes.
As a result, Ehrlich formulated the hypothesis that an arsenic compound
could be developed to treat syphilis. This idea, so sensible in retrospect, was
regarded within the medical research community as laughable at best and
dangerous at worst. Ehrlich, however, was undeterred. By 1905, when he be-
gan his search for a “magic bullet” to kill the spirochetes that caused syphilis,
the synthetic chemical industry had advanced to a point where it was possible
to generate numerous variations on the atoxyl theme.
One can only wonder what would have happened if Ehrlich had been
employed, not by a German university at the turn of the twentieth century,
but by a modern pharmaceutical company with its emphasis on the bottom
line. After one year of development, no product had emerged; nor after two years, nor three. More than four years and over six hundred unsuccessful trials passed before Ehrlich's Compound Six-Oh-Six (later renamed Salvarsan), the 606th atoxyl variant to be tested, proved successful.
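The power of Ehrlich's approach lay less in any single compound than in the strategy of systematic variation and testing. Here is a deliberately toy sketch of that strategy, reduced to its logical skeleton; both functions are invented placeholders, with "606" hard-coded to stand in for the laboratory result.

# A toy sketch of systematic drug screening of the kind Ehrlich pioneered:
# synthesize variants one at a time and test each until one works.

import itertools

def synthesize_variant(n: int) -> str:
    # Stand-in for preparing the nth chemical variation on the atoxyl theme.
    return f"atoxyl variant {n}"

def is_effective(n: int) -> bool:
    # Stand-in for testing a compound against the spirochete; in this toy,
    # only the 606th variant "works," echoing Ehrlich's Compound 606.
    return n == 606

for n in itertools.count(start=1):
    compound = synthesize_variant(n)
    if is_effective(n):
        print(f"Success after {n} trials: {compound}")
        break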
Salvarsan proved to be successful beyond even Ehrlich’s expectations. In
many cases, it cured syphilis overnight. Ehrlich, whose modesty was legendary, was once congratulated on his achievement by a colleague. He replied, "For seven years of misfortune I had one moment of good luck."
Ehrlich was perhaps even luckier with Six-Oh-Six than he knew. It was
later discovered that the cultures of syphilis continue to thrive almost nor-
mally on culture plates in laboratory experiments when exposed to concen-
trations of Salvarsan much greater than could possibly be given to patients.
From other evidence, we now also know that the complex arsenic compound
Ehrlich invented does not actually have the “magic bullet” effect for which
he searched. It breaks down in the body to a simpler substance that actually
produces the cure. If these experiments on culture plates had been made by
Ehrlich, the first magic bullet to combat bacterial infections might never
have been found. Perhaps fortune really does favor the bold, even in science.
In 1928, Alexander Fleming noticed that one of his staphylococcus culture dishes had become contaminated: spores of the molds always present in dusty laboratories had gotten onto the bacteria dish and "spoiled" it. He was just about to throw out the contaminated dish when he suddenly changed his mind. He examined the dish more closely and noticed a gaping hole in the middle of the culture. Fleming analyzed it and discovered that the hole was due to accidental contamination of the dish by a fungus, later identified as Penicillium notatum, whose active ingredient was penicillin. Fleming, calm and even-tempered as always,
duly recorded this observation. Despite the chilly reception he had endured
from the London Medical Research Club eight years previously, he felt his
discovery was sufficiently important to report it to them.
Once again, the enormous promise of Fleming’s discovery was obscured
by his inability to convince his audience of the importance of his discovery.
It required the stimulus of World War II, with its urgent need for antibiotics,
to recognize Fleming’s discovery for what it was: the single most powerful
weapon ever developed in the battle against microbial infection, a drug that
would save tens of millions of lives.
The value of penicillin was recognized by the beginning of World War
II, but there were major obstacles to mass-producing it. The British phar-
maceutical industry did not have the time or the resources to pursue the
problem, so the world’s entire supply of penicillin, which consisted of a few
test tubes derived from Fleming’s original discovery, was packed up and
shipped in secret to the United States. It was believed to be perhaps the most
valuable substance on the planet, and every single milligram was jealously
hoarded. When a laboratory technician at Merck, one of the drug companies
investigating ways to mass-produce it, requested a larger sample of penicil-
lin for an experiment, Dr. Max Tishler, the laboratory director, responded,
“Remember, when you are working with those 50 or 100 milligrams, you are
working with a human life.”
Preventing Disease
As of this writing, there is no cure for COVID-19—although there are
developments that show promise. But there is something even better than
being able to cure a disease, and that is being able to prevent a disease—or
to minimize its effects should it occur.
For any given disease germ, some people's immune systems will protect them from contracting it, but the immune systems of many others do not recognize the germ as hostile and thus do not mobilize the body's defenses against it. But one of the great discoveries
in the history of science is that there is a way to get the body’s immune sys-
tem to recognize the germ without incurring the cost of first contracting the
disease—and that’s what vaccines do.
A fierce debate has raged over the years on whether or not to execute one
of the greatest mass murderers of all time. This mass murderer remains
under constant guard in the Centers for Disease Control and Prevention
in Atlanta, Georgia, as well as in a similar laboratory in Russia. The debate
centers around the last remaining specimens on Earth of the smallpox virus,
and the question is whether it would do more good to have it available for
study, or whether it should be destroyed to prevent it from ever again afflict-
ing mankind.
The last recorded case of smallpox was in 1977, and routine vaccination against it has long since been discontinued. Since almost no one alive today has seen a case of smallpox, we must rely on descriptions such as the following by the noted historian Thomas Babington Macaulay:
That disease . . . was the most terrible of all the ministers of death
. . . the smallpox was always present, filling the churchyard with
corpses, tormenting with constant fears all whom it had not yet
stricken, leaving on those whose lives it spared the hideous traces
of its power.
Toward the end of the seventeenth century, the Turks had noticed that those
who survived an attack of smallpox developed immunity to the disease. They
developed a technique of smallpox inoculation, in which a person was given a
(hopefully) mild case of smallpox. Unfortunately, this technique, called vari-
olation, was highly erratic. If the individual being inoculated contracted too
severe a case, he or she could easily be scarred for life, blinded, or even killed.
Dr. Edward Jenner, an English country physician, was familiar with the
technique of smallpox inoculation. While inoculating villagers one day, he
was told not to bother inoculating one of the dairymaids. When Jenner asked
why, the villagers explained to him that she had previously contracted a case
of cowpox. While it was known that people who had contracted cowpox
never got smallpox, it was Jenner who formulated the critical hypothesis: if
one were to give people the mild disease of cowpox via inoculation, rather
than the dangerous disease of smallpox, one could achieve the immunity
against smallpox without risk.
It took Jenner more than twenty years of experimentation to establish the truth of this hypothesis. In the process, he coined the term "vaccine" (from vacca, the Latin word for cow!). Nonetheless, when he presented his findings
to the Royal Society, he admitted that his investigations were incomplete,
stating that he hoped his results would “present to persons well situated for
such discussions, objects for a minute investigation. In the meantime, I shall
myself continue to prosecute this inquiry, encouraged by the hope of its
becoming essentially beneficial to mankind.”
It would be more than a century before the mechanism by which small-
pox vaccination worked would be understood. The immune system stores a
record of previous invasions by foreign bodies, so that future onslaughts may
be quickly repelled. Once the immune system has encountered the cowpox
virus, it is able to recognize and attack the highly similar smallpox virus
before the latter has time to multiply and overwhelm the body’s defenses.
Encouraging the immune system to destroy an invading virus is the basis of
all vaccines, including the vaccines recently developed against COVID-19.
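One can capture the logic of that paragraph, though none of its biology, in a few lines. In this deliberately crude sketch, antigens are modeled as short signature strings, immune memory as a stored set of signatures, and a new invader triggers a rapid response if it is sufficiently similar to anything already in memory; the signatures, the similarity measure, and the threshold are all invented for illustration.

# A toy model of immune memory: store signatures of past exposures,
# and repel new invaders that closely resemble a stored signature.

def similarity(a: str, b: str) -> float:
    # Fraction of aligned positions at which two signatures agree.
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

memory = set()

def vaccinate(signature: str) -> None:
    # Exposure to a harmless relative stores its signature in memory.
    memory.add(signature)

def rapid_response(invader: str, threshold: float = 0.8) -> bool:
    # The invader is repelled quickly if it resembles any stored signature.
    return any(similarity(invader, seen) >= threshold for seen in memory)

vaccinate("GATTACAGGT")              # the "cowpox" signature
print(rapid_response("GATTACAGCT"))  # similar "smallpox" signature: True
print(rapid_response("CCGTTAAGAC"))  # unrelated germ: False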
Many of the great advances in science are the result of acute observation.
Good observers are doubtless made, not born, but there are probably certain
backgrounds that prepare one well for patient observation. Jenner found an
unusual one, as prior to becoming a doctor, he had been an ornithologist.
His chief claim to fame had been the observation that cuckoo chicks, which
hatch from eggs laid in other birds’ nests, physically eject the natural chicks
from their own nest, and are then “adopted” by the birds who raise the
cuckoo chick as their own.
ANTISEPSIS
In the 1840s, the Hungarian physician Ignaz Semmelweis, working in the maternity wards of a Vienna hospital, became interested in childbed fever. He was startled by the fact that women who were
examined by a physician prior to childbirth were much more likely to die of
infection than those who were not examined. When a colleague died of an
infection after being cut by a surgical knife, Semmelweis reached the conclu-
sion that lethal infections were being caused by unsterile instruments and
physicians’ hands. Against fierce opposition, he required doctors in his de-
partment to wash their hands prior to performing surgery. This resulted in a
dramatic reduction in the number of women who died from post-childbirth
infection. Despite these clear lifesaving benefits, Semmelweis was fired from
the hospital, and was persecuted to such an extent that he suffered a nervous
breakdown. In a sadly ironic twist, Semmelweis suffered the same fate as his
colleague. While attending a sick patient, he accidentally wounded himself
and died of childbed fever.
Several years later, the English surgeon Joseph Lister, working in Scotland, became convinced of the validity of Louis Pasteur's theory that microbes caused infections.
The introduction of anesthetic procedures had enabled doctors to perform
more complex surgeries, but the gains made thereby were offset by a stun-
ning increase in gangrene and similar infections. Building on Pasteur’s ideas,
he decided to institute measures designed to kill any germs that might exist
in surgical wounds by treating them with carbolic acid. Although this had
the unpleasant effect of irritating the treated tissues, it greatly reduced post-
surgical infections. The reaction to Lister’s efforts by his surgical colleagues
paralleled the experiences of Semmelweis. The surgeons in Lister’s hospital
rejected his methods and refused to adopt his procedures. He was accused
of stealing the ideas of others, and had to struggle for more than a decade
before his antiseptic procedures finally gained credence. Although he did his
initial work in Scotland, he later moved to London in an attempt to convince
surgeons there of the validity of his techniques. His ideas met with stiff op-
position in England, but they were quickly adopted on the continent.
Lister was not only an excellent surgeon, he was a respected scientist
who stayed on the cutting edge (appropriate for a surgeon) of developments
in his field. He was one of the first scientists to accept Pasteur’s and Koch’s
theories, and did pioneering work in bacteriology.
Even though Semmelweis and Lister were basically attacking the same
problem, Lister had the enormous advantage of timing. The sterilization
techniques introduced by Semmelweis were successful, but at the time there
was no apparent reason why they should have been. Lister’s work occurred at
the time that both Pasteur and Koch were achieving great recognition, and in
the light of these developments it was easy to understand why Lister’s tech-
niques proved so effective. It would be nice to think that the scientific world
is one in which rationality always prevails, but the experiences of Lister dem-
onstrate that just because what you do works, and there is a valid reason that
it works, there is no guarantee that your ideas will be immediately accepted.
The admonition to “wash your hands thoroughly” came once again to
the fore during the COVID-19 pandemic. This antiseptic procedure helps
prevent both bacterial and viral infections by removing or killing the germs
responsible. Germs generally enter the body through openings, such as the eyes, nose, and mouth, that we touch frequently. Even though it has been shown
that COVID-19 is primarily spread through respiratory droplets, it’s still a
good idea to substitute the fist or elbow bump for the common handshake
during the pandemic.
As of this writing, the battle against COVID has been going on for several
years, and progress has been remarkable. Such was not the case in 1949,
when the dreaded disease was not COVID-19 but polio. Like COVID, polio
was caused by a virus. Unlike COVID, whose victims tend to be the elderly,
polio primarily attacked children, and was also known as infantile paralysis.
It was not known precisely which behavior was most likely to result in a per-
son contracting polio, but the disease altered behavior much as COVID has.
During the summer, parents forbade their children from swimming in public
pools, and people slept with their windows closed to prevent the virus from
entering. Leading experts agreed that a cure or a vaccine was decades away.
The experts were wrong. On April 12, 1955, ten years to the day after
the death of President Franklin D. Roosevelt, a well-known polio victim,
it was announced that a vaccine developed by a young medical investiga-
tor named Jonas Salk, from the University of Pittsburgh, had been proven
safe and effective in preventing polio. The efficacy of the Salk vaccine had
been confirmed in one of the largest field trials in medical history. Had Salk
discovered that chicken soup prevented polio, the news would have been no
less welcome, and the road to eventual success would have been substantially
easier, for no expert’s reputation depended upon the denial of the curative
power of chicken soup. Salk, however, decided to try to create a “killed vi-
rus” vaccine. At the time, the prevailing view of vaccine immunity was that
it was necessary to use "live virus" vaccines, of the type pioneered in the nineteenth century by Louis Pasteur. In trying to create a "killed virus" vaccine, Salk was challenging the wisdom of the medical establishment.
Science in the Twenty-First Century
We are only two decades into the twenty-first century, and already we have
seen some amazing discoveries. Gravitational waves, predicted by Einstein's general theory of relativity, have not only been detected but have shed light on what happens when massive objects such as neutron stars collide. The existence of the Higgs boson, the particle that gives mass to the other fundamental particles, was finally confirmed after a half century of searching. And, of course, the rapidity with which the mRNA vaccines were created and produced has helped to mitigate the COVID-19 pandemic, and holds the promise of revolutionizing vaccine creation.
As amazing as these discoveries are, the promise of even more lies ahead.
The Webb telescope may show us the signature of life elsewhere in the Uni-
verse. We may finally discover the nature of the dark matter and dark energy
that appear to comprise most of the matter and energy in the Universe. We
may discover how life evolved on Earth, and there is a good chance that this
century will see the creation of life-forms from scratch, rather than by modi-
fying existing life-forms.
Already we can see three developing trends in science. What has changed
in the twenty-first century is how science is done, and who’s doing it. Sadly,
what has not changed are the negative and erroneous views with which sci-
ence is sometimes regarded.
When we look at the milestones of science, the vast majority to date are the
result of the work of individuals. For most of the seventeenth and eighteenth centuries, scientists worked largely alone, communicating by letter; even for much of the twentieth century, far-flung collaborators could at best confer via telephone. Now their computers enable them to see and actually work
with each other in real time. It’s the Golden Age of scientific communica-
tion and collaboration.
Carl Sagan, an internationally respected scientist, was one of the great popu-
larizers of science in the twentieth century. His book Cosmos was made into
an extremely popular television miniseries, and Sagan himself was a staple
on television talk shows. Another of his books, The Demon-Haunted World:
Science as a Candle in the Dark, is a stark description of the dangers presented
by the antipathy to scientific thought that has existed throughout history.
Sagan would not have been at all surprised to learn that fully 20 percent
of Americans believe that the government is injecting tracking microchips via
the COVID vaccines. Here we are, in the midst of the first global pandemic
in a century. Mass vaccination campaigns eradicated smallpox and have nearly eradicated polio, and much of the population receives periodic vaccinations against diseases such as shingles and flu. Nonetheless, certain parties have found it of greater value
to spread lies about the mRNA vaccines and the motives behind those urging
vaccination than to implore their followers to take advantage of one of the
great achievements of medical science.
Science has faced many uphill battles in the past. As late as 1600, Giordano Bruno was burned at the stake for promulgating the heretical notion that the Earth was not the center of the Universe. Such
a point of view brought science into confrontation with the Roman Catholic
Church, the most powerful organization of the times. Most of the battles sci-
ence now faces are fought on the field of the biological sciences. It’s hard to
imagine a development in physics or chemistry that would engender societal
conflict—and it is hard to believe that the development of the mRNA vac-
cines to ease a global pandemic would have done so. But it did, and unless something is done to counteract the inaccurate information that proliferates via social media, things will probably get worse.
Science is one of the great benefactors of humanity, but there is no question that it can also open Pandora's box. Almost certainly this century will see the creation of life from test-tube ingredients, and given the array of DNA analysis and engineering tools currently in existence, the consequences may be unlike anything we have ever witnessed. Let us hope that we have the wisdom to accompany the knowledge that science has accumulated and will accumulate.
Bibliography
Oberg, Erik. Machinery’s Handbook. 29th edition. New York: Industrial Press, 2012.
Sagan, Carl. Cosmos. New York: Random House, 1980.
Scientists Who Changed History. London: D.K. Publishing, 2019.
Staley, Richard. Einstein's Generation: The Origins of the Relativity Revolution. Chicago: University of Chicago Press, 2008.
Sykes, Christopher (Ed.). No Ordinary Genius: The Illustrated Richard Feynman.
New York: Norton, 1995.
Wali, Kameshwar. Chandra: A Biography of S. Chandrasekhar. Chicago: University
of Chicago Press, 1990.
Watson, James. The Double Helix. New York: Scribner, 1998.
Weast, Robert (Ed.). CRC Handbook of Chemistry and Physics (1981–1982). Boca
Raton, FL: CRC Press, 1981.
Acknowledgments
There are several people that I’d like to thank who helped make this book
possible. First up are my parents, whose obvious contribution was supple-
mented by encouraging my love for science. My parents took me to muse-
ums, gave me presents such as a cherished chemistry set, and checked out
books that they felt might be of interest from the local library. Next is my
wife Linda, who returned from a trip to Taiwan just in time to realize that
what I thought was a cold might be more serious. It was pneumonia, and
without her efforts I might not be here to write this book. The third person
I would like to thank is my editor, Jake Bonar, for realizing that this might
be just the right time for a book like this, and I might be the right person to
write it. Last, I am grateful to Al Posamentier, a fellow mathematics professor
whose connections in the publishing world fortunately included Jake, and
who brought the two of us together.