CNAD 2025 Social Science V1
SCIENCE
THE HISTORY OF COMPUTERS
2024-2025
Table of Contents
Introduction _____________________________________________________________________________ 3
Section I Early Information Processing _______________________________________________________ 4
1.1 Early Information Processing in Great Britain _____________________________________________ 5
1.2 Early Information Processing in the United States _________________________________________ 10
1.3 Early Information Processing in Germany and the Work of Konrad Zuse _______________________ 17
1.4 Early Information Processing in China __________________________________________________ 18
Section I SUMMARY _____________________________________________________________________ 20
Section II General-Purpose Electronic Computers _____________________________________________ 23
2.1 THE ENIAC (Electronic Numerical Integrator and Computer) ________________________________ 24
2.2 Progress in England________________________________________________________________ 27
2.3 The Completion of the EDVAC _______________________________________________________ 28
2.4 The Eckert-Mauchly Computer Corporation (EMCC) ______________________________________ 29
2.5 The Growth of IBM _________________________________________________________________ 32
2.6 Other Players in the Mid-Twentieth-Century Computer Industry ______________________________ 32
2.7 Advances in Hardware ______________________________________________________________ 32
2.8 Software _________________________________________________________________________ 38
2.9 IBM SYSTEM/360 _________________________________________________________________ 43
2.10 The ENIAC Patent Case ___________________________________________________________ 47
2.11 Progress in China ________________________________________________________________ 48
Section II SUMMARY _____________________________________________________________________ 50
Section III Toward “Personal” Computing ___________________________________________________ 54
3.1 Project Whirlwind __________________________________________________________________ 55
3.2 SAGE ___________________________________________________________________________ 55
3.3 SABRE __________________________________________________________________________ 56
3.4 Timesharing ______________________________________________________________________ 56
3.5 DEC and the Rise of Minicomputers ____________________________________________________ 57
3.6 Networking _______________________________________________________________________ 60
3.7 XEROX PARC ____________________________________________________________________ 65
3.8 The Microprocessor ________________________________________________________________ 69
3.9 Personal Computers _______________________________________________________________ 70
3.10 Video Games ____________________________________________________________________ 73
3.11 VISICALC_______________________________________________________________________ 74
3.12 The IBM PC _____________________________________________________________________ 75
3.13 The Apple Macintosh ______________________________________________________________ 76
3.14 PC CLONES ____________________________________________________________________ 78
3.15 The Graphical User Interface goes Mainstream _________________________________________ 78
Section III SUMMARY ____________________________________________________________________ 80
Section IV The Internet, Social Media, and Mobile Computing ___________________________________ 84
4.1 The GNU Project and the Open-source Movement ________________________________________ 85
4.2 HYPERTEXT _____________________________________________________________________ 88
4.3 Browser Wars ____________________________________________________________________ 91
4.4 Search Engines ___________________________________________________________________ 94
4.5 The Dot-com Bubble _______________________________________________________________ 95
4.6 JAVA ___________________________________________________________________________ 96
4.7 NeXT ___________________________________________________________________________ 97
4.8 The iMAC ________________________________________________________________________ 97
4.9 Microsoft’s Gradual Decline ________________________________________________________ 98
4.10 Mobile Computing ________________________________________________________________ 99
4.11 Smartphones ___________________________________________________________________ 101
4.12 Web 2.0 _________________________________________________________________________ 103
4.13 Tablets ________________________________________________________________________ 105
4.14 Cloud Computing ________________________________________________________________ 106
4.15 Blockchain _____________________________________________________________________ 107
4.16 Artificial Intelligence ______________________________________________________________ 108
4.17 Quantum Computing _____________________________________________________________ 109
Section IV SUMMARY ___________________________________________________________________ 110
Conclusion ____________________________________________________________________________ 113
Notes ________________________________________________________________________________ 114
00
Introduction
Our use of the word “computer” to describe an electronic machine that stores and
retrieves information and runs programs is only a few decades old. Prior to World War
II, the word “computer” meant a person who performed mathematical calculations by
hand, using pencil and paper. How did the meaning of this word change so
dramatically?
The first electronic computers were invented to ease the work of human computers.
During World War II, the U.S. military was working on new weapons that required a great
many calculations—more than even their large teams of highly skilled human
computers could ever hope to complete on time. The first electronic computer—in the
modern sense of the word—was the ENIAC, constructed at the University of
Pennsylvania and unveiled in 1946. It was a giant machine that weighed thirty tons and
filled an entire room.
The ENIAC could complete in minutes calculations that required days of effort from
human computers, and before long, the word “computer” came to refer
to machines rather than people. Over the next few decades, technological
improvements would result in computers becoming simultaneously smaller and faster:
from mainframes that filled rooms to minicomputers the size of a refrigerator to personal
computers that sat on a desk or a lap. And the miniaturization continues unabated to
this day.
Although we don’t generally call them “computers,” our smartphones and tablets
are the direct descendants of these twentieth-century innovations. Today’s handheld
devices are running software first created in the 1970s! Yet even the first electronic
computers of the 1940s can trace their origins back to the Industrial Revolution in
Europe and the United States. With the pace of production increasing, the need for more
efficient mechanisms for information processing grew as well.
As we will soon see, the changes were incremental, beginning with better ways of
organizing and moving paper around. Eventually, mechanical typewriters and adding
machines began to replace some of the more tedious office labor. These innovations
paved the way for electromechanical and, over time, fully electronic business machines.
So, how did these tools for office automation eventually lead to today’s laptops and
smartphones? Come and see. Strap in, and let’s take a ride along the sometimes bumpy
road of the history of computing!
NOTE TO STUDENTS: You will notice that some terms throughout the resource
guide have been boldfaced. These terms are included in the glossary at the end of each
section.
01
Section I
Early Information Processing
An engraving of Adam Smith by John Kay, 1790. Smith discussed how worker
specialization increased efficiency in his famous book The Wealth of Nations.
1.1 Early Information Processing
in Great Britain
It is said that necessity is the mother of invention. This seems to have been true
with regard to the invention of the modern computer. Prior to the eighteenth century,
Great Britain was primarily an agricultural society with little need for automation.
However, with the advent of the Industrial Revolution and increased population growth,
workers out of necessity had to become more efficient and specialized. Adam Smith
captured this idea in his 1776 book The Wealth of Nations, the most well-known book
on economics of the era. In it, Smith gave the example of manufacturing pins. A
single master craftsman could produce pins working alone, but progress would be slow.
However, a team of less-skilled workers could produce more pins more quickly by
specialization. One worker could cut wire into short lengths; one could straighten them;
another could sharpen the ends. Thus, by dividing the manufacturing process into a
series of small repetitive steps, more could be produced with less skilled—and therefore
less expensive—labor.
Charles Babbage saw a similar need in his own government. At the time, Great Britain had
the largest navy in the world, and sea captains relied on books of navigational tables to
navigate the world’s waterways. A navigational table shows the positions of the moon,
sun, and stars on any given day. Babbage believed that the creation of these
navigational tables could be automated, just as the French mathematician Gaspard de Prony had done with his logarithmic
tables. However, Babbage wanted to take this idea one step further. Instead of using
human labor to perform the calculations, Babbage proposed using a machine—with
gears and motors—to do the math. Moreover, he envisioned that his machine would
include a type-setting mechanism to prepare the numbers for printing, thus avoiding
human error in transcribing the results. Babbage believed his machine could produce
more accurate calculations faster than human labor.
Babbage wrote a proposal to the British government, requesting funds to build his
machine, which he called the Difference Engine. The government agreed to fund his
project. In 1833, Babbage produced a small-scale prototype of the Difference Engine.
It was not fully functional and could not print out its results, but it was an impressive
proof of concept.
Babbage’s Analytical Engine
By this time, Babbage’s inventive mind was already thinking about the Difference
Engine’s successor, which he called the Analytical Engine. Whereas the Difference
Engine was specialized to perform just one kind of calculation, Babbage envisioned the
Analytical Engine as a general-purpose calculating machine that could be programmed
to perform any mathematical operation. However, Babbage did not get more funding
and never completed either of his machines.
Necessity also drove the British railways to become more efficient and to take steps toward automation.
To prevent collisions of trains traveling in opposite directions on the same track, the
British railway companies devised a way to let railroad operators in different stations
communicate with each other. In the 1860s, the various railway companies ran electric
cables along their railroad tracks. It did not take long for the railway companies to realize
that these cables could also be used for another purpose: sending and receiving
telegrams. The telegram was an early forerunner to today’s email, fax, or text messaging
systems. A person wishing to send a telegram would visit an office with a telegraphy
machine and dictate a message. The telegraph operator would transmit the message,
using electric pulses, to the telegraph office in a neighboring city. The message would
arrive at the other office within seconds. The receiving operator would decode the
message, print it on paper, and dispatch a messenger to carry the paper to the recipient.
From start to finish, telegram messages could be sent and received within just a
few hours. Railroad companies charged by the word. These telegraph lines were the
technological ancestor to modern telephone and computer networks.
1.1.5 Turing’s Work on Codebreaking
During World War II, Turing turned his attention to codebreaking. He and a team of
others at Bletchley Park, England, developed a series of electromechanical machines
called bombes to assist in automatically decrypting intercepted German messages.
While not qualifying as general-purpose computers, the bombes provided
Turing and his associates with valuable experience. After the war, Turing was involved
in designing an early electronic computer called the ACE.
Today, Turing is best known for his contributions to theoretical computer science;
few of Turing’s engineering ideas have made their way into the actual design of today’s
computers. Instead, modern computers trace their lineage to early calculating machines
created in the United States.
1.2 Early Information Processing
in the United States
America was slower to industrialize than Great Britain. It was not until after the Civil
War that large-scale manufacturing—with its associated need for information
processing—became widespread in the United States.
The earliest example of a large-scale data processing project in the United States
was the census. First performed in 1790 and every ten years since then, the census
was originally tabulated entirely with human labor.
However, after the 1880 census, which took seven years to process, the need for a more
efficient system was clear. Robert Porter was appointed director of the 1890 census. He
accepted a proposal by Herman Hollerith to record demographic information on small
sheets of card stock. Hollerith was inspired by the use of sheets of perforated paper to
control the organette, a novelty musical instrument popular at the time. With an
organette, musical notes were encoded as small holes spaced strategically along a long
strip of paper and fed into the instrument. As air from a hand pump passed through the
holes, the organette would play notes based on the position and size of the holes. In like
manner, Hollerith proposed encoding information about individuals as small holes in
paper cards. The positions of the holes in the cards indicated a person’s age, gender,
occupation, etc.
Portrait of Herman Hollerith, c. 1888.
A replica of Hollerith’s tabulating machine with a sorting box, c. 1890. By Adam Schuster - CC BY 2.0,
https://commons.wikimedia.org/w/index.php?curid=13310425
The benefit of Hollerith’s system was that once the holes were punched, the cards
could be sorted and counted automatically. To tabulate the results, a worker placed a
card into a card-reading machine designed by Hollerith. By using Hollerith’s punched-
card system, the 1890 census was tabulated in only two and a half years, compared to
seven years for the 1880 census. While they were not “computers” in the strict sense of
the word, Hollerith’s punched card machines paved the way for automated data
processing in the United States.
Although the 1890 census was surely the largest example of office automation at
the time, labor-saving devices—such as typewriters, filing cabinets, and adding
machines—were being embraced by small offices across the United States. It may seem
strange to recount the history of office equipment in a resource guide about the history
of computers. Recall, however, the tasks for which computers were introduced into
offices: word processing, information storage and retrieval, and financial analysis. These
are the same purposes for which typewriters and filing cabinets and adding machines
were designed. Indeed, the computer is the literal descendant of, and replacement for,
these early office automation devices.
Prototype of the Sholes and Glidden typewriter, the first
typewriter with a QWERTY keyboard (1873).
Filing Cabinets
In 1927 the Remington Typewriter Company merged with the Rand Kardex
Company; the new organization was known as Remington Rand. Prior to the merger,
Rand Kardex was a well-established producer of document storage and retrieval
systems; namely, filing cabinets and vertically hanging folders. Prior to the advent of
vertical filing systems, organizations would typically store documents in boxes or bind
them in books. This made storage and retrieval of specific documents slow and
cumbersome. Filing cabinets were not only faster and easier to use; they also took up
less space and could be expanded indefinitely. After the merger with Remington, the
new company was by far the largest producer of tools for both the creation and storage
of documents in the United States.
Adding Machines
Although mechanical calculating machines had been around since the 1600s, they
were expensive and never produced in large quantities. However, in 1887, Dorr E. Felt,
an inventor from Chicago, introduced a mechanical calculator called the Comptometer.
Around the same time, William S. Burroughs also invented a mechanical calculator. It
was similar in operation to the Comptometer, but it could print out the results onto a roll
of paper.
These devices were known as adding machines because they were designed
primarily for addition. Other operations—subtraction, multiplication, and division—were
also possible but not as convenient. Over the next few decades, both Felt’s
Comptometer and Burroughs’ adding machines proved very successful, with thousands
of devices being sold every year.
could then be compared against the amount of money received at the end of the
workday.
One of Patterson’s most able salesmen, Thomas J. Watson, Sr., was rising in the
ranks of NCR when in 1911 Patterson abruptly fired him on a whim. Watson became
president of the Computing-Tabulating-Recording (C-T-R) company in 1914. C-T-R was
the descendant of Hollerith’s original punched-card machine company. Watson
immediately implemented at C-T-R the aggressive sales techniques he had learned at
NCR, and he also invested in research to improve the company’s tabulating machines.
In 1924, under Watson’s leadership, C-T-R changed its name to International Business
Machines or IBM. Over the next ten years, IBM came to dominate the office-machine
industry.
Over the next few decades, as electricity became more commonplace in the United
States, electric office equipment came to supplant its mechanical counterparts. For
example, an electric typewriter used a motor, rather than the force of the user’s keypress,
to strike the typebar against the paper. Similarly, an electric adding machine used
electricity rather than manual power to move its internal gears. However, while the
addition of electricity made these items easier to use, it did not fundamentally change
their mode of operation.
IBM President Thomas J. Watson, Sr., c. 1920.
By IBM, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=11940847
As we will see later in this resource guide, electricity would play an even greater
role in the design of computing machines only a few decades later. Companies like IBM,
Remington Rand, Burroughs, and NCR would all be key players in the mid-twentieth-
century computer industry.
A 1997 replica of the Atanasoff–Berry Computer at the Durham Center, Iowa State
University. By User:Manop - CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=887137
We close our discussion of early computing in the United States with one more
machine. About the same time that Howard Aiken and IBM were working on the Mark I,
a professor at Iowa State University named John Atanasoff and a graduate student
named Clifford Berry were working on a computer of their own. Known today as the
Atanasoff-Berry Computer (ABC), it was created specifically to solve systems of linear
equations, rather than to perform general-purpose calculations. In contrast to the Mark I, the ABC
was created with a very small budget and little fanfare.
After 1942, its creators abandoned the ABC and moved on to other projects. In fact,
the ABC was virtually unknown for decades after its creation.
As impressive as the Differential Analyzer, the Mark I, and the ABC were, they
cannot be considered general-purpose computing devices in the same way that modern
computers are. First of all, with the exception of the ABC, these devices were primarily
mechanical, rather than electronic. Second, these machines lacked the ability to perform
conditional branching—what programmers now refer to as “if/then” statements—the
ability to make a decision and then perform different tasks based on that decision.
Instead, they simply chugged through a sequence of instructions from start to finish. The
impetus to create the first electronic, general-purpose programmable computers would
not come until World War II, as we shall see later on.
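The conditional branching described above is easy to see in a modern programming language. The short sketch below is purely a present-day illustration (written in Python, a language that did not exist in that era; the values are invented for the example):

# A decision that the early mechanical calculators could not make on their own:
# examine a computed value, then follow one of two different paths.
error = 0.002
tolerance = 0.01
if error < tolerance:          # the "if/then" conditional branch
    print("Result accepted")
else:
    print("Recompute with more precision")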
1.3 Early Information Processing in
Germany and the Work of Konrad Zuse
As it turned out, however, Zuse had little influence on the design of later computers.
By the time his work became known to the rest of the world, the basic design for
electronic computers had already been set, and it was based on the work of American
pioneers such as J. Presper Eckert and John Mauchly.
1.4 Early Information Processing in China
China is an ancient country with a long history and civilization, and for centuries it was
at the forefront of the world in calculation methods and tools. The decimal counting
method devised during the Shang dynasty was more than a thousand years ahead of the
rest of the world; counting rods were invented during the Zhou dynasty; and China was
among the earliest countries to develop the abacus, using memorized mnemonic rhymes,
a kind of “software,” as the control instructions for moving the abacus beads during
calculation.
The Chinese adopted a decimal (base-ten) numeral system as early as the
fourteenth century BC, during the Shang dynasty (c. 1600–1046 BC). This is evidenced
by the earliest literary inscriptions, called jiaguwen (“oracle bone script”), which were
etched on durable media such as tortoise plastrons and the shoulder bones of cattle.
Many of the jiaguwen documents recorded special events and oracle queries, as well as
astronomical observations. Fortunately, the Shang scribes left behind records that
included numerals.
Jiaguwen numerals from the Shang dynasty (c. 1600 - 1046 BC)
For much of Chinese history, especially from the Han to the Yuan dynasties (206
BC – 1368 AD), counting rods played an important role in the development of Chinese
mathematics. The counting rods were arranged on a flat surface such as a table or the
ground; more elaborate arrangements used counting boards with marked square
cells to demarcate rows and columns.
Chinese Counting board with counting rods
Chinese abacus
The Chinese abacus, or suan pan ('computation tray'), is a calculation device that
consists of a rectangular frame lined with eight or more bamboo rods and sliding beads
suspended on each rod. A wooden divider separates the abacus into two tiers. Its
characteristic design, with two beads on each rod in the top tier and five beads on each
rod in the bottom tier, has become the de facto icon of the abacus.
Section I Summary
Section I Glossary
1. Industrial Revolution—a period of global transition of the human economy towards more
widespread, efficient, and stable manufacturing processes that succeeded the Agricultural Revolution.
Beginning in Great Britain, the Industrial Revolution spread to continental Europe and the United
States during the period from around 1760 to about 1820–1840
2. gear—a toothed wheel in a machine that meshes with another toothed part in order to transmit
motion or power
3. prototype —the first form that a new design of a car, machine, etc., has, or a model of it used to test the
design before it is produced
4. algorithm —a precise, step-by-step list of instructions for accomplishing a given task; algorithms are
key to computer science: a program is simply an algorithm expressed in computer code
5. Entscheidungsproblem—In mathematics and computer science, the
Entscheidungsproblem is a challenge posed by David Hilbert and Wilhelm Ackermann in 1928. The
problem asks for an algorithm that considers, as input, a statement and answers "yes" or "no" according
to whether the statement is universally valid, i.e., valid in every structure
6. equivalent—having the same value, purpose, job, etc., as a person or thing of a different kind
7. Turing Machine—a theoretical model of computing described by English mathematician Alan Turing
in 1936
8. decrypt—to change a message or information on a computer back into a form that can be read when
someone has sent it to you in a type of code
9. census—an official process of counting a country’s population
and finding out about the people
10. tabulate—to arrange figures or information together in a set or a list so that they can be
easily compared
11. perforated paper— a craft material of lightweight card with regularly spaced holes in imitation
of embroidery canvas. It is also sometimes referred to as punched paper
12. adding machine —an early form of mechanical calculator, mass-produced in the early twentieth
century and sold as office equipment
13. longhand—if you write something in longhand, you write it by hand using complete words rather
than typing it or using special short
forms of words
14. supplant—to take the place of a person or thing so that they are no longer used, no longer in a position
of power, etc.
15. Differential Analyzer— an electromechanical computer designed by Vannevar Bush; several Differential
Analyzers were constructed; one was used by the human computers at Aberdeen Proving Ground to
calculate ballistic firing tables prior to the construction of the ENIAC
16. contraption—a piece of equipment or machinery that looks funny or strange and seems unlikely
to work well
17. shaft—a long, thin piece of metal in an engine or machine that turns and passes on power
or movement to another part of the machine
18. decimal—a fraction (=a number less than 1) that is shown as a full stop followed by the number
of tenths, hundredths, etc. The numbers 0.5, 0.175, and 0.661 are decimals
19. arithmetic—the science of numbers involving adding, multiplying,
etc.
20. binary—relating to the system of numbers used in computers, which uses only 0 and 1
21. abacus—a frame with small balls that can be slid along on thick
wires, used for counting and calculating
22. mnemonic—something such as a poem or a sentence that you use to help you remember a rule, a
name, etc.
02
Section II
General-Purpose Electronic
Computers
2.1 THE ENIAC (Electronic Numerical
Integrator and Computer)
In 1943, the United States Army had a problem with artillery guns. It was not that the
Army had too few guns or not enough ammunition; the problem was a lack of firing
tables for its guns. In order to fire an artillery gun correctly, a soldier had to take many
variables into account, such as the distance from the target, wind speed and direction,
humidity, elevation, and temperature. Thus, before firing an artillery gun, a soldier
needed to open up a booklet containing “firing tables.”
A firing table tells the gun operator at which angle to point the gun based on all these
variables. To make it even more complicated, each different type of artillery gun—indeed,
each different type of shell—required separate firing tables. Each table could have
hundreds of rows of numbers. These tables were created by “computers.” As we
mentioned earlier, back then, the word computer meant a person (typically a woman)
who performed calculations, either by hand or by using an adding machine.
In response, in early 1943, the army assigned Dr. Herman Goldstine to supervise the
computer team in Pennsylvania, with orders to compute the firing tables more quickly
by any means possible. Goldstine had been teaching mathematics at the University of
Michigan before being drafted into the army in 1942. He encouraged the computer team
to use the Differential Analyzer as much as possible, but it was a temperamental device
that broke down too often to be of much help. John Mauchly was a newly hired instructor
at the University of Pennsylvania. For years, he had been fascinated by the idea of
building a machine that could perform calculations using only electrical circuits, with no
moving parts. In August 1942, Mauchly submitted a proposal to the administration of the
University of Pennsylvania, requesting funding to build a high-speed electronic
calculator. His proposal was ignored by the university administration, who thought his
ideas too outlandish to consider. When Goldstine eventually got word that Mauchly had
an idea for constructing a fully electronic calculator, he was intrigued. He decided to
present Mauchly’s proposal to the army for funding. Mauchly was joined by J. Presper
Eckert, a graduate student at the University of Pennsylvania. Eckert was a brilliant
engineer who shared Mauchly’s vision for electronic computing.
When Goldstine, Mauchly, and Eckert presented their proposal to the directors of
Aberdeen Proving Grounds on April 9, 1943, they did not know what to expect. Mauchly
knew the only reason his proposal was getting a second chance was due to the
exigencies of the war. To their relief, the leaders granted the team money to build their
electronic computer. Mauchly and Eckert’s proposed machine became the Electronic
Numerical Integrator and Computer, the ENIAC.
Of the hundreds of human computers working on firing tables for the army, six
women were selected to work as the original programmers (called “operators” at the
time) of the ENIAC. Interestingly, despite the unfortunate stereotype of computer
programming being a predominantly male profession, the first computer programmers
in the world were women. Their work was crucial to the success of the ENIAC.
In spite of its power and speed, the ENIAC was, by its creators’ own admission, more
complex than necessary. Therefore, before they had even finished building the ENIAC,
Eckert and Mauchly were already thinking of the design for their next computer.
Surprisingly, this computer was to be heavily influenced by someone outside the core
ENIAC team: John von Neumann.
John von Neumann was a famous mathematician who helped design the atomic
bomb. Von Neumann’s influence was especially felt in the design of the team’s second computer,
the Electronic Discrete Variable Automatic Computer, or EDVAC.
By the fall of 1945, the ENIAC was completed. It was an enormous machine weighing
thirty tons and occupying 1,800 square feet. It consisted of forty wooden cabinets, each
eight feet tall, connected together with thick black cables. The cabinets were arranged in a
U-shaped pattern, lining the walls of the room. Ironically, World War II had ended by the
time the ENIAC was completed, so the task for which it was built, calculating firing
tables, had become less urgent. In February 1946, the army and the University of
Pennsylvania jointly hosted a formal unveiling ceremony and press conference. This is
when the ENIAC finally became public knowledge. Newspapers were filled with glowing
stories about the ENIAC—the computer age had officially begun.
Not long after the public unveiling, the ENIAC was dismantled from the room where
it had been built at the University of Pennsylvania and moved to Aberdeen Proving
Grounds, where it remained in continuous operation until 1955, when it was finally
decommissioned.
In part to respond to the flood of inquiries about the ENIAC, in July and August 1946,
the University of Pennsylvania sponsored a summer school called the “Moore School
Lectures” to teach the principles of electronic computing to interested students. Many
representatives from universities, industry, and government attended these lectures.
As for von Neumann, after finishing his work with the ENIAC team, he returned to
the Institute for Advanced Study (IAS) in Princeton, New Jersey, where he started his
computer research group. Over the next few years, von Neumann’s team built a working
stored-program computer (known simply as the “IAS Computer,” completed in 1951)
and published a series of reports on computer design. These reports, like von
Neumann’s First Draft document, were widely disseminated.
Between the First Draft, the Moore School Lectures, and the IAS reports, the stored-
program concept was becoming widely known among academics. Since von Neumann
was listed as the sole author of the First Draft, many people believed that the ideas for
electronic computing originated with him. To this day, the basic design for all modern
computers is known as (for better or worse) the von Neumann architecture. Soon,
other organizations tried building their own EDVAC-like computers.
Design of the von Neumann architecture (1947). By Chris-martin, CC BY-SA 3.0,
https://commons.wikimedia.org/w/index. php?curid=1393357
2.2 Progress in England
Although the ENIAC was invented in the United States, the first two “von Neumann
architecture” computers were built in England. Due in part to the codebreaking efforts
during the war, England already had a pool of engineers with experience building
electronic devices. Among them was Max Newman. A professor of mathematics at
Manchester University, Newman sought to construct an EDVAC-like computer. He
recruited another engineer, Frederic Williams, to assist him. The computer they built is
known today as the “Manchester Baby” because of its limited functionality. On June 21,
1948, the “Baby” computer ran its first successful program, making it the world’s first
stored-program computer. Its designers soon went on to construct a larger, more
powerful computer, the Manchester Mark 1.
The Manchester Mark 1 was one of the world’s first stored-program computers.
Copyright The University of Manchester 1998, 1999
2.4 The Eckert-Mauchly Computer
Corporation (EMCC)
IBM, eager to get a foothold in the emergent computer industry, offered jobs to both
Eckert and Mauchly. Although tempted by the offer, they eventually decided to start their
own business instead. They launched the Electronic Control Company, which was
renamed the Eckert-Mauchly Computer Corporation (EMCC) one year later. Eckert and
Mauchly also transferred the rights of their ENIAC patent to the new company. They
intended for their first product to be a machine they called the UNIVAC: the Universal
Automatic Computer.
Eckert and Mauchly hoped to raise funds to develop the UNIVAC by selling advance
orders for the machine. Unfortunately, Eckert and Mauchly were astute inventors but
naive businessmen. One major mistake they made was to sell their computers with
“fixed-price” contracts rather than “cost-plus-developmental” contracts. With a fixed-
price contract, the buyer pays for a product at a predetermined price, regardless of how
much it costs the seller to develop the product. In a cost-plus-developmental contract,
the buyer pays the seller for its development expenses plus an additional pre-negotiated
amount for profit.
A fixed-price contract is fine for established industries where the costs to develop a
product are well known. A cost-plus contract is more appropriate for nascent industries
where a brand-new product is still being developed. Unfortunately for Eckert and
Mauchly, EMCC negotiated a fixed-price contract for $270,000 with the U.S. Bureau of
Standards (on behalf of the Census Bureau) for the first UNIVAC machine. By the time
it was completed, however, EMCC had spent $900,000 developing it.
During the development of the first UNIVAC, EMCC realized it was running out of
money fast. Scrambling for additional funding, Eckert and Mauchly agreed to build a
small computer called the BINAC (Binary Automatic Computer) for Northrup Aircraft
Company. They optimistically hoped that they could develop both the BINAC and the
UNIVAC computers simultaneously without falling too far behind schedule. In their
desperation, Eckert and Mauchly negotiated another woefully inadequate fixed-price
contract for the BINAC, with a price tag of $100,000. In the end, EMCC spent $280,000
to develop the BINAC.
The BINAC was a functioning computer when EMCC completed it in 1949; it was
taken apart before being shipped to Northrup, whose engineers then put it back together
on-site. Tragically, the BINAC did not work properly for Northrup. The exact reason is
unknown: it may have been damaged in transit, or perhaps Northrup re-assembled it
incorrectly. Each company blamed the other. In any case, Northrup stowed away the
BINAC in a warehouse and never put it into production.
Still short on cash, in 1950, Eckert and Mauchly visited IBM, offering IBM a majority
stake in EMCC. IBM, which by this time had already started developing its own line of
computers, turned them down. Not long afterward, James Rand Jr., the president of the
typewriter company Remington Rand, met with Eckert and Mauchly. Under Rand’s
leadership, Remington Rand had acquired a number of other companies and by this
time offered a full line of office equipment—including punched-card tabulating machines
that competed with IBM’s. Eager to expand Remington Rand into the new field of
electronic computing, Rand Jr. offered to pay off EMCC’s debts and buy the company
outright. Eckert and Mauchly accepted the offer, and EMCC thus became a division of
Remington Rand rather than an independent company.
An amusing quote attributed to Mark Twain seems applicable here: “Very few things
happen at the right time, and the rest do not happen at all. The conscientious historian
will correct these defects.” The first stored-program computer should have been the
EDVAC; after all, it was the lineal successor to the original ENIAC. Instead, the
Manchester Baby and the EDSAC, built on the other side of the world, staked that claim.
Likewise, the UNIVAC should have gone down in history as America’s first commercially
produced computer. Instead, the small, inoperative BINAC has that distinction. Finally,
Eckert and Mauchly, pioneers of the computer age, should have gained fame and
prosperity from their invention rather than being saved from bankruptcy by an office
equipment company. Unfortunately, unlike Mark Twain’s “conscientious historian,” we
are obliged to tell the story as it really happened, not as we would have liked it to happen.
Finally, in March 1951, the first UNIVAC was completed. Perhaps to avoid the same
difficulties that plagued the BINAC, the Census Bureau opted to keep its UNIVAC on-
site at EMCC rather than have it shipped to the bureau’s own offices. The UNIVAC was superior to
the ENIAC in every conceivable way: it used high-speed magnetic tape for input and
output, not just punched paper cards. It also used only five thousand vacuum tubes
compared to ENIAC’s 18,000.
On election night in November 1952, Remington Rand pulled off a very successful
publicity stunt. The company persuaded the television network CBS to use a UNIVAC
computer to predict the results of the presidential election on live television. Legendary
anchorman Walter Cronkite hosted the event. Taking early results from a handful of
districts in just eight states, Remington Rand programmers loaded the data into the
UNIVAC and tabulated the results. At 8:30 pm that evening, the computer predicted 438
electoral votes for Dwight D. Eisenhower and only ninety-three for Adlai Stevenson—a
landslide victory.
This result surprised both the staff of Remington Rand and CBS because a public
opinion poll conducted just the previous day had predicted a much closer race. To avoid
embarrassment, both Remington Rand and CBS decided not to announce the
UNIVAC’s original prediction on live television and instead tweaked the program’s
parameters to produce more “realistic” results. However, as the night wore on and more
results came in, it became obvious that Eisenhower was, in fact, going to receive far
more electoral votes than Stevenson. The final outcome of the election was 442 to
eighty-nine—very close to UNIVAC’s original prediction. A spokesman for Remington
Rand went on the air and confessed their ruse. This broadcast was excellent publicity
for the computer industry in general, and for Remington Rand specifically—the company
received many more orders for UNIVAC computers after this. The name “UNIVAC” was
even becoming synonymous with the word “computer” in many Americans’ minds.
2.5 The Growth of IBM
Even before UNIVAC’s election night coup, IBM’s senior management realized that the future of
data processing was in computers. IBM had been discreetly developing electronic computers for a
few years; it took three research projects it had been working on internally and repurposed them
as commercial products: the 701, the 702, and the 650. Due primarily to the success of the 650, by
1955, IBM had surpassed UNIVAC in installations.
2.6 Other Players in the Mid-Twentieth-Century Computer Industry
Although IBM and UNIVAC were the two largest computer companies, they were hardly the only
players in the industry. At the end of the 1950s, there were only eight main players in the computer
industry: IBM, Sperry Rand, Burroughs, NCR, RCA, Honeywell, General Electric, and CDC. By 1965,
IBM had 65 percent of the market share, and the other seven companies had the rest. Due to the
size of IBM relative to its competitors, journalists of that era began using the term “IBM and the Seven
Dwarfs” in reference to the computer industry.
2.7 Advances in Hardware
Let us briefly pause our discussion of the various computer corporations to look at
some technological advances that made computers faster, smaller, and more reliable in
the mid-twentieth century.
Vacuum Tubes
In today’s terminology, the Central Processing Unit (CPU) is the “brains” of the
computer. It is where the instructions are executed and where the math is done. As
mentioned previously, the ENIAC and other early computers used vacuum tubes.
Invented in 1904, the vacuum tube resembles an incandescent lightbulb in appearance
and can regulate and amplify the flow of electricity in a circuit. The vacuum tube, then,
serves as a sort of on/off switch. Because it has no moving parts, it operates much faster
than the telephone relays used in Konrad Zuse’s early computers. Vacuum tubes’ ability
to turn on and off rapidly is what led Eckert and Mauchly to use them in the ENIAC—the
rapid electrical pulses were used to simulate “counting” and thus perform arithmetic.
Transistors
Of course, vacuum tubes have plenty of shortcomings: they require a great deal of
electricity and generate a lot of heat. Plus, they have a short lifespan. Thus, in the late 1940s,
researchers at Bell Labs (then a subsidiary of AT&T) invented the transistor.
Transistors perform the same function as vacuum tubes—regulating and amplifying
electric current—but are much smaller, more durable, and require far less power.
Photo of (from left) John Bardeen, William Shockley, and Walter Brattain at Bell Labs, the
inventors of the transistor, 1948.
Microchips
The next innovation was the integrated circuit (IC), otherwise known as the
microchip. Invented independently by Robert Noyce and Jack Kilby in the late 1950s,
integrated circuits became commercially viable by the mid-1960s. An integrated circuit
essentially combines multiple transistors together into one block of silicon. As a result,
an integrated circuit is more compact than individual transistors. During the 1960s,
transistors and microchips coexisted: some computer models used transistors, and
some used microchips. By the 1970s, though, integrated circuits had essentially
displaced transistors in all new computer models.
Jack Kilby (left) and Robert Noyce independently invented the microchip in the late 1950s.
2.7.2 Memory
Delay Lines
As the ENIAC team was planning their follow-up EDVAC computer, they decided to
use a better memory technology called a delay line. A delay line is a metal tube containing a thin column
of liquid mercury. An electrical pulse applied to one end of the tube causes
vibrations in the mercury. The signal is read at the far end of the tube and fed back
into the near end. By continuously reading and rewriting the signals in the mercury column, the
information can be stored as long as needed. The design for the EDVAC called for a
five-foot-long column of mercury, which could store roughly a thousand bits of
information at a time. Delay lines were used in several early computers, including the
original UNIVAC.
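Conceptually, a delay line keeps data in constant motion: bits enter at one end, emerge at the other end a moment later, and are immediately written back in at the front. The following rough sketch of that recirculating behavior is a modern illustration only (written in Python; real delay lines were acoustic devices, not software, and the capacity shown is simply the rough figure quoted above):

from collections import deque

# Model the mercury column as a fixed-length pipeline of bits in transit.
line = deque([0] * 1000, maxlen=1000)   # roughly a thousand bits, as in the EDVAC design

def step(new_bit=None):
    # Advance one tick: read the bit leaving the far end, then feed either
    # fresh data or that same bit back into the near end of the column.
    out = line[-1]
    line.appendleft(out if new_bit is None else new_bit)
    return out

for bit in (1, 0, 1, 1):   # write four bits into the circulating line
    step(bit)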
Compared to storing data in vacuum tubes, delay line memory requires far fewer electronic components.
However, delay lines have their disadvantages, too, not the least of which is safety.
Mercury is toxic, which made delay lines risky to manufacture and service. Plus, the
mercury had to be kept heated at high temperatures for the delay line to function
correctly, meaning technicians had to handle them carefully to avoid injury. Delay lines
could also be unreliable—if the temperature was not exactly right, the delay lines would
malfunction.
Williams Tubes
As mentioned earlier, Williams tubes were used as the memory unit of the first stored-
program computer, the Manchester Baby. Stated another way, the Manchester Baby
was constructed as a means of testing the feasibility of Williams tubes. A Williams tube
is similar to a cathode-ray tube (CRT), like those used in old television sets. In a Williams
tube, a stream of electrons is repeatedly fired from one end of the tube to a
phosphorescent surface at the other end, creating a visible pattern. Besides the
Manchester Baby, Williams tubes were also used in some other early computers such
as the IBM 701 and 702.
A Williams tube from an IBM 701 at the Computer History Museum, in Mountain View, California.
By ArnoldReinhold - CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=18340174
Core Memory
The next improvement in memory technology was core memory. The term core, in
this context, refers to a small, donut-shaped piece of magnetic material. In a core
memory system, thousands of these cores are threaded upon a lattice of crisscrossing
metal wires. When an electric current is passed through both a horizontal and vertical
wire, the core at their intersection is magnetized. This magnetization (or lack thereof) is
read as a 0 or a 1 by the computer. Core memories have the interesting property that
they retain their contents even when the computer is powered off. Since they have no
moving parts, they are faster than drum memories.
More importantly, the value of each individual core can be read or written with equal
speed.
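The grid arrangement is what makes every core equally quick to reach: a core’s position alone determines which horizontal and which vertical wire must both carry current. The sketch below illustrates that coincident-current addressing idea; it is hypothetical (Python, with an invented 64-by-64 core plane) rather than the layout of any particular machine:

ROWS, COLS = 64, 64          # a hypothetical 64 x 64 core plane: 4,096 bits

def wires_for(address):
    # Map a bit address to the one horizontal (X) and one vertical (Y) wire
    # that must both be energized to select that core.
    row = address // COLS
    col = address % COLS
    return row, col

print(wires_for(0))       # (0, 0)
print(wires_for(4095))    # (63, 63)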
Microchips
Since the introduction of integrated circuits (i.e., microchips) in the late 1950s, it had
always been possible to use chips to store information. However, core memory was
about ten times cheaper to manufacture, so there was no economic incentive to move
away from it. That changed in the early 1970s when improvements in manufacturing
allowed chips to be built inexpensively in large quantities. Since the 1970s, microchips
have been used for both memory and processors.
2.7.3 Storage
Punched Cards
All of ENIAC’s inputs were either manually plugged in or fed to the machine via
punched cards. Calculations ran through the machine, and the results were immediately
punched onto new cards upon completion. These cards could then be sent to a tabulating
machine to print in human-readable form or used as intermediate storage and fed back
into the machine for the next calculation.
An IBM punched card from the mid-twentieth century.
By Pete Birkinshaw from Manchester, UK - CC BY 2.0,
https://commons.wikimedia.org/w/index.php?curid=49758093
Disk Storage
One fundamental limitation of tapes is that they are sequential. To read a specific
piece of data, the tape’s reels must be spun forward or backward to position the
appropriate section of tape beneath the read/write head. The advent of disk storage
changed this. Unlike tapes, disks offer random access to data. The read/write head can
be moved back and forth along the radius of the disk, making any sector of the disk
quickly accessible.
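The difference is still visible in everyday code: with a random-access device, software can jump directly to the bytes it wants instead of reading past everything stored before them. The sketch below is illustrative only (Python; the file name and byte offsets are invented for the example):

# Create a small sample file so the sketch is self-contained.
with open("records.dat", "wb") as f:
    f.write(bytes(6000))           # 6,000 bytes of zeroes

# Sequential, tape-style access: read past the first 5,000 bytes
# before reaching the 100 bytes we actually want.
with open("records.dat", "rb") as f:
    f.read(5000)
    wanted = f.read(100)

# Random, disk-style access: seek straight to byte offset 5,000.
with open("records.dat", "rb") as f:
    f.seek(5000)
    wanted = f.read(100)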
The floppy disk became an important part of the personal computer revolution of the 1980s
2.8 Software
Parallel to the innovations in computer hardware during the 1950s and 1960s, a no
less significant revolution was taking place in software engineering.
Machine Code
The original design for the EDVAC, written up by von Neumann in 1945, included a
more elegant way of programming the computer. Each possible instruction supported
by the machine would be assigned a unique numeric code. To create a new program, a
person would map out a list of instructions required by a given algorithm and look up
their individual codes. The codes would then be loaded (via paper tape or punched
cards) into the computer’s memory.
It didn’t take long for people to realize that this process could be automated. In
Germany, Konrad Zuse designed a device called the “plan preparation machine” into
which a programmer could enter instructions in mathematical notation.
The plan preparation machine would then translate the programmer’s instructions
into numeric codes and transfer them to tape, which could then be fed into the computer.
However, while using the Z4 computer at the Federal Technical Institute in Zurich, a
computer scientist named Heinz Rutishauser came to an important realization: a
general-purpose computer could be programmed to do this translation work itself. In the words of
Rutishauser, “Use the computer as its own Plan Preparation Machine.”3
Assembly
Preparing a computer program in this way would take two steps. First, the
programmer would write out the algorithm using simple mnemonic instructions called
assembly language. These instructions would then be fed into another program (which
came to be known as an assembler), which translated the mnemonics into numeric
codes. This finished program would then be run by the computer. Assemblers were soon
made available for most computer models.
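At its heart, an assembler performs a lookup-and-translate step. The toy sketch below captures the idea; the three mnemonics and their numeric codes are invented for illustration (Python) and do not correspond to any real machine:

# A made-up three-instruction machine: mnemonic -> numeric operation code.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3}

def assemble(source_lines):
    # Translate mnemonic instructions such as "ADD 14" into numeric code pairs.
    program = []
    for line in source_lines:
        mnemonic, operand = line.split()
        program.append((OPCODES[mnemonic], int(operand)))
    return program

print(assemble(["LOAD 12", "ADD 14", "STORE 20"]))
# [(1, 12), (2, 14), (3, 20)]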
FORTRAN
The first high-level programming language to receive widespread use was created
not at UNIVAC, but at IBM. In 1953, a researcher at IBM named John Backus, who at
the time was only twenty-nine years old, realized that as much as 75 percent
of the overall cost of running a computer was in paying the programmers to write, test,
and debug their programs. He persuaded his superiors at IBM to give him the time and
resources to lead a team to develop a new compiler, one that would generate machine
codes that were just as efficient as those written by humans.
Backus called his system FORTRAN, which was short for “Formula Translator.” In
addition to generating efficient code, Backus also wanted the language to be expressive,
so that a single FORTRAN statement would correspond to many machine instructions,
thus allowing the programmer to focus on the big picture rather than getting bogged
down in the details of the machine. FORTRAN employed a syntax similar to simple
algebra; for example:4
X = 2.0 * COS (A + B)
This would compute the sum of the values stored in variables A and B, calculate the
cosine of the result, multiply that by 2, and then store the final quantity into variable X.
This would require many individual binary instructions if coded by hand but could be
accomplished with one single FORTRAN command.
Backus expected his team to finish the FORTRAN compiler in six months, but in the
end, it took over three years. Part of the delay was due to Backus’ insistence that codes
generated by FORTRAN be just as efficient as those written by humans. The first
version of FORTRAN was released in April 1957 for the IBM 704 computer. True to its
promise, programs written in FORTRAN were, on average, 90 percent as fast as their
hand-coded counterparts—but they were much, much faster to write. A program that
would have required days or weeks to write using only machine code could be written
by a programmer using FORTRAN in hours or days. By the end of 1958, most IBM 704
installations were using FORTRAN in their projects.
COBOL
Although Grace Hopper was not part of COBOL’s design committee, the language was in
part inspired by the FLOW-MATIC compiler she had created at UNIVAC. (Although the
FACT and COMTRAN languages never gained any traction, some of their concepts
found their way into COBOL as well.) In any case, Hopper was definitely a strong
proponent of COBOL and encouraged its wide adoption by programmers.
Perhaps Hopper’s most visible influence on the design of COBOL was its preference for
English-like syntax over algebraic syntax. For example, to subtract the values of two
variables and store the result in a third variable in FORTRAN, one might do this: 5
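C = A - B
The equivalent operation in COBOL, by contrast, might be written in an English-like form such as:
SUBTRACT B FROM A GIVING C.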
This similarity to English was intended to make program code easier to read by non-
technical people, i.e., managers. This was both a blessing and a curse, as is shown in
the following anecdote.
Hopper once recounted how she developed a version of FLOW-MATIC in which she
replaced all the English terms, such as “Input,” “Write,” and so on, with their French
equivalents. When she showed this to a UNIVAC executive, she was summarily thrown
out of his office. Later on she realized that the very notion of a computer was threatening
to this executive; to have it “speaking” French—a language he did not speak—was too
much.6
Both FORTRAN and COBOL would dominate scientific and business programming
throughout the 1960s and 1970s, with over 90 percent of application programs being
written in these languages. Although they are still in use today, both FORTRAN and
COBOL are not as popular now as they once were. Most application software today is
written in languages such as C++, Java, and Python, all of which can trace their ancestry
to the C language developed in the early 1970s at Bell Labs. We’ll look at the C language
and its origins later in this resource guide.
LISP
We will close our discussion of programming languages with one more language that,
while never quite as popular as FORTRAN or COBOL, was nonetheless quite influential.
The LISP (short for List Processing) language was developed in 1958 at MIT, thus
making it slightly younger than FORTRAN and slightly older than COBOL. In contrast to
FORTRAN’s emphasis on efficiency and COBOL’s emphasis on readability, LISP
focuses on manipulating lists of symbols. LISP employs a distinctive “nested” syntax
(that (uses (lots)) (of parentheses!)), and it has since found a niche for programming
artificial intelligence application software.
2.9 IBM SYSTEM/360
In the 1950s, there was an implicit assumption among computer manufacturers that
“science” customers and “business” customers had fundamentally different needs.
Scientists, it was maintained, would do complex mathematics on relatively small data
sets.
Business people, on the other hand, would do simple mathematics on large data sets.
In practice, this was not always true, but this assumption shaped the early landscape of
computing. As we have just seen, the two most dominant programming languages of
the 1960s, FORTRAN and COBOL, were targeted for science and business respectively.
This distinction was evident in computer models as well: both IBM and UNIVAC
created entirely separate machines for science and business use. For example, the IBM
701 and 704 were intended for scientists, while the IBM 702 and 705 were intended for
business. By and large, however, they were nearly identical machines—so why the
distinction between science and business? The difference was floating-point arithmetic.
Floating-point arithmetic allows computers to represent both very large and very
small numbers with a high degree of precision, meaning the numbers may have several
digits to the right of the decimal point. IBM’s science-oriented 704 machine had
dedicated hardware that could perform floating-point arithmetic—making these types of
calculations very fast. In contrast, the business-oriented 705 lacked such special-
purpose hardware. Machines without floating-point hardware could still perform floating-
point arithmetic, but they did so more slowly. The assumption was that business users
would not need the added precision of floating-point hardware and did not want to pay
for it.
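The idea behind floating-point representation is to store a number as a small mantissa together with an exponent that scales it up or down. The sketch below shows the concept in base ten for readability (Python, illustrative only; real floating-point hardware works in base two and differs in its details):

import math

def to_scientific(x):
    # Split a value into mantissa and exponent, so that x = mantissa * 10 ** exponent.
    exponent = math.floor(math.log10(abs(x)))
    mantissa = x / 10 ** exponent
    return mantissa, exponent

print(to_scientific(299790000.0))   # roughly (2.9979, 8):  2.9979 x 10^8
print(to_scientific(0.000123))      # roughly (1.23, -4):   1.23 x 10^-4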
By 1964, however, the decreasing costs of hardware, coupled with the added
complexity of marketing different machines to different customers, led IBM to unify its
product line to a single computer platform intended for both scientific and business use:
the IBM System/360. The name “360” comes from the number of degrees in a circle,
implying that the System/360 spanned the “full circle” of customers’ needs.
It is impossible to overstate the importance of the System/360. To understand its
significance, consider the state of IBM’s product line in the 1950s and early 1960s. There
was the 650 as well as the 701, 702, 704, 705 family, as discussed earlier. In 1959, IBM
introduced the 7090 and 7094 computers, which used core memory and transistors and
were intended for scientific use. The 1401, introduced in 1960, also used core memory
and transistors, but was a smaller, less-expensive machine intended for business
applications.
For the most part, these different computer models were all incompatible with each
other, meaning a program written for one model of computer would not run on another
model. This meant that customers migrating from one IBM machine to another would
have to scrap their existing application software and rewrite it from scratch. This made
software increasingly expensive to maintain and decreased customers’ willingness to
adopt new machines.
IBM’s solution, then, was to create an entire “family” of computers, all announced at the same time.
There would be inexpensive machines for customers with smaller needs; midrange
machines with better performance; and fast, expensive machines for the most
demanding customers. IBM initially announced six models of System/360 in April 1964,
although additional models would come later. There was a speed difference of twenty-
five to one from the fastest 360 model to the slowest.
Most importantly, each of these machines would be machine-language compatible
with one another, meaning a program written for one machine would run, without
recompilation, on any other. In this way, a small business could start with a modest
inexpensive 360 installation and then upgrade to a larger, faster machine as the
business grew, without having to reinvest in new application software. Today, of course,
we take this compatibility for granted: when you purchase a new laptop or phone, you
assume that your old apps will still work, albeit perhaps faster. But in the 1960s, this
was a bold idea. An IBM spokesman perhaps said it best:
We are not at all humble in this announcement. This is the most important product
announcement that this corporation has ever made in its history. It’s not a computer in
any previous sense. It’s not a product, but a line of products . . . that spans in
performance from the very low part of the computer line to the very high.7
Fred Brooks, IBM manager, c. 1965. © International Business Machines Corporation (IBM)
Brooks’s Law
In 1975, Brooks wrote a famous book titled The Mythical Man-Month, a collection of
essays in which he explains lessons learned from working on the OS/360 project. To
this day, it is still one of the most influential books on software development ever written.
Near the start of the book, Brooks sets forth his memorable axiom, Brooks’s Law:
Adding manpower to a late software project makes it later.8
How is this so? After all, adding more workers to a construction project, or to
harvesting a farmer’s field, usually speeds things up. The reason is that programming
requires primarily mental, not physical, effort. For example, in order to divide the work
of writing a section of code from one programmer to two programmers, the first
programmer must first thoroughly explain the logic of the initial design to the second
programmer. The first programmer will also need to be available to answer questions
from the second programmer as the work continues. All this communication takes time
away from the actual creation of code. Multiply this phenomenon across an entire team
of dozens or hundreds of programmers, and the lines of communication become so
complex that it’s hard for anyone to get any “real” work done.
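To see why this overhead grows so quickly, note that a team of n programmers contains
n(n - 1)/2 possible pairs who may need to talk to one another. The short Python sketch
below, added here purely as an illustration, shows how fast that number climbs:

# Count the pairwise communication channels for teams of various sizes.
for n in (2, 10, 50, 200):
    channels = n * (n - 1) // 2
    print(n, "programmers ->", channels, "communication channels")
# Output: 2 -> 1, 10 -> 45, 50 -> 1225, 200 -> 19900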
The issues faced by the OS/360 team were one of the motivations behind an
industry-wide movement, beginning with the first NATO-sponsored software
engineering conference in 1968. Over the next couple of decades, a unified framework
of best practices for developing software was gradually created and continues to be
refined today. Such practices are now included in university-level computer science
curricula and are codified in many textbooks.9
The System/360 was so successful that it set the course for IBM’s fortunes well into
the 1990s. Even today, IBM continues to manufacture new computers that are binary-
compatible with the original 360 for niche markets. This runaway success, however, had
its drawbacks. IBM was so focused on maintaining and improving its System/360
platform that it was slow to embrace other innovations, such as the advent of the
personal computer in the 1970s. (IBM did produce a very successful personal computer
in 1981, but it was created quickly using off-the-shelf hardware and software, rather
than in-house. We’ll discuss the IBM PC in more detail later in this resource guide.)
2.10 The ENIAC Patent Case
The ENIAC I
Earlier in this resource guide, we mentioned that Eckert and Mauchly’s original patent
on the ENIAC was not finalized until 1964. By this time, of course, the ENIAC was long
obsolete and had been supplanted by newer machines. However, the current corporate
owner of the patent, Sperry Rand, claimed that the patent covered not just the ENIAC,
but all electronic computers in general. Patent in hand, Sperry Rand demanded royalty
payments from the other computer manufacturers. This, of course, did not sit well with
Sperry Rand’s competitors and led to an acrimonious legal battle, which finally ended
with a 1973 decision by Judge Earl Larson, striking down the ENIAC patent and placing
the invention of the computer firmly in the public domain. As a result, anyone was free
to create and sell computers without paying royalties to Sperry Rand. This opened the
field of computing to additional competition, allowing it to flourish throughout the 1970s
and 1980s and beyond.
What happened to the rest of the BUNCH? In 1986, Burroughs and Sperry merged
to form Unisys, a name reminiscent of the old UNIVAC product line. Unisys continues
to make computers, although today it is a much smaller company than it once was. NCR
is still around, and—true to its roots as a cash register company—it manufactures point-
of-sale devices for retailers. CDC has disintegrated, and its divisions are now owned by
a variety of companies. Honeywell is still in the electronics business, although it no
longer makes computers.
The computer landscape has changed a great deal from the 1950s and 1960s. As
we shall soon see, the single largest agent of that change came not from established
players like IBM and Sperry Rand, but rather from a newcomer called Digital Equipment
Corporation, and its most famous product was the minicomputer.
2.11 Progress in China
In 1946, the famous Chinese mathematician Hua Luogeng learned during his study
tour in the United States that the world's first general-purpose electronic computer
"ENIAC" was announced at the University of Pennsylvania. At that time, he had a dream
buried in his heart: China must not miss the great opportunity to pursue computer research
and realize the country's "computing freedom."
In 1950, Hua Luogeng returned to China, and the next year, he served as the director
of the former Institute of Mathematics of the Chinese Academy of Sciences. In 1952 he
identified three talented young scholars from Tsinghua University: Min Naida, Xia Peisu,
and Wang Chuanying. They formed a young "reclamation team" and began an arduous
journey of computer research. From designing basic circuits to writing planning reports,
every step was an exploration of the unknown.
The computer team discovered that developing a computer not only required
knowledge from multiple disciplines, such as mathematics, electronics, and physics, but
also required translating that theoretical knowledge into technology that
engineers could implement. In the winter of 1953, the Chinese Academy of Sciences
decided to mobilize the entire institute and temporarily reassign electronics personnel
from various units to its Institute of Physics. Many scholars believe that this work laid
a solid technical foundation for the development of the "103 machine." During the
following years, more talent was gathered from all over the country.
In September 1956, China sent a senior expert delegation to the Soviet Union for a
comprehensive inspection of computer research and development, manufacturing,
teaching, and application of related technologies. For over two months, the delegation
observed and studied computing technology in Moscow and Leningrad,
focusing on the M-20 computer. However, after returning to China, Chinese scientists
learned that the debugging of the M-20 computer was not going well. The Preparatory
Committee of the Institute of Computing believed that it was safer to imitate other
computers that were already mature. As a result, the M-3, a smaller machine, caught
their attention.
In order to carry out research and development work in a more orderly manner, the
engineering team was divided into a power supply group, arithmetic unit group, controller
group, magnetic drum group, and input and output group. The five groups not only
performed their own duties but were also closely linked.
The M-3 machine was a first-generation electronic digital computer. It used
approximately 800 electron tubes, 2,000 copper oxide diodes, and 10,000 resistor-
capacitor components, which were divided into 400 plug-in units and installed in three
cabinets. The main cabinet was an extra-wide type, and the entire machine contained
approximately 10,000 contact points and 50,000 solder joints.
The machine's magnetic-drum memory had a capacity of 1,024 words, a word length
of 32 bits, and an operating speed of 30 operations per second. Later, magnetic cores were
used as memory, and the speed increased to 2,500 operations per second. To imitate
such a complex electronic "brain," drawings and data alone were far from enough.
As Chinese scientists worked from the Soviet drawings, various technical
problems surfaced, and much of the development work still had to start from zero.
After all the hard work, at the end of July 1958, the adjustment of the "103 machine" was
completed, five months ahead of schedule. "China's computing technology is no longer
a blank subject," the People's Daily wrote in its report.
Section II Summary
Prior to World War II, all computers were electromechanical and could only perform certain
predefined types of calculations.
l The need for firing tables during World War II provided the impetus to develop the first fully
electronic general-purpose computer, the ENIAC.
l A description of the ENIAC’s intended successor, the EDVAC, was written by John von
Neumann and widely distributed.
l After the war, many computers were built based on the “stored program” design described
by von Neumann.
l The creators of the ENIAC started the first computer business: the Eckert-Mauchly
Computer Corporation.
l The first commercially produced computer was the BINAC, followed by the UNIVAC.
l In a famous publicity stunt, a UNIVAC computer predicted the outcome of the 1952 U.S.
presidential election on live television.
l After a late start, IBM soon passed UNIVAC in sales.
l IBM’s competitors were known as the “Seven Dwarfs” and later the “BUNCH.”
l Innovations in hardware helped make computers faster and cheaper:
n Processors: from vacuum tubes to transistors to integrated circuits
n Memory: from vacuum tubes to delay lines to drums to core to integrated circuits
n Storage: from tapes to disks
l Innovations in software helped make computers easier to program:
n Assemblers
n Compilers and high-level languages
u Examples: FORTRAN, COBOL, LISP
l IBM System/360
n A “family” of compatible computers at different price points
n Cemented IBM’s industry leadership
n OS/360 was the flagship operating system for the IBM System/360.
u Famously bug-ridden and behind schedule
u Brooks’s Law: Adding manpower to a late software project makes it later.
l The ENIAC patent was invalidated in 1973.
n The invention of the computer was placed in the public domain.
n Anyone was free to make and sell computers.
l Progress in China
n Hua Luogeng returned to China in 1950
n A young "reclamation team" was formed, and it began an arduous journey of computer
research
n The Chinese Academy of Sciences decided to mobilize the entire institute
n In September 1956, China sent a senior expert delegation to the Soviet Union for a
comprehensive inspection of computer research and development, manufacturing,
teaching, and application of related technologies.
n Debugging of the Soviet M-20 computer was not going well, so the team chose to imitate
the smaller M-3 instead.
n China’s first general-purpose digital electronic computer: the 103.
l Notable pioneers:
n John Mauchly and J. Presper Eckert: inventors of the ENIAC, the first general-
purpose electronic computer
n Herman Goldstine: Army officer who helped secure funding for the ENIAC
n John von Neumann: famous mathematician who joined the ENIAC team as a
consultant and wrote a widely distributed report on the “stored program” concept
n Frances Bilas Spence, Jean Bartik, Ruth Lichterman Teitelbaum, Kathleen
McNulty, Elizabeth Snyder Holberton, and Marlyn Wescoff Meltzer: the ENIAC’s
original six programmers, all women
n Grace Hopper: computer programmer who invented the first compiler and promoted
the use of high-level languages
n John Backus: designer of FORTRAN, the first successful high-level programming
language
n Fred Brooks: manager of the OS/360 project; coined Brooks’s Law
n Robert Noyce and Jack Kilby: inventors of the integrated circuit; Noyce went on to
become one of the founders of Intel Corporation.
Section II Glossary
03
Section III
Toward “Personal” Computing
3.1 Project Whirlwind
In 1944, near the end of World War II, the U.S. Navy contracted with the
Servomechanisms Laboratory at MIT to build a general-purpose flight simulator. Jay
Forrester was put in charge of the project. Within a year, he realized that his analog
design would not be fast enough for the immediate response time required of a flight
simulator. Around this time, knowledge of the ENIAC project—and its proposed
successor, the EDVAC—was slowly being disseminated outside the core project team.
Forrester realized that only digital circuits, like those pioneered by the ENIAC, would
provide the speed his system demanded. He asked for, and was granted, additional
funding to design a general-purpose digital computer as the basis for the flight simulator.
Project Whirlwind computer elements: core memory (left) and operator console.
Over the next few years, Forrester’s team focused on creating the fastest computer
possible. When the U.S. military learned that the Soviet Union—the United States’ rival
in the new “Cold War”—had developed nuclear weapons and aircraft capable of
transporting them over North America, funding for computing projects instantly
increased, and the military was on the lookout for a computer to use as part of a newly
proposed air defense system. Since Project Whirlwind was already in progress, it was
allowed to continue but with a new focus on air defense.
By 1951, Whirlwind was operational. Two years later, its Williams tube-based
memory system was replaced with core memory, “making Whirlwind by far the fastest
computer in the world and also the most reliable.”
3.2 SAGE
After developing Whirlwind, MIT transferred the technology to IBM, which marketed
it as the IBM AN/FSQ-7 computer. Dozens of AN/FSQ-7 computers were built, and
these formed the basis of the Air Force’s SAGE air defense system, which was spread
across twenty-three sites. The SAGE system was operational for about two decades
until it was finally decommissioned in the early 1980s.
3.3 SABRE
During the 1950s, IBM used the experience it gained with the SAGE system to
develop an interactive reservations system for American Airlines. Prior to this time, all
airline reservations were done manually, with hundreds of workers updating paper lists
of flights and passengers throughout the day. From 1957 to 1960, IBM and American
Airlines worked on developing an interactive reservations system patterned after SAGE.
The resulting system, named SABRE, was delivered to American Airlines in 1960 and
was fully operational by 1964.
3.4 Timesharing
Systems like SAGE and SABRE gave the world its first taste of interactive
computing, but only customers with big budgets could afford it. However, a few pioneers,
such as John McCarthy at MIT, believed that computers could be made available to a
wider audience through a process called timesharing. In 1959, McCarthy proposed that
multiple keyboard terminals could all be connected to a central computer, allowing
several users to access the computer at the same time.
McCarthy envisioned a day in which computing would be available on-demand as
a public utility service, much like water or electricity, in which users pay only for what
they use.
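To give a feel for how timesharing works, here is a highly simplified sketch, written in
Python for this resource guide rather than modeled on any real system of the era: the
computer cycles rapidly among users, giving each one a short slice of processor time
before moving on to the next.

from collections import deque

# Each entry is a user and the number of time slices their job still needs.
jobs = deque([("alice", 3), ("bob", 1), ("carol", 2)])

while jobs:
    user, remaining = jobs.popleft()
    print("running one time slice for", user)
    if remaining > 1:
        jobs.append((user, remaining - 1))   # unfinished jobs rejoin the queue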
CTSS
Fernando Corbató led a team at MIT that built a timesharing system called CTSS
(compatible time-sharing system) in 1963.
As a first step in this direction, in 1963, a team at MIT led by Fernando Corbató built
a timesharing system called CTSS (compatible time-sharing system). It consisted of an
IBM 7090 computer attached to a 28-megabyte disk drive. Each user was allocated part
of the disk, called a directory, where they could store their data.
Users of CTSS developed a variety of tools for the system, including text editors,
which allowed documents to be prepared electronically. By 1965, CTSS also supported
electronic mail, allowing users to send messages to other CTSS users. CTSS was finally
decommissioned in 1973.
3.5 DEC and the Rise of Minicomputers
Throughout the 1960s, the cost of computer hardware steadily declined. One of the
primary forces behind this reduction in computer costs was a company called
Digital Equipment Corporation (DEC).
The PDP-1
DEC’s first computer, the PDP-1, was released in 1960. The abbreviation PDP is
short for “Programmed Data Processor.” The PDP-1 used both transistors and core
memory. Although the PDP-1 was just as powerful as computers from other firms, it was
priced far lower.
The PDP-8
Unlike most other computers of that era, the PDP-8 was “only” the size of a
refrigerator. Whereas the large, room-filling machines sold by IBM and others were
typically known as mainframes, the PDP series and its eventual competitors became
known as minicomputers. The PDP-8 was a commercial success; DEC went on to sell
over 30,000 of them over the next ten years.
The PDP-8 represented an entirely new category of computers that became known as
minicomputers. By Morn - CC BY-SA 3.0,
https://commons.wikimedia.org/w/index.php?curid=36079838
The PDP-11
The PDP-11 was the quintessential minicomputer. DEC sold over 170,000 PDP-
11s throughout the 1970s, and production continued well into the 1990s. It is difficult to
overstate the impact the PDP-11 had on the trajectory of computing.
● The operating system that DEC created for the PDP-11 (called RSTS-11)
supported time-sharing. This allowed organizations to run their own time-sharing
operation in-house, rather than subscribe to a remote timesharing service.
● The RSTS-11 operating system also included a modified version of the BASIC
programming language. Programmers went on to create a large library of BASIC
software, including many simple computer games, that were widely shared.10 (Calling
them “video games” would be premature, as most users interacted with a PDP-11 using
a teletype printer rather than a display screen.)
● The C programming language, created in 1972 at AT&T Bell Labs, was originally
written on a PDP-11 computer. The C language influenced the design of many other
popular programming languages, such as Objective-C, C++, Java, and Python.
A PDP-11 on exhibit at the Vienna Technical Museum. By Stefan_Kögl - CC BY-SA 3.0,
https://commons.wikimedia.org/w/index.php?curid=466937
C Programming Language
Two Bell Labs employees, Ken Thompson and Dennis Ritchie, decided to create a
new operating system called Unix. The first version of Unix was written in assembly
language for a DEC PDP-7 minicomputer. It lacked many features, but it was stable
enough to be useful.
As Thompson and Ritchie rewrote and improved Unix for the PDP-11, they made
an important decision that would eventually prove key to the widespread adoption of
Unix. Rather than write the system in assembly language, Ritchie invented a new
programming language called C, which they used to implement the operating system.
C had a minimalist design compared to FORTRAN or COBOL, providing a few high-
level capabilities while still granting programmers a great deal of low-level access to the
computer’s memory.
Unix was fast and powerful; however, AT&T (the owner of Bell Labs) was hesitant
to commercialize it. At the time, AT&T was a government-regulated monopoly and was
not allowed to enter into any venture not directly related to its core business of telephony.
Thus, AT&T opted to sell copies of Unix (including its C source code) to universities at
very low cost. As a result, Unix installations proliferated at universities. Since AT&T’s
flexible license allowed users to modify the source code, many universities—in particular
the University of California at Berkeley—added their own improvements to Unix.
(Berkeley’s version of Unix came to be known as “BSD Unix,” short for Berkeley
Standard Distribution.) Much of the work of improving Unix was done by students, giving
them excellent training in computer science. Naturally, many students who had used
Unix wanted to continue to use it upon entering the workforce, and thus, Unix crossed
over from academia to the business world.
3.6 Networking
Unix’s slim design made it very suitable for the nascent field of computer networking.
The idea of networking computers together did not start with Unix, although Unix likely
accelerated it. The first timesharing systems of the early 1960s could be considered
forerunners of computer networks; however, these consisted primarily of teletype
terminals connected to a single powerful mainframe. The modern notion of networking,
in which entire computers—not merely terminals—are connected together, began with
J. C. R. Licklider in 1963.
At the time, Licklider was the director of the Information Processing Techniques
Office (IPTO) of the Advanced Research Projects Agency (ARPA), an organization
sponsored by the U.S. government.
Licklider hoped that by networking expensive computers together, researchers
would be able to utilize them more economically. The users of each computer would
also have access to every other computer on the network, reducing the need for
specialized facilities at each site. The difference in time zones across the United States
was another benefit: workers on the East Coast would be able to log on to West Coast
machines hours before the local staff began their workday. In the evenings, West Coast
workers could similarly have access to machines in the East after eastern workers had
gone home.
One key difference between the ARPANET and earlier timesharing services was
the ARPANET’s sheer heterogeneity. Timesharing systems like CTSS and Dartmouth
BASIC were designed around a single type of computer. In contrast, the various
computers (called “hosts”) on the ARPANET came from a variety of vendors and ran a
variety of operating systems. So how could communication flourish amid such a
potpourri of different hardware and software? The initial solution was to use an
inexpensive minicomputer, called an interface message processor (IMP), at each site.
The IMP (sometimes called a node) served as an intermediary between the host
computer and the rest of the network. As long as the IMPs all ran the same systems
software, the host itself could be any computer.
In the telegraph networks of an earlier era, messages were relayed from office to office:
telegrams were printed on paper at each office, so they could be stored until the operator
was ready to forward them on—hence the term “store and forward.”
The “block message” suggested by Paul Baran in 1964 was the first proposed data
packet. By Michel Bakni - CC BY-SA 4.0,
https://commons.wikimedia.org/w/index.php?curid=95110177
In order to prevent a single user (or a large transmission) from monopolizing the
network and delaying other messages from getting through, each message was broken
up into small chunks, called packets. Each packet would contain the address of its
destination computer. As each packet reached a node in the network, that node would
check the address and forward the packet on as needed: hence the term “packet switching.”
Once all the packets from a particular transmission reached their destination node,
that node would reassemble them into the original message. Since all the network
traffic was divided into small packets, no single message would monopolize the
network’s bandwidth. This also ensured that the expensive telecommunications lines
were used efficiently.
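The following toy sketch, written in Python for this resource guide (it is not the actual
ARPANET packet format), shows the essential idea: a message is split into numbered
packets that each carry a destination address, and the receiving node reassembles them
by sequence number.

def to_packets(message, dest, size=8):
    # Split the message into fixed-size chunks, each tagged with a destination
    # address and a sequence number.
    return [{"dest": dest, "seq": i, "data": message[i * size:(i + 1) * size]}
            for i in range((len(message) + size - 1) // size)]

def reassemble(packets):
    # Put the packets back into order and join their data.
    return "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))

packets = to_packets("LOGIN REQUEST FROM UCLA TO SRI", dest="SRI")
print(reassemble(packets))   # prints the original message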
ARPANET
With the basic technology in place, the leaders of ARPA began work on the new
network in earnest. They called it the ARPANET, and at first it consisted of only four
nodes at the following sites: the University of California at Los Angeles, the University
of California at Santa Barbara, Stanford Research Institute, and the University of Utah.
More nodes were added over the next few years. By the late 1970s, over one hundred
sites were connected to the ARPANET.
Vint Cerf and Bob E. Kahn were awarded the Presidential Medal of Freedom by President
George W. Bush in 2005. Cerf and Kahn designed the architecture and communication
protocol that gave rise to the modern Internet.
Over the next few years, the need for a separate IMP at each site was eliminated
with the introduction of TCP/IP (Transmission Control Protocol/Internet Protocol),
developed in 1974 by Vint Cerf and Bob Kahn. A protocol is an agreement on how data
ought to be formatted, encoded, and transmitted. Software implementing TCP/IP was
written for various computer models.
This allowed each host to connect to the ARPANET directly, rather than through an IMP.
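To make the notion of a protocol concrete, here is a toy example written in Python for
this resource guide; it is not TCP/IP itself. Both ends simply agree that every message
consists of a 4-byte length field followed by UTF-8 text:

import struct

def encode(text):
    # Agreed-upon format: a 4-byte big-endian length, then the UTF-8 payload.
    payload = text.encode("utf-8")
    return struct.pack(">I", len(payload)) + payload

def decode(data):
    (length,) = struct.unpack(">I", data[:4])
    return data[4:4 + length].decode("utf-8")

wire = encode("HELLO FROM HOST A")
print(decode(wire))   # the receiver recovers the message because both ends share the format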
Today’s global Internet still uses TCP/IP. Although being able to access computers
remotely was the initial motivation for creating the ARPANET, there was another
unexpected tool that really fueled its growth. That tool was electronic mail, later known
simply as email. Electronic mail was not necessarily a new idea—email was already part
of the CTSS system at MIT, and most other timesharing services as well.
However, its usefulness was limited because it could only be used to communicate
with colleagues within the same organization. The first electronic mail program on the
ARPANET was made available in 1971, and by 1975, it was the network’s single biggest
source of traffic.
Usenet
The desire to use email also drove the creation of other computer networks. In 1980,
a group of universities that were not part of the ARPANET created a network called
Usenet. The computers linked by Usenet were primarily minicomputers running the Unix
operating system. The same year, ARPA began recommending Unix as the operating
system of choice for computers in the ARPANET due in part to its portability across a
wide variety of computer models. As a result, Unix has been associated with computer
networking ever since. Usenet included an email service, of course, but also pioneered
another service called newsgroups that allowed users to post public messages to
various online forums categorized by topic. Newsgroups filled a role similar to today’s
social networking services by giving people with common interests a sense of
community.
This graph of packet traffic on the NSFNET Backbone from January 1988 to June 1994 shows the
tremendous growth in the use of the network.
Although many of the technologies behind modern networking had originated with
the ARPANET, by this time NSFNET was larger and better funded.
ARPANET was officially decommissioned in 1990, and its sites transitioned into
NSFNET. Over the next few years, the National Science Foundation, which originally
restricted access to NSFNET to government and university users only, gradually opened
it up to commercialization, which led to the phenomenal growth of the Internet in the
1990s.
The Minitel Network
In addition to the global Internet, some countries were also working on creating their
own national computer networks. The most successful was the Minitel network in France.
In the 1970s and 1980s, the French government made heavy investments in improving
the national telecommunications network. For example, in 1970, only 8 percent of
households in France had a telephone; by 1989, that number had climbed to 95 percent.
In part to reduce the costs associated with printing paper phonebooks, the French
government distributed millions of “Minitel” terminals. Users could look up phone
numbers on the Minitel, read news, check the weather, order train tickets, and access
over a thousand other online services. “The Minitel was in a large proportion of French
homes 10 years before most Americans had ever heard of the Internet.” 11
A Minitel 1 terminal built in 1982.
CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=74723
ALOHAnet
The first wireless computer network had its origins in a somewhat unlikely place:
Hawaii. In 1969, Norm Abramson, a professor at the University of Hawaii, realized that
Hawaii’s landscape, with its mountainous islands separated by miles of ocean, made it
difficult to build a traditional computer network. In order to exchange data with computers
on different islands, Abramson devised a packet-switching network called ALOHAnet,
in which packets were transmitted via radio signals rather than over telephone lines.
The ALOHAnet is significant for at least two reasons: first, it was the first packet-
based radio network; second, it helped inspire the invention of Ethernet, the technology
behind most local high-speed wired networks today.
Norm Abramson was an engineer and computer scientist who developed the ALOHAnet system in which
packets were transmitted via radio signals rather than over telephone lines
3.7 XEROX PARC
In the mid-twentieth century, Xerox was the market leader in the copy machine
industry. However, by the 1970s, Xerox’s patents had expired, and it was facing
competition from less-expensive rivals. Hoping to maintain its lead in office machinery
through computers, in 1970 Xerox founded a world-class research center to develop
computer technologies that Xerox could then spin off as products.
Robert Taylor, a former director at ARPA, was hired to manage the day-to-day
affairs of the lab, which was known as the Palo Alto Research Center, or PARC. Due to
his experience at ARPA, Taylor was acquainted with some of the best minds in computer
science, and he succeeded in hiring many of them to work with him. Although they
initially tried to develop a timesharing system, eventually, the PARC team settled on
what was, at the time, a revolutionary idea: rather than have multiple users share a
single large computer, each user would have their own dedicated machine—a personal
computer. In the early 1970s, personal computing was not economically feasible;
however, PARC researchers predicted that hardware prices would drop by the time their
prototype machine was complete, and then Xerox could mass-produce it.
Robert Taylor, a former director at ARPA, was hired to manage the Palo Alto Research Center, or PARC.
The Alto
The PARC team called their computer the Alto. The Alto was different from virtually
every other computer that preceded it. First, instead of printing output directly to a printer,
the Alto had a monitor with a rectangular display screen that was large enough to display
the contents of a standard sheet of typing paper. The screen was bitmapped, meaning
it could display both text and pictures simultaneously. In addition to a keyboard, the Alto
also had a three-button mouse. With a bitmapped display screen that could display
images and text, the next logical step was to create a printer that could print the contents
of the screen onto paper. Existing printers used either daisy wheel or dot matrix
technology. A daisy wheel printer operates like a typewriter: by striking diecast letters
against a ribbon of ink. It produces nice-looking documents but cannot print images. A
dot matrix printer works by striking many tiny pins against an ink ribbon. PARC
researchers designed the first laser printer, which uses laser beams, rather than metal
type or pins, to control the placement of ink on the page. Laser printers can print
documents containing both text and images with very high precision.
Xerox’s Alto computer was different from virtually every other computer that preceded it
When the first Alto was finished in 1973, it had all the recognizable peripherals of a
modern desktop computer: keyboard, mouse, monitor, and printer. Within a year, forty
Alto machines had been built and were being used internally by PARC researchers.
However, users of the Alto soon noticed a deficiency in the “one computer, one user”
model—it was difficult to share data with other users. In a timesharing system, sharing
files with other users was simple since all the data was stored on a central computer.
PARC researchers wanted to network the Alto machines together, but existing
networking techniques used in the ARPANET required long-distance telecommunication
lines with “store and forward” equipment at each site.
Bravo
The PARC team created the first graphical user interface (GUI)-based word
processor as well. Called Bravo, it took advantage of the bitmapped display screen to
let users see what the document would look like (including fonts, images, and layout)
while they edited it. The PARC team even invented a cute acronym for this concept:
WYSIWYG (pronounced “whizzy-wig”), which stands for what you see is what you get.
We take WYSIWYG for granted today, but at the time it was unheard of. In 1975, PARC
produced an updated version of Bravo, called Gypsy, that was even easier to use.
Xerox tested the program with employees of a textbook publishing company, who loved
it.
In addition to creating the GUI, the PARC team also designed new tools for the Alto
to make developing software easier. Alan Kay, a PARC researcher, invented a
programming language called Smalltalk.
Most programming languages of that era, such as FORTRAN, focused on writing
functions: blocks of instructions that accepted data as input, performed some operation,
and returned modified data as the result. Smalltalk turned that model around, embodying a
principle that we today call object-oriented programming.
In object-oriented programming, programmers first create the data structures,
called classes, and then design procedures, called methods, that operate on instances,
called objects, of those classes. Object-oriented programming helps programmers keep
their code well organized, thereby enabling the creation of larger, more robust programs.
Around the same time, Barbara Liskov at MIT was developing another early object-
oriented programming language called CLU. Although Smalltalk and CLU are little used
today, the principles of object-oriented programming they pioneered have made their
way into many of today’s most popular programming languages, such as Java, C++,
and Python.
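To illustrate this vocabulary, here is a minimal sketch written in Python rather than
Smalltalk or CLU; the class, its fields, and the sample values are invented purely for this
example:

class BankAccount:                        # a class: a data structure plus its behavior
    def __init__(self, owner, balance=0):
        self.owner = owner
        self.balance = balance

    def deposit(self, amount):            # a method: a procedure that operates on the object's data
        self.balance += amount
        return self.balance

account = BankAccount("Ada")              # an object: one instance of the class
account.deposit(100)
print(account.owner, account.balance)     # prints: Ada 100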
A portion of the brochure for the Xerox Star 8010/40 system. By Scanned (2007-12-12)
from the undated (c. 1985-87) 8010/40 brochure distributed by Rank Xerox to promote this
product. Fair use, https://en.wikipedia.org/w/index.php?curid=14674753
3.8 The Microprocessor
One innovation in particular that made small, inexpensive computers possible was
the microprocessor. In the previous section of this resource guide, we noted that the
invention of the integrated circuit, or microchip, enabled computers to become smaller
because many transistors could fit into a single chip. In the late 1960s, Robert Noyce,
one of the inventors of the integrated circuit, was working at Fairchild Semiconductor, a
company he had co-founded several years earlier. After becoming disillusioned with
Fairchild, in 1968 Noyce persuaded a few of his co-workers to join him in starting a new
company that would specialize in designing and manufacturing microchips. The
company, which Noyce co-founded with fellow engineer Gordon Moore, was named
Intel (a portmanteau of integrated electronics).
Robert Noyce (left) and Gordon Moore in front of the Intel SC1 building in Santa Clara in
1970. By Intel Free Press - CC BY-SA 2.0,
https://commons.wikimedia.org/w/index.php?curid=27929328
3.9 Personal Computers
The invention of the microprocessor opened the door for a wave of small, cheap
computers, which came to be known as personal computers. One highly influential
personal computer was the Altair 8800. The Altair 8800 did not come from an
established firm like IBM or DEC, but from an unknown startup, MITS, in Albuquerque,
New Mexico.
The Altair 8800 was announced in the January 1975 issue of Popular Electronics
magazine. It was advertised as a minicomputer that cost less than $400. The Altair's
low cost was in part due to its use of off-the-shelf parts. MITS chose the Intel 8080 chip.
Bill Gates (right), with Paul Allen, seated at Teletype Model 33 ASR terminals at the
Lakeside School in 1970.
In addition to boosting Intel's fortunes, the Altair also helped to kickstart one of the
most successful software companies of all time: Microsoft. Bill Gates was a student at
Harvard College when he saw the Popular Electronics article about the Altair
8800. Realizing that its users would need a programming language to develop software
for the new machine, Bill Gates and his friend Paul Allen quickly developed a version of
BASIC that was compatible with the Altair's Intel 8080 processor. Microsoft's BASIC was
both fast and powerful, and it grew in popularity. Soon Microsoft licensed its BASIC to
other computer manufacturers as well.
The Commodore PET 2001-8 (left) alongside its rivals, the Apple II and the TRS-80 Model
I. By Tim Colegrove - CC BY-SA 4.0,
https://commons.wikimedia.org/w/index.php?curid=79216985
Finally, each of these computers came with BASIC pre-installed, meaning users could
easily start writing their own programs for them.
The TRS-80
Radio Shack, the company that made the TRS-80, was already an established
electronics retailer in 1977 and was able to use its vast network of stores to sell and
service the TRS-80, giving the computer a wide audience.
The Apple II
The Apple II was designed by computer pioneer Steve Wozniak, with input from
Steve Jobs. In spite of the Roman numerals in its name, the Apple II was the company’s
first mass-produced computer. Unlike the TRS-80 and the Commodore PET, the Apple
II could display color graphics, which made it ideal for developing and playing video
games. (Radio Shack’s and Commodore’s later computers also supported color
displays.)
(The company's original English name, Legend, was already in use by other firms
outside China. The firm prepared for international expansion with the
announcement of the new Lenovo name and logo.)
3.10 Video Games
Although not a “serious” application, computer games were one of the motivating
reasons for people to purchase home computers in the late 1970s and early 1980s.
Programmers had created a variety of games for minicomputers like the PDP-11, but
the high cost of these machines limited their audience. In contrast, personal computers,
with their small size, low cost, and interactive video screens, provided an ideal platform
for video games to flourish. Some of the first games to appear for personal computers,
such as Adventure and Zork, were straightforward ports of older text-based games that
had been written for minicomputers; however, soon new games appeared that took
advantage of personal computers’ graphics capabilities. Some of the most popular
games, such as Pac-Man and Centipede, began as dedicated arcade consoles before
being ported to personal computers.
Some of today’s most respected game publishers, such as Activision and Electronic
Arts, got their start in the early home computer era.
Interestingly, some of the earliest video games were not written for existing
computers at all, but were custom computers in and of themselves. For example, the
logic for the first commercially successful coin-operated video game console, Pong,
created by Atari in 1972, was not written in software, but was instead created by wiring
several microchips together. In this way, designing Pong was not unlike programming
the ENIAC. Another Atari game, Breakout, was also created using custom hardware.
However, as general-purpose microprocessors became more common in the 1970s,
game designers realized that they could write more and better games in less time using
software running on commodity hardware, rather than creating a customized set of chips
for each game. While it is perhaps best remembered as a gaming company, Atari
also branched out into general-purpose computing in 1979 by creating its own line of
personal computers.
3.11 VISICALC
The Radio Shack, Atari, Commodore, and Apple computers were designed
primarily for home and educational use and did not appreciably impact the fortunes of
established computer companies like IBM or DEC. However, one software application
in particular propelled personal computers into the business world: VisiCalc, the first
computerized spreadsheet.
At the time, the term “spreadsheet” was already in common use as a paper-based
tool for keeping track of calculations. VisiCalc’s co-creator, Daniel Bricklin, got the idea
for creating an electronic spreadsheet after watching one of his professors at Harvard
laboriously updating a grid of numbers on a chalkboard. Bricklin initially wanted to create
his spreadsheet for DEC minicomputers, but on a whim chose the Apple II instead. The
first version of VisiCalc was released in 1979, and it was instantly popular. It was the
first killer app for personal computers; according to some accounts, people bought an
Apple II just so they could use VisiCalc. Although VisiCalc was surpassed by competing
spreadsheet software just a few years later, it created a new category of business
software that had not existed previously.
A screenshot of VisiCalc running on an Apple II computer.
Today, of course, word processors and spreadsheets are both “killer apps” for
personal computers. However, back in the 1970s, word processing was slow to catch
on for personal computers. The most likely reason is that the most popular personal
computers of the era could only display about twenty-five lines of text, at forty characters
per line. That is far less than the amount of text that will fit on a printed page. Thus, one
of the first commercially successful word processors, the Wang OIS (Office Information
System, introduced in 1977), was not simply a program that ran on a personal computer,
but a specialized computer in and of itself.
The Wang OIS had a keyboard, a high-resolution monitor, and a microprocessor
that allowed it to process the user’s commands locally. However, it stored documents
on a centralized server rather than on a local disk. Within a few years, however, the
arrival of more powerful personal computers, coupled with inexpensive word processing
software, like WordStar and WordPerfect, would render custom word-processing
machines obsolete.
3.12 The IBM PC
Gates contacted a programmer named Tim Paterson, who worked at a nearby
company, Seattle Computer Products (SCP). Paterson had written a simple operating
system that was compatible with the Intel 8088, nicknamed QDOS (quick and dirty
operating system). Microsoft first licensed QDOS from SCP and later purchased it
outright, renaming it MS-DOS (“Microsoft Disk Operating System”). MS-DOS formed
the basis of virtually all of Microsoft’s operating systems for the next twenty years.
Microsoft staff in Albuquerque, December 7, 1978. Bill Gates is in the front row to the far left.
From Microsoft’s official site: http://www.microsoft.com/en-us/news/features/2008/jun08/06-25iconic.aspx
3.13 The Apple Macintosh
Radio Shack, Atari, Commodore, and Apple took notice of the IBM PC’s popularity
and responded by creating more powerful machines. The most prominent of these
competitors was the Macintosh by Apple.
Back in 1979, Steve Jobs visited Xerox PARC and saw a demonstration of the Alto.
Jobs instantly knew that Alto’s mouse-driven point-and-click GUI was the way of the
future, and he formed a team to design a new Alto-inspired computer at Apple. The
resulting machine, the Apple Lisa, released in 1983, was technically brilliant, but—like
Xerox’s own Alto-based Star computer two years earlier—it was far too expensive for
most home users. What Apple needed, Jobs decided, was to build an affordable
computer based on the GUI design: the Macintosh.
Steve Wozniak (left) and Steve Jobs with an Apple I circuit board, c. 1976.
To keep costs low, the Macintosh team gave it a far smaller screen than either the
Lisa or the Star had. It had no internal hard disk, just one 3 ½-inch floppy disk drive, and
no expansion slots. As an additional cost-cutting measure, Jobs insisted that the
Macintosh have only 128 kilobytes of memory—a ridiculously small amount for a GUI.
The entire machine, except the external keyboard and mouse, was housed in a plastic
shell, roughly ten inches wide and one foot tall and deep. Its small size made it portable
and, arguably, non-threatening to novice computer users.
Although the first Macintosh had a fast processor, its small 128K memory limited
the complexity of the programs it could run. Later models included more memory, which
greatly improved its usefulness. The Macintosh was not the first computer with a mouse
and GUI, but it was the first to bring this new mode of computing to a wide audience. By
the end of the 1980s, most computers included a mouse.
3.14 PC CLONES
Anyone willing to build their own computer could acquire an Intel processor, some
memory cards, and a copy of MS-DOS and theoretically create their own IBM PC. The
only thing standing in the way was the BIOS (Basic Input/Output System). The BIOS
was the software that allowed the operating system to communicate with the hardware.
It was one part of the IBM PC architecture that was uniquely IBM’s, and it was not for
sale. Anyone who copied the code for IBM’s BIOS would be subject to a lawsuit.
However, in 1982, a company called Compaq assembled a team with the task of
creating a program that behaved exactly like IBM’s BIOS. When it was completed, Compaq
began selling its own “IBM PC-compatible” computer in 1983 at a lower price than a
genuine IBM PC. Another company, Phoenix Technologies, did the same thing. Within
five years of the IBM PC’s introduction, over half of the PCs sold were clones, not actual
IBM machines.
Although the PC-compatible computers sold better than the Macintosh, Apple had
something that Microsoft did not: the GUI. MS-DOS was patterned after CP/M, which in
turn was patterned after DEC’s teletype interfaces for timeshared minicomputers. The
notion of “point and click” was simply not a part of MS-DOS’s ancestry. There were two
possible solutions to this. One was to create an entirely new operating system that
incorporated a GUI. Another option was to create a GUI program that ran atop MS-DOS.
The first option would arguably result in better performance since the new operating
system would support the GUI natively. However, software vendors would have to
rewrite their existing software applications for the new system. The second option would
make the computer run slower, since it essentially amounted to running a new operating
system on top of an old one, but since it was still MS-DOS at the core, existing
applications would be fully compatible with it.
3.16 Mainstream
Section III Summary
Most computing prior to the 1970s was batch-oriented, but there were a few exceptions.
l Interactive computing
n Project Whirlwind
u Used display screens rather than paper output
u Capable of displaying graphical shapes, not just text
n SAGE
u Based on Whirlwind technology
u Distributed air defense system for the U.S. Air Force
n SABRE
u American Airlines reservation system
u First non-military application of interactive computing
l Timesharing
n Assumption: “computers are fast but people are slow”
n Multiple users on teletypes share a single computer, which switches rapidly between
users’ requests
n Notable examples:
u CTSS at MIT
u BASIC at Dartmouth
l Highly influential programming environment
n Multics
u Over-engineered and over-ambitious
u Delivered incomplete and behind schedule
n Idea of a universal “computer utility” never caught on.
n Timesharing continued, but on a smaller scale.
l Digital Equipment Corporation (DEC), creator of the minicomputer
n PDP-8
u First successful minicomputer
u Small enough to fit on a desk, rather than filling a whole room
u Led to the creation of the OEM industry
n PDP-11
u Most popular minicomputer of all time
u Influences:
l Led to cost-effective timesharing
l Led to the popularization of BASIC language
l Led to the development of Unix
l Unix
n Created at AT&T by former Multics programmers
n Built on PDP-7, then on PDP-11
n Written in C programming language
l ARPANET
n First long-distance computer network
n Enabled by store and forward packet switching
n Popular applications:
u Email
u Remote login
u File transfer
n Protocols like TCP/IP allowed different networks to communicate
l Other networks:
n CSNET, BITNET, NSFNET
n Minitel
n ALOHAnet
l Xerox PARC
n Pioneer of many concepts of modern computing
u Graphical user interfaces
u Laser printers
u Ethernet
u Object-oriented programming
n Xerox’s attempts to market its technology largely failed.
l Invention of the microprocessor
n A “computer on a chip”
n Enabled the creation of small, inexpensive computers
l MITS Altair 8800: an inexpensive “personal computer” in 1975
n Led to the ubiquity of Intel hardware and Microsoft software
l 1977: Commodore, Radio Shack, and Apple all release personal computers.
n Eventually, Commodore and Radio Shack would exit the computer industry; Apple
remains.
l Lenovo from China
n Founded in Beijing on 1 November 1984
n Founders: Liu Chuanzhi and Danny Lui
n The company later changed its English name from Legend to Lenovo
l Video games
n First created for minicomputers
n Then as custom machines in arcades
n Then as software for personal computers
l VisiCalc
n The first spreadsheet
n Led to the acceptance of personal computers by businesses
l IBM PC
n Designed quickly from off-the-shelf parts
n Soon supplanted by inexpensive clones
n The use of MS-DOS led to Microsoft, not IBM, dominating the personal computer
industry.
l Macintosh
n Inspired by GUI research at Xerox PARC
n First affordable GUI/mouse computer
l PC and compatibles dominate the personal computer industry.
l Windows adds a Mac-like GUI to the PC.
l Notable pioneers:
n Jay Forrester: leader of Project Whirlwind, the first interactive computer
n John McCarthy: computer scientist who promoted the idea of timesharing; also created
the Lisp programming language
n Fernando Corbató: developer of CTSS, an early timesharing system at MIT
n John Kemeny and Thomas Kurtz: co-developers of the BASIC programming language
n Kenneth Olsen: founder of Digital Equipment Corporation (DEC), an influential
developer of minicomputers
n Ken Thompson and Dennis Ritchie: co-developers of the Unix operating system
n J. C. R. Licklider: helped to create the ARPANET, the forerunner of today’s Internet
n Leonard Kleinrock, Paul Baran, and Donald Davies: independently developed the
concept of store and forward packet switching for computer networks
n Norm Abramson: creator of ALOHAnet, the first wireless computer network
n Robert Taylor: director of the Xerox PARC research team, which invented many
technologies used in personal computers
n Douglas Engelbart: inventor of the computer mouse
n Vint Cerf and Bob Kahn: creators of TCP/IP, the protocol on which the Internet runs
n Robert Metcalfe: inventor of Ethernet, a design for high-speed local area networks
n Alan Kay: creator of the Smalltalk programming language and a pioneer of object-oriented
programming
n Barbara Liskov: creator of the CLU programming language, which along with Smalltalk
helped popularize object-oriented programming
n Robert Noyce: co-inventor of the integrated circuit and co-founder of Intel
n Gordon Moore: co-founder of Intel, with Robert Noyce
n Ted Hoff: inventor of the Intel 4004 microprocessor
n Bill Gates: co-founder of Microsoft
n Paul Allen: co-founder of Microsoft who co-developed a version of BASIC for
personal computers
n Daniel Bricklin: co-creator of VisiCalc, the first computerized spreadsheet program
n William C. Lowe: architect of the IBM PC
n Gary Kildall: founder of Digital Research and creator of the CP/M operating system
n Tim Paterson: creator of MS-DOS
n Steve Wozniak: architect of the Apple II computer
n Steve Jobs: co-founder of Apple, founder of NeXT, and driving force behind the Macintosh
Section III Glossary
04
Section IV
The Internet, Social Media, and
Mobile Computing
The 1970s and 1980s laid the foundation for personal computing. Hardware had
become fast and cheap, and software—thanks to the GUI—had become easier to use.
The ARPANET, a vast network of university and government computers, had given way
to the Internet. The commercialization of the Internet would soon lead to an explosion
of popular interest in computers. At the same time, continued miniaturization of
hardware would lead to computers that could fit in one’s hand.
4.1 The GNU Project and the Open-Source Movement
In the early 1980s, Richard Stallman was a programmer at the Artificial Intelligence
(AI) Lab at MIT. The AI Lab had a project called the Lisp Machine, an operating system
designed to facilitate writing and running programs in the LISP programming language.
After the Vietnam War, military funding for computer research was reduced, and the AI
Lab had to seek funding through commercial sponsorship. A company called Symbolics
hired members of the AI Lab to develop improvements to the Lisp Machine software—
but these enhancements would be proprietary and could not be shared with others.
Not being able to share program code, however, went against Stallman’s code of
ethics. For years, he had enjoyed sharing data and programs with his colleagues.
Openness was part of the lab culture: most of his colleagues did not even use
passwords on their shared PDP-10 computer, preferring to operate in an atmosphere of
mutual trust. Concerned over these changes in his lab, Stallman resolved to shun the
use of proprietary software from that point on.
Programmer and open source software advocate Richard Stallman.
In order to ensure that GNU would remain “free” forever, Stallman enlisted the help
of a lawyer and drafted a new software license, called the GNU General Public License,
or GPL. Other programmers began adopting the GPL for their own projects. Today, the
GPL is used not just by the GNU operating system but by many programs, including
media players, compilers, and video games.
By the late 1980s, the GNU operating system was mostly complete. However, it
was missing perhaps the most important part of any operating system, the kernel. The
kernel is the central part of an operating system, the part that all the other programs
connect to. Stallman and his associates had been working on a kernel for GNU, called
HURD, for years, but it was not yet stable enough for production use.
Linux
Meanwhile, in Finland, a college student named Linus Torvalds was also working
on a Unix-like operating system. Like Stallman, Torvalds did not start writing code from
scratch, but used an existing operating system as a starting point. In this case, Torvalds
used Minix, a Unix-like operating system developed by Andrew Tanenbaum.
Tanenbaum was a computer science professor at the Vrije Universiteit in Amsterdam,
and he had published the complete source code of the Minix kernel in a textbook he had
written for one of his classes. As a personal project, Torvalds began rewriting the Minix
kernel to improve its compatibility with the Intel 80386 processor and make other
improvements as he saw fit. Over time, all the original Minix code was entirely replaced
with Torvalds’ code.
Torvalds released his kernel, which he called Linux, under the GPL and shared it
publicly.
It seemed like a match made in heaven: the GNU project contained all the tools of
a free Unix-like operating system, minus a kernel. Then along comes Linux, a GPL-
compatible kernel. Put them together, and—voilà—you have a complete, free operating
system. Although today the full operating system is commonly known simply as Linux,
Stallman is (understandably) quick to point out that GNU/Linux is a more accurate name,
as “Linux” technically refers only to the kernel.
Linus Torvalds, the software engineer who created the Linux kernel.
4.2 HYPERTEXT
The World Wide Web resulted from the combination of two technologies that both
predate it: hypertext and the Internet. We have already discussed the creation of the
Internet in a previous section of this resource guide; let us now examine the origins of
hypertext.
Ted Nelson, who in the 1960s coined the term “hypertext” and in the 1970s worked on
creating a hypertext system called Xanadu.
The term hypertext was coined by Ted Nelson in the mid-1960s. Nelson defined
hypertext as “forms of writing which branch or perform on request; they are best
presented on computer display screens.”13
Essentially, hypertext allows a user to access one document from within another document by following hyperlinks embedded in the document. Nelson himself was inspired by an essay titled “As We May Think,” written in 1945 by Vannevar Bush.
As a proposal for managing the ever-increasing amount of human knowledge, Bush
described a hypothetical machine called the memex that would allow people to quickly
retrieve any one of thousands of documents stored on microfilm. Of course, Bush’s
proposed memex was mechanical rather than electronic, yet it influenced Nelson and
other thinkers who eventually implemented hypertext systems. From the 1970s onward, Nelson worked on creating a hypertext system called Xanadu. Xanadu was a
grandiose vision, a system for organizing, linking, and authoring information. However,
in spite of decades of on-again, off-again work by Nelson and his disciples, Xanadu
never got off the ground.
Xanadu, however, was not the only hypertext project being developed at that time.
Independently of Nelson, programmers at Apple were working on a hypertext system of
their own, called HyperCard. HyperCard was a simple programming environment that
allowed programmers to create virtual “cards”—basically rectangular spaces within a
computer window that could display text, images, or video. When the user clicked within
the rectangle, HyperCard would display a new card with new content. By creating
several cards and linking them together, programmers could create interactive stories,
tutorials, and databases. (A best-selling video game from the 1990s, Myst, was based
on HyperCard.)
HyperCard was extremely easy to use, and it developed an enthusiastic following. Upon HyperCard’s introduction in 1987, Apple bundled it with every new Macintosh sold.
Although HyperCard did not work with the Internet—all the cards in a HyperCard
program resided on the user’s local machine—it nonetheless gave many programmers
their first taste of a hypertext system and can rightly be credited with bringing the
concepts of hypertext out of the laboratory and into the hands of everyday users.
In 1989, Tim Berners-Lee, a researcher at CERN, the European physics laboratory near Geneva, proposed combining hypertext with the Internet; the result was the World Wide Web. To create the World Wide Web, Berners-Lee had to create several different parts that worked together. First, he needed a way to embed hyperlinks into documents, called webpages. For this, he created HTML (Hypertext Markup Language). In addition to hyperlinks, HTML also included special syntax, called tags, for formatting a webpage. For example, in the following HTML code, <h1> and </h1> mark the start and end of a section heading, and <p> and </p> mark the beginning and end of a paragraph.
<h1>Chapter 1: The Period</h1>
<p> It was the best of times; it was the worst of times. It was the age of wisdom, it
was the age of foolishness. It was the epoch of belief; it was the epoch of incredulity. It
was the season of Light; it was the season of Darkness. It was the spring of hope, it was
the winter of despair. </p>
Tim Berners-Lee, the computer scientist who created the World Wide Web.
Photo courtesy of Sotheby’s
Berners-Lee also created the software for the first web server, a program that stores webpages and transmits them to other computers upon request. For hyperlinks to go from one computer to another, Berners-Lee needed a way to uniquely identify the exact location of a document. He did this by inventing the URL (Uniform Resource Locator), which is essentially an Internet domain name coupled with a Unix-style file path:
https://www.cmu.edu/news/stories/archives/2022/august/summer-science-research.html
In order to view web pages, Berners-Lee also wrote the first web browser: a
program that connects to a web server, downloads a requested web page, and displays
it on the screen.
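The relationship among the URL, the web server, and the browser can be illustrated with a brief sketch in Python (a hypothetical illustration using Python's standard urllib module, not part of Berners-Lee's original software). It splits the example URL above into its parts and then downloads the page, which is essentially what every browser does behind the scenes:

# A minimal sketch of what a web browser does: split a URL into its parts,
# contact the web server named in the URL, and download the requested page.
from urllib.parse import urlparse
from urllib.request import urlopen

url = "https://www.cmu.edu/news/stories/archives/2022/august/summer-science-research.html"

parts = urlparse(url)
print(parts.scheme)  # "https" -- the protocol to use
print(parts.netloc)  # "www.cmu.edu" -- the Internet domain name of the server
print(parts.path)    # "/news/stories/..." -- the Unix-style file path on that server

# Ask the web server for the page and read the HTML it sends back.
with urlopen(url) as response:
    html = response.read().decode("utf-8", errors="replace")
print(html[:200])    # show the first 200 characters of the downloaded webpage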
4.3 Browser Wars
Eric Bina (left) and Marc Andreessen, creators of the web browser Mosaic.
In 1992 and 1993, Marc Andreessen and Eric Bina, while working for the NCSA
(National Center for Supercomputing Applications) at the University of Illinois, created a
new web browser they called Mosaic. Mosaic was far superior to previous web browsers.
Most importantly, it could display both text and images simultaneously, allowing for
much more visually interesting web pages. Mosaic also allowed users to access
hyperlinks with a mouse-click, rather than keyboard shortcuts, making navigating the
Web far simpler for casual users. Finally, Andreessen and Bina wrote versions of Mosaic
that worked on Windows, Macintosh, and Unix. (For the most part, previous browsers
supported Unix only.) Andreessen and Bina made Mosaic available as a free download
on the Internet, and soon, tens of thousands of copies were installed worldwide. Just as
VisiCalc was the “killer app” for personal computers in the late 1970s, Mosaic was the
“killer app” for the Web in the early 1990s.
The World Wide Web was starting to go mainstream. Recognizing the possible
commercial value of the web browser, legendary entrepreneur Jim Clark approached
Andreessen in 1994 and proposed that they form a business to market the Mosaic
browser.
Clark was best known for founding Silicon Graphics in 1982, a company that produced high-end Unix workstations. The administration of the University of Illinois, however, objected, saying that the Mosaic browser belonged to the university. Here we see an
example of history repeating itself. Back in the 1940s, when the creators of the ENIAC
tried to commercialize their invention, they likewise found themselves at odds with their
sponsoring institution.
“Like the University of Pennsylvania a half-century before it, Illinois saw the value
of the work done on its campus, but it failed to see the much greater value of the people
who did that work.”14
Netscape
Jim Clark (left) and Marc Andreessen, founders of Netscape Communications Corporation,
whose web browser, Netscape Navigator, quickly became the most popular browser.
Andreessen left the University of Illinois and, with Jim Clark, founded Netscape Communications Corporation. Since the University of Illinois owned the code to Mosaic, Andreessen wrote a new browser from scratch, incorporating many improvements along
the way. The new browser, christened Netscape Navigator, was released in September
1994. It quickly displaced Mosaic as the most popular web browser.
The presence of a stable, easy-to-use web browser helped spur the growth of the
World Wide Web. By the end of 1994, there were 10,000 web servers online. Six months
later, there were more than 20,000. At the end of 1995, Netscape Navigator had been
downloaded 15 million times. Netscape’s business model was to give away the browser
for free to individual users but to charge corporations a license fee for the browser and for its server software.
Where was Microsoft throughout all this? They were busy at work preparing the
next release of Windows, dubbed Windows 95. It was the first version of Windows that
did not require a copy of MS-DOS to be pre-installed—Windows 95 was an operating
system in its own right. In reality, Windows 95 was still based on MS-DOS, but this fact
was cleverly shielded from users. To Microsoft’s credit, they still managed to make
Windows 95 backward compatible with almost all the software written for MS-DOS since
1981—an impressive feat.
Internet Explorer
In any case, the rapid development of the Web apparently caught Microsoft off
guard—Microsoft had been planning on releasing its own (non-Web-based) online
service, MSN, alongside Windows 95. Realizing a bit too late the commercial importance
of having its own web browser, Microsoft licensed the source code of the original
Mosaic browser from a company called Spyglass, which the University of Illinois had
hired to manage the Mosaic intellectual property. Microsoft called its browser Internet
Explorer. Since it was based on old Mosaic code, Internet Explorer was not as polished
as Netscape.
Thus began the so-called “browser wars.” For the next few years, both Netscape
and Microsoft would frequently release new versions of their web browsers with new
features. In an attempt to out-do each other, each browser would implement its own
nonstandard “extensions” to HTML. Website creators would have to decide which
browser’s extensions to support, and thus their websites would look different when
viewed in Netscape Navigator or Internet Explorer. Some web pages would even include
little banners stating, “Best viewed in Netscape Navigator” or “Best viewed in Internet
Explorer.” Such incompatibilities were at odds with the sense of cross-platform
openness that the designers of the Internet had espoused.
Although Internet Explorer initially lagged behind Netscape Navigator
technologically, by 1998, Internet Explorer was quite a capable browser. With the
release of its next operating system, Windows 98, Microsoft controversially went beyond
simply bundling Internet Explorer with Windows and attempted to embed the browser
into the core of the operating system—using Internet Explorer for such common tasks
as displaying folders and the desktop background. This was a shrewd business tactic
for Microsoft: first, it allowed Microsoft to avoid paying royalties to Spyglass, whose
license agreement required a payment for every copy of Internet Explorer sold. It also
undermined Netscape’s business model, which depended on sales of its browser.
Why would users pay for Netscape Navigator when they could get Internet Explorer
essentially free? Not surprisingly, Netscape’s share of the browser market plummeted.
Taking inspiration from the success of Linux and other open-source projects, in
1998 Netscape Communications decided to release the source code of Netscape
Navigator and its related tools, in the hopes that volunteer programmers would take up
the baton and lead the beleaguered browser back to glory. This bold move was both a
success and a failure. The resulting product, called Mozilla, was an all-in-one suite of
networking software: a web browser, an email client, a web page authoring tool, an address book, and a calendar. Mozilla was bloated and slow and did not appreciably gain any
market share back from Internet Explorer. However, in 2004 Mozilla released Firefox, a
fast, minimalist web browser without all the extra features that had bogged Mozilla down.
With Firefox, Mozilla regained some, but not all, of the market share once enjoyed by its
ancestor Netscape.
4.4 Search Engines
Even with a good web browser, finding information on the World Wide Web could sometimes be difficult.
Unlike a library, the Web has no centralized “card catalog.” Anyone with access to
a web server is free to create their own webpages on any topic, without restrictions on
content or quality. In order to help users find relevant webpages, a number of search
engines were developed. Some of these, like Infoseek and AltaVista, disappeared after
a few years or were absorbed by other companies.
Perhaps the two most successful search engines were Yahoo and Google. Both
were started by graduate students at Stanford University as side projects, although their underlying strategies were, at least initially, quite different.
Yahoo
In 1994, Jerry Yang and David Filo founded Yahoo. They believed that the best
way to point users toward useful content was to create a manually curated list of human-
verified websites. At first, their catalog only had about two hundred sites, but as the Web
grew, so did their indexing efforts. Soon, Yang and Filo had a team of workers surfing
the web to generate lists of sites related to a wide range of topics. The exponential
growth of the Web, however, eventually made the strategy of a human-curated search
engine intractable.
Google
In 1997, Stanford graduate students Larry Page and Sergey Brin took a different
route. Rather than manually check which sites were the most relevant, they wrote a
computer program to scour the web and collect lists of websites for them. Page and Brin
called their search engine Google, a misspelling of the very large number googol. The
combination of highly relevant search results and targeted advertising propelled Google
to be the top search engine, a position it has held for years.
4.5 The Dot-Com Bubble
The mid-to-late 1990s were a time of intense excitement and economic speculation.
Although in reality few web businesses were profitable at first, the hope for quick
fortunes motivated a spurt of entrepreneurial activity in the United States and throughout
the world. New technology companies were created almost daily, often with vague
business plans, backed by venture capitalists who hoped to strike it rich by funding the
next hugely successful web company.
Referred to (often pejoratively) as “dot coms” after the .com top-level domain name,
many of these companies had meteoric rises, followed by equally precipitous falls. In
the wake of the contentious U.S. presidential election in November 2000 and the
September 2001 terrorist attacks, the global economy slowed, and investors became
more cautious. Without sustainable business models and with less investor funding, many dot coms folded. Some successful Web companies, notably Facebook, were founded well after the dot-com craze of the 1990s had passed, but such later startups were fewer in number and more careful.
4.6 JAVA
The first web pages consisted solely of static (i.e., non-moving) text and images. By
embedding a Java program into an HTML document, however, web designers now had
the ability to include interactive content, such as animations, games, and video, into a
web page. When a user navigated to such a page, the browser would download the
Java program (called an applet) along with the HTML and display both simultaneously.
Java was created in the early 1990s by James Gosling and his colleagues at Sun Microsystems. Gosling intentionally designed Java to share some of the same syntax as C++, but he removed many of the more obscure C-inherited relics of the language and greatly improved its memory management. As a result, many universities adopted Java as their
language of choice for teaching students the concepts of OOP without the frustration of
C++’s idiosyncrasies. Today, Java is still a staple of university-level computer science
education.
Applets were a common feature of many web pages until the early 2000s, when
they were gradually replaced by other technologies, including an improved version of
HTML that supported interactive content directly. Although Java is no longer associated
with interactive web pages, the language itself persisted, and is commonly used today
for (among other things) creating mobile apps for Android devices.
4.7 NeXT
Although Steve Jobs had founded Apple and spearheaded the development of its
successful Macintosh product line, disagreements among Apple’s upper management
eventually led to his being fired from Apple in 1985. Jobs soon founded a new company
called NeXT. NeXT created high-end desktop computers housed in a distinctive black
cubic chassis. The NeXT operating system combined the best of both worlds: it was
based on BSD Unix, which made it stable and powerful, but it also had a GUI, making it
easy to use. Despite its power, NeXT’s high price tag slowed its adoption outside of
niche markets. (Significantly, Tim Berners-Lee used a NeXT computer to implement the
first web server and browser at CERN!)
Only about 50,000 NeXT machines were sold, enough to keep the company going
but not enough to be very profitable. By 1992, it was apparent that NeXT—despite its
powerful hardware and operating system— was not doing well. Jobs tried to license the
operating system to other computer manufacturers but without success, and he began
to consider selling the company entirely.
Meanwhile, three different CEOs had presided over Apple since Jobs left, but the
Macintosh as a platform was languishing. By 1996, Apple’s share had sunk to 4 percent
of the personal computer market, compared to 16 percent in its heyday. To its credit,
Apple had produced an impressive range of Macintosh laptops and desktops, but the
development of the MacOS operating system had stalled since the early 1990s.
MacOS was long overdue for an overhaul, but Apple’s attempts to do so had
faltered. Eventually, Apple’s then-CEO, Gil Amelio, and Jobs agreed that Apple would buy NeXT outright, with
Jobs rejoining Apple as a member of the board of directors.
4.10 Mobile Computing
The idea of a truly portable computer—one that could be held in one hand and used
while walking—has been in gestation for a long time.
PDAs
One early genre of handheld computers was known as personal digital assistants,
or PDAs. The Psion Organizer II, released in 1986, is generally considered the first PDA.
In 1993, Apple released its first attempt at a PDA, called the Newton. It included a stylus and ambitious handwriting-recognition software. However, it was bulky and expensive and never sold very well.
The Palm Pilot garnered a devoted following and was for many Americans their
introduction to mobile computing. By Rama - CC BY-SA 2.0 fr,
https://commons.wikimedia.org/w/index.php?curid=36959631
In fact, one of Steve Jobs’s first acts upon his return to Apple was to discontinue
production of the Newton.
One particularly popular PDA was the Palm Pilot. Like the Newton, the Palm Pilot included a stylus, but instead of attempting “natural” handwriting recognition, Palm used a simplified gesture-based alphabet called Graffiti that users had to learn in order to input text. Furthermore, users could enter Graffiti strokes only in a specific area of the device, directly below the screen.
In spite of these limitations, the Palm Pilot garnered a devoted following and was
for many Americans their introduction to mobile computing. Importantly, Palm also made
programming tools available to software developers, leading to an extensive library of
third-party apps and games for Palm devices. In 1999, a Canadian company, Research
In Motion, released a PDA called BlackBerry. It was known for its easy-to-use keyboard
and fast wireless email service.
Portable GPS Devices
GPS stands for Global Positioning System, and it refers to a constellation of navigation
satellites launched into orbit by the U.S. Air Force in the 1970s. Computers on the
ground can receive signals from the satellites to pinpoint their exact position. When
combined with a digital map and routing algorithms, GPS becomes a powerful
navigation tool. The first GPS-powered navigation computers appeared as an expensive
option on luxury cars in the 1990s.
An original Sony Walkman from 1979. The Walkman was an influential portable music player.
By Binarysequence - CC BY-SA 4.0,
https://commons.wikimedia.org/w/index.php?curid=40687158
Cellular Phones
Cellular phones (also known as cellphones) have a history that stretches back to
the early twentieth century, when researchers began investigating the feasibility of a
wireless telephone network. However, the first commercially available cellular phone
appeared in 1983. The first cellphones received the unflattering nickname “brick” because
of their large size, but over time they became lighter and slimmer. One popular form
factor was called the clamshell because it folded in half and could easily fit in a pocket.
4.11 Smartphones
By the early 2000s, there were many portable computing devices available. There
were PDAs for business use, GPS units for navigation, MP3 players for playing music,
cellphones for communication, and a host of digital cameras and portable video game players. It was only a matter of time before manufacturers began combining these functions into a single handheld device: the smartphone.
BlackBerry
Among the companies creating smartphones was Research In Motion: its BlackBerry devices soon moved beyond email and included cellular telephones. BlackBerry phones gained a loyal
following from their customers, who humorously referred to their devices as
“Crackberries.” The Finnish electronics company Nokia, a longtime manufacturer of
cellphones, began creating smartphones as well. Nokia’s phones predominantly used
an operating system called Symbian, which was derived from the operating system
used by Psion in some of its early PDAs. Microsoft also licensed its Windows Mobile
operating system to a number of smartphone manufacturers.
The iPhone
The rise of the smartphone left the folks at Apple feeling vulnerable. As a result,
Jobs decided that Apple should also create a smartphone. Jony Ive, leader of design at
Apple, persuaded him to use a touchscreen. Unlike other smartphones, the front of
Apple’s phone would consist almost entirely of glass. By making the entire phone a
touchscreen display, the device would be truly all-purpose: it could switch between a
keyboard, an e-book reader, a telephone keypad, or a video player on demand. Apple
released its new phone in 2007 and called it the iPhone, and it quickly set a new
standard for what a smartphone should look like. By the end of 2010, Apple had sold
over 90 million iPhones.
People waiting to buy the iPhone upon its release in New York City, June 29, 2007.
By I, Padraic Ryan, CC BY-SA
3.0, https://commons.wikimedia.org/w/index.php?curid=2323128
Huawei
Founded in 1987, Huawei is a leading global provider of information and communications technology (ICT) infrastructure and smart devices. It has about 207,000 employees and operates in over 170 countries and regions, serving more than three billion people worldwide. The company's stated mission is to bring digital technology to every person, home, and organization for a fully connected, intelligent world.
The company and its subsidiaries principally provide end-to-end information and communications technology solutions. This includes the research, design, manufacture, and marketing of telecom network equipment, IT products and solutions, cloud technology and services, digital power products and solutions, and smart devices for telecom carriers, enterprises, and consumers.
Huawei overtook Ericsson in 2012 as the largest telecommunications equipment
manufacturer in the world. Huawei surpassed Apple and Samsung in 2018 and 2020
respectively to become the largest smartphone manufacturer worldwide.
Customers line up to make an appointment to buy Huawei's Mate 60 Pro mobile phone, which supports 5G networks, at Huawei's flagship store in Shanghai, China, August 30, 2023. By www.alamy.com
Android
Only one company offered a viable alternative to the iPhone: Google. In 2005, Google had acquired a company called Android, which was creating a new operating system for smartphones. Google licensed Android to a number of manufacturers, most notably Samsung. Within just a few years, the iPhone and Android came to dominate the smartphone market.
App Stores
One of Apple’s and Android’s strengths is their “app stores.” Traditionally, users
purchased software either as a shrink-wrapped product at a store or by downloading it
from the website of the software vendor. However, this required the user to go out and
find the software.
Taking a cue from Apple’s iTunes music store, the Apple App Store and its Android
equivalent, Google Play, instead provide a centralized repository for software
distribution. In exchange for hosting the app store, Apple and Google retain a
percentage of all app sales and in-app purchases. This may be an inconvenience for
large software firms that already have established channels of distribution for their
software. However, for small independent app developers, a centralized app store
greatly lowers the barriers to entry into the software business, as they no longer need
to create their own storefront websites or compete for limited space on a retail store’s
shelf.
4.12 Web 2.0
For the first decade of the World Wide Web, there was a clear delineation between viewing and creating web pages: most users simply read content that others had published. Gradually, however, new kinds of websites emerged that made publishing as easy as reading. Perhaps the earliest example of this was the weblog, or blog: a type of website that allowed users to post content easily without managing a server or knowing HTML. The online encyclopedia Wikipedia is another example of a website that is as easy to edit as it is to read.
Social Media
Journalists coined a term for this new type of interactive website, where publishing
is as common as reading: Web 2.0. The most visible aspect of Web 2.0 is the advent of
social networking websites. Two of the first social networking services to gain
prominence were Friendster and MySpace, launched in 2002 and 2003, respectively. However, one
troubling aspect of early social networking services was the lack of authentication:
there was no way to prevent anyone from creating one or more fake profiles. Both
Friendster and MySpace were soon eclipsed by another social network: Facebook.
Facebook
In the fall of 2003, Mark Zuckerberg was a sophomore studying computer science at Harvard College. During that semester, he created two social websites at Harvard: Course Match (comparing course schedules) and Facemash (rating people based on physical
attractiveness). After taking a course on graph theory, Zuckerberg decided to create an
electronic version of the “facebook,” a printed student directory published by Harvard.
Registering the domain name thefacebook.com and using an off-campus hosting
service, Zuckerberg released his creation in February 2004.
Mark Zuckerberg, creator of Facebook, photographed in 2005.
By Elaine Chan and Priscilla Chan – CC BY 2.5,
https://commons.wikimedia.org/w/index.php?curid=1779625
Instagram
As smartphone use increased, it was not long before social networks like Facebook
adapted to take advantage of the unique abilities of smartphones, especially their built-
in cameras and geolocation features. For example, users could take photos on their
phone and upload them to Facebook instantly, along with the precise location where the
photo was taken. Another social network, Instagram, was first made available as a
smartphone app in 2010 and only later ported to web browsers. It proved so popular
that it was acquired by Facebook in 2012.
WeChat
WeChat, known in Chinese as Weixin, is a Chinese instant messaging, social media, and mobile payment app developed by Tencent. First released in 2011, it became the world's largest standalone mobile app in 2018, with over 1 billion monthly active users. WeChat has been described as China's super-app because of its wide range of functions. It provides text messaging, hold-to-talk voice messaging, broadcast (one-to-many) messaging, video conferencing, video games, mobile payment, sharing of photographs and videos, and location sharing.
Weibo
Weibo is a Chinese microblogging website. Launched on 14 August 2009, it is one of the biggest social media platforms in China, with over 582 million monthly active users (252 million daily active users) as of Q1 2022. The platform has been a huge financial success, with a surging stock price, lucrative advertising sales, and high quarterly revenue and earnings. At the start of 2018, it surpassed the US$30 billion market valuation mark for the first time.
TikTok
TikTok, known in Chinese as Douyin, is a short-form video hosting service owned by the Chinese internet company ByteDance. It hosts user-submitted videos, which can range in duration from three seconds to 60 minutes, and is accessed primarily through a smartphone app.
Since its launch, TikTok has become one of the world's most popular social
media platforms, using recommendation algorithms to connect content creators with
new audiences.
In April 2020, TikTok surpassed two billion mobile downloads
worldwide. Cloudflare ranked TikTok the most popular website of 2021,
surpassing Google. The popularity of TikTok has allowed viral trends
in food and music to take off and increase the platform's cultural impact worldwide.
4.13 Tablets
After the initial success of the iPhone and Android phones, the next natural step
was to extend the smartphone’s intuitive touchscreen interface to a new category of
device: the tablet. The iPad by Apple, introduced in 2010, was essentially an iPhone
with a larger screen but without phone capabilities. Samsung and others countered with similar Android-based devices. The term tablet was likely borrowed from an older
product, the “tablet PC,” which was a Windows laptop that supported a stylus and
handwriting recognition software.
Not every tablet runs Apple or Android software. Microsoft has attained a fair
amount of success in marketing a Windows-powered tablet, the Microsoft Surface. The
Surface Pro tablet comes with a detachable keyboard and is a formidable alternative to
a laptop computer.
To create apps for iPhone and iPad devices, software developers usually have to
use one of two programming languages: Objective-C or Swift. Both of these languages
are virtually unknown outside of Apple programming. In contrast, Android apps are
typically written in Java. This was a strategic decision: by using such a well-known
language for app development, Google can tap into the huge base of programmers who
already know Java, thus lowering the learning curve for those wishing to create apps for
Android.
Oracle, a company best known for its database software, acquired Sun
Microsystems in 2009. As discussed earlier, the Java programming language was
created at Sun Microsystems in the early 1990s. Shortly after the acquisition, Oracle
contended that Google had not properly paid Oracle for Google’s use of Java on the
Android platform, and Oracle demanded billions of dollars of royalty payments from
Google. Oracle’s contention was primarily about Google’s copying the Java API
(application programming interface). An API is a set of programming instructions that
one organization makes available to another to facilitate the development of software.
Oracle claimed that Google had violated Oracle’s copyright; however, Google claimed
that its use of the API constituted fair use.
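The distinction at the heart of the dispute, between an API's declarations and the code that implements them, can be sketched with a short, hypothetical example (written here in Python for brevity, although the actual case concerned the Java API):

from abc import ABC, abstractmethod

# The "API": a set of named operations that one organization publishes so that
# other programmers can write software against it.
class MessageService(ABC):
    @abstractmethod
    def send(self, recipient: str, text: str) -> bool:
        """Deliver a message to the recipient; return True on success."""

# The "implementation": the code behind the API, which can be written
# independently. Here are two different implementations of the same API.
class PrintingMessageService(MessageService):
    def send(self, recipient: str, text: str) -> bool:
        print(f"to {recipient}: {text}")
        return True

class DiscardingMessageService(MessageService):
    def send(self, recipient: str, text: str) -> bool:
        return False  # silently drops every message

# Code written against the API works with either implementation, which is why
# compatible APIs matter so much to software developers.
def notify(service: MessageService) -> None:
    service.send("[email protected]", "Your order has shipped.")

notify(PrintingMessageService())
notify(DiscardingMessageService())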
The case made it all the way to the United States Supreme Court, which ruled in
favor of Google. This was perceived as a good thing by many in the IT industry, even
by Google’s competitors. Had the court decided differently, it might have limited the
ability of software developers to write programs for different computer models and
operating systems.
Hosting
With cloud computing, large organizations such as IBM, Microsoft, and Amazon
build huge data centers with thousands of networked computers. They then allow other
companies to lease time on their computers for a monthly fee. The data center takes
care of maintaining the computers and backing up data, spreading the costs among all
the subscribers. This model is especially beneficial for small businesses, which may not
have the expertise (or space) to operate their own web server on-site. Some large
businesses have likewise opted to outsource their IT operations to the cloud. For
example, in 2016 the popular streaming video service Netflix shut down its own data
center and has been using Amazon Web Services (AWS) for its infrastructure ever since.
Software as a Service
The advent of cloud computing gave rise to another business model, known as
Software as a Service (SaaS). With this approach, users pay a subscription fee and
access software through a web browser rather than installing and running programs on
their personal computers. The application software itself and the users’ data are stored
in the cloud. One of the earliest examples of this was Salesforce.com, which launched
in 1999. Salesforce.com’s first product was a “customer relationship management”
application that allowed traveling salespeople to keep track of customers and
appointments from their laptops, without burdening the home office with synchronizing
sales data stored on multiple computers and keeping software updated.
Web-based email services such as Hotmail (later rebranded as Outlook.com) and
Gmail are more common examples of the software as a service model. Google’s very
popular suite of web-based office tools, such as Google Docs and Google Sheets, is another example of SaaS. Microsoft 365 is the cloud-based equivalent to Microsoft's
traditional Office suite and is a competitor to Google Docs. Adobe, known for its image-
editing and desktop publishing tools, like Photoshop and Acrobat, now also offers these
tools as subscription-based SaaS versions.
4.16 Blockchain
Blockchain is a fairly new technology with many potential uses. Blockchain is,
essentially, an online ledger. A ledger is just a book containing a list of financial
transactions. A ledger could also be used to “show status, such as citizenship or
employment; confirm membership; show ownership; track the value of things.”15 One
aspect of blockchain that makes it different from traditional ledgers is that each entry
contains a “hash” (a sequence of bytes) derived mathematically from the contents of the
previous entry. This means that entries in a blockchain cannot be deleted or modified
without compromising the integrity of the entire ledger. This allows for complete
transparency when making transactions.
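How hash-chaining protects a ledger can be seen in a minimal sketch in Python (a hypothetical toy example using the standard hashlib library, not any real blockchain software): every entry stores the hash of the entry before it, so changing an old entry breaks every hash that follows.

import hashlib

def entry_hash(data: str, prev_hash: str) -> str:
    """Hash an entry's data together with the hash of the previous entry."""
    return hashlib.sha256((prev_hash + data).encode("utf-8")).hexdigest()

# Build a tiny ledger: each entry records its data plus the previous entry's hash.
ledger = []
prev = "0" * 64  # placeholder hash for the very first entry
for data in ["Alice pays Bob 5", "Bob pays Carol 2", "Carol pays Dave 1"]:
    h = entry_hash(data, prev)
    ledger.append({"data": data, "prev_hash": prev, "hash": h})
    prev = h

def verify(ledger) -> bool:
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        if entry["prev_hash"] != prev or entry_hash(entry["data"], prev) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

print(verify(ledger))                      # True: the chain is intact
ledger[0]["data"] = "Alice pays Bob 500"   # tamper with an old entry
print(verify(ledger))                      # False: the tampering is detected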
Bitcoin
Blockchain is most closely associated with a related technology, bitcoin. Bitcoin is
a form of digital currency not associated with any government. It uses a blockchain to
track the ownership of bitcoins. Every purchase or sale of bitcoin is recorded in a
blockchain. Bitcoin is perhaps the best-known use of blockchain, but that is all it is—one
application. Blockchain itself is a general-purpose technology.
Both bitcoin and blockchain were invented by Satoshi Nakamoto, who posted a
paper describing bitcoin on the Internet in 2008. Mysteriously, Satoshi Nakamoto’s
identity is not publicly known, nor have they posted any public communication online
since 2011. The name itself is likely a pseudonym. Whoever Satoshi is, they have not
been forthcoming with their real identity.
NFTs
Non-fungible tokens, or NFTs, are another application of blockchain technology. A
“token,” in this context, means a digital certificate of ownership. “Non-fungible” means
non-replaceable, or unique. In contrast, money is fungible—one dollar has the same
value as any other dollar; if two people exchange dollar bills, they both end up with the
same amount of money as before. However,
…the token in an NFT represents a certificate that attests to the authenticity,
uniqueness, and ownership of a digital object, such as an image, a music file, a video,
an online game, or even a tweet or a post on major social media. In this sense, the NFT
indisputably identifies the digital property of such an object.16
Thus, an NFT is a unique digital artifact; it can be bought and sold but not duplicated.
The ownership of any given NFT is verified on a blockchain. There are accounts of
artists selling NFTs of their digital artwork for millions of dollars, which understandably
generate a lot of media hype. However, will the momentum last? Or are NFTs just a
speculative bubble whose importance will fade? Only time will tell.
4.17 Artificial Intelligence
Artificial Intelligence (AI) has been an active area of computer science research
since at least the 1950s. As mentioned previously in this resource guide, the old
programming language LISP has long been associated with the study of AI, and the
founder of the GNU project, Richard Stallman, worked in this area. Over the past seven
decades, however, interest in AI has waxed and waned. Part of this is due to a failure
to deliver on the original lofty goals of AI visionaries. Science fiction from the 1960s,
such as the film 2001: A Space Odyssey and the television series Star Trek, reflected
the common assumption that computers would soon be able to converse freely with
humans. However, computers have yet to achieve human-like intelligence, nor are they
likely to in the foreseeable future.
Machine Learning
Nevertheless, the past two decades have seen a resurgence in interest in AI.
Researchers have moved away from the traditional approach of simulating formal logic
on a computer to an approach known as machine learning (ML). In ML, algorithms
analyze enormous amounts of data to find patterns and then apply those patterns to
new inputs. For example, voice-activated personal assistants such as Apple’s Siri and
Amazon’s Alexa do not necessarily “understand” users’ commands. Rather, their
algorithms (greatly simplifying) do something like this: The user just said X. In my
training data, input X correlated to action Y most of the time, so I will perform action Y.
ML techniques are also used in the design of “chatbots” such as ChatGPT. ChatGPT
and similar tools employ vast amounts of training data, allowing users to engage in text-
based chats with the system on a variety of subjects.
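A deliberately simplified toy sketch in Python (hypothetical, and nothing like the scale or sophistication of a real assistant such as Siri or Alexa) captures the flavor of this approach: the "training data" is a handful of past utterances labeled with actions, and a new utterance is assigned the action whose examples it overlaps with the most.

# Toy pattern matching in the spirit of machine learning: count the word overlap
# between a new utterance and labeled examples, then pick the best-scoring action.
training_data = {
    "play some music": "PLAY_MUSIC",
    "play my favorite song": "PLAY_MUSIC",
    "what is the weather today": "REPORT_WEATHER",
    "will it rain tomorrow": "REPORT_WEATHER",
    "set a timer for ten minutes": "SET_TIMER",
}

def choose_action(utterance: str) -> str:
    words = set(utterance.lower().split())
    scores: dict[str, int] = {}
    for example, action in training_data.items():
        overlap = len(words & set(example.split()))
        scores[action] = scores.get(action, 0) + overlap
    # Perform the action that correlated most strongly with similar past inputs.
    return max(scores, key=scores.get)

print(choose_action("please play a song"))       # PLAY_MUSIC
print(choose_action("what's the weather like"))  # REPORT_WEATHER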
Robotics
The idea of a humanoid robot has long fascinated technologists and science fiction
authors. Some limited-use robots have found acceptance in specialized applications like food preparation, health care, and the military. However, the era of general-purpose
robots is not yet here. In 2021, Elon Musk announced that Tesla is working on a line of
intelligent robots; time will tell how successful these turn out to be.
Section IV Summary
• The GNU project popularized the open-source movement, aided by the Internet.
  - When combined with the Linux kernel, GNU became a stable and inexpensive platform for the rapid growth of the Internet.
• The World Wide Web was a natural combination of two existing technologies: hypertext and networking.
  - Early Web browsers were text-only. Mosaic was the first to combine text and images.
  - Netscape displaced Mosaic as the dominant Web browser, which was in turn displaced by Internet Explorer.
  - The Web led to a flurry of entrepreneurial activity, known as the “dot com bubble.” Without meaningful business plans, many startups failed.
• Java is a programming language that appeared around the same time as the World Wide Web and found a niche use for creating applets in web pages.
  - Java is still a popular language today.
• Apple bought NeXT, bringing Steve Jobs back.
  - The iMac became the first in a new line of Apple computers, bringing new competition to the “Wintel” platform.
  - NeXT’s OS became MacOS X, bringing Unix to the mainstream.
• Microsoft spent much of the 1990s in a lawsuit over the Windows monopoly.
• Computers became handheld devices.
  - PDAs allowed for remote work.
  - GPS allowed for easy navigation.
  - MP3 players allowed for music everywhere.
  - Cellular phones allowed for communication.
  - “Smartphones” combined all of the above into one!
• App stores provide a unified storefront for buying software.
• Social media is the most prominent example of “Web 2.0”—making it just as easy to post content as it is to read it.
  - Social media, combined with smartphones with cameras and geolocation, led to applications such as ride sharing.
• Top Social Media Platforms in China
  - WeChat
  - Weibo
  - TikTok
• Tablets were a natural outgrowth of smartphones. With an attached keyboard, tablets are almost indistinguishable from laptops.
• Oracle sued Google over Google’s use of Java APIs in Android. The Supreme Court ruled in favor of Google, holding that Google’s use of the APIs was fair use.
• Cloud computing is today’s version of timesharing.
  - Web hosting and “software as a service” have finally made timesharing cost effective.
• The COVID-19 pandemic made us more reliant than ever on computer networks.
  - Many workers had to telecommute; for some, it may become the new normal.
• New technologies to watch out for:
  - Blockchain: the enabling technology behind both bitcoin and NFTs; highly speculative at this point
  - Artificial Intelligence: This is an old field of study, but machine learning has rejuvenated it.
  - Quantum Computing: storing bits of information in atoms instead of electricity; it’s mostly theoretical at this point, but it could result in drastically faster computers in the future.
• Notable Pioneers:
  - Richard Stallman: founder of the GNU project, popularized the open source movement
  - Linus Torvalds: creator of the Linux kernel, which is the “heart” of the GNU/Linux operating system
  - Andrew Tanenbaum: creator of the Minix operating system, which provided the inspiration for Linux
  - Ted Nelson: coined the word “hypertext”
  - Tim Berners-Lee: inventor of the World Wide Web
  - Marc Andreessen and Eric Bina: co-developers of Mosaic, the first graphical web browser
  - James Gosling: creator of the Java programming language
  - Steve Jobs: returned to Apple in 1997 and led the company back to profitability
  - Jony Ive: designer at Apple responsible for the iMac, iPod, and iPhone
  - Mark Zuckerberg: creator of Facebook
  - Gordon Moore: co-founder of Intel, creator of “Moore’s Law”
  - Satoshi Nakamoto: inventor of bitcoin and blockchain
Section IV Glossary
1. gestation—the process by which a new idea, piece of work etc. is developed, or the period
of time when this happens
2. clamshell—having a cover or other part that opens and shuts like the shell of a clam
3. delineation—the act of describing, drawing or explaining something in detail; the
description, drawing or explanation itself
4. authentication—the act of proving that something is real, true, or what somebody claims it is
5. tablet—a handheld touchscreen computer, generally larger than a smartphone; the most
prominent example is the iPad, introduced in 2010
6. stylus—a special pen used to write text or draw an image on a special computer screen
Conclusion
The history of computing is a vast topic, and there are a number of ways to
approach it. In this resource guide, we have opted for a traditional treatment of the
subject, beginning with the origins of computing in office automation and the Industrial
Revolution. Gradually, mechanical devices gave way to electromechanical, and then to
purely electronic machines. We have traced the story of electronic computing from its
genesis in World War II and then followed the careers and fortunes of the people and
companies that nurtured the development of computers over the following decades.
Over time, improvements in technology have yielded ever faster devices with ever
smaller footprints: the mainframe, the minicomputer, the personal computer, the
smartphone. This does not imply that newer categories necessarily replace the older
ones: for example, today’s smartphones certainly coexist with personal computers. Less
obviously, mainframes and minicomputers are still being produced today, albeit for
specialized applications.
Historians Thomas Haigh and Paul Ceruzzi liken the computer to a “universal
solvent”:
That invokes a mythical liquid able to dissolve any substance. A very partial list
of the technologies dissolved by electronic computers would include adding machines,
mechanical control systems, typewriters, telephones, televisions, music recording
systems, paper files, amusement machines, film-based cameras, videotapes, slide
projectors, paper maps, letters, and newspapers.
Moreover, Haigh and Ceruzzi predict that the computer may one day dissolve
itself—that is, the computer may become so integrated into other devices that we no
longer regard it as a separate unit. Already, today’s televisions and video players “still
exist as distinct classes of machine but their internal workings have been replaced with
computer chips running specialized software.” 17
Perhaps the most surprising example of this phenomenon is the automobile. Indeed,
Elon Musk, the CEO of Tesla, has stated that his team “designed the Model S to be a
very sophisticated computer on wheels.”18 Few of us consider our cars to be computers,
but that is, in fact, what they have become. Computers have become such an integrated
part of our daily life that they are both “everywhere and nowhere”19 at the same time.
In spite of the computer’s omnipresence, it is nonetheless fundamentally just a tool.
And like any tool, it simply magnifies the effort of whoever wields it. Thus, computers
can affect society for better or for worse. A computer can spread and amplify societal
ills like pornography, racism, or fake news. On the other hand, a computer can also
connect us with friends and family, promote commerce, and increase opportunities for
education. In this resource guide, we’ve looked at how computing has shaped society
in the past. How it will shape the future is up to you.
NOTES
1. See Reckoners: The Prehistory of the Digital Computer by Paul Ceruzzi, page 28. Ceruzzi opines:
“So for a moment, then, Schreyer cracked open the door to an awesome and strange new world,
but that door slammed shut before he could pass through. Only after the war did a now-divided
Germany enter the electronic computer age, but as a follower, not a leader.”
2. Quoted in ENIAC: The Triumphs and Tragedies of the World’s First Computer by Scott McCartney,
page 23.
3. Quoted from A History of Modern Computing (2nd edition) by Paul Ceruzzi, page 84.
4. This example is adapted from Computer: A History of the Information Machine (3rd edition) by
Martin Campbell-Kelly et al., page 172.
5. This example is adapted from Computer: A History of the Information Machine (3rd edition) by
Martin Campbell-Kelly et al., page 175.
6. Quoted from A History of Modern Computing (2nd edition) by Paul Ceruzzi, pages 93-94.
7. Quoted in Computer Architecture: A Quantitative Approach (3rd edition) by John Hennessy and
David Patterson, Appendix F.
8. Quoted in The Mythical Man-Month: Essays on Software Engineering, Anniversary Edition (1995)
by Frederick P. Brooks, Jr., page 25.
9. See, for example, Design Patterns: Elements of Reusable Object-Oriented Software (1995) by
Erich Gamma et al., and The Pragmatic Programmer (1999) by Andrew Hunt and David Thomas.
10. See, for example, the books BASIC Computer Games (1978) and its sequel, More BASIC
Computer Games (1979) by David H. Ahl, which contain source code listings of many games
originally written for the PDP-11.
11. Quoted from Computer Networking: A Top-Down Approach Featuring the Internet (3rd Edition)
by James F. Kurose and Keith W. Ross, page 56.
12. https://www.guinnessworldrecords.com/world-records/72695-most-computer-sales
13. Quoted in A History of Modern Computing (2nd edition) by Paul Ceruzzi, page 301.
14. Quoted from A History of Modern Computing (2nd edition) by Paul Ceruzzi, page 303.
15. Quoted from Blockchain: the Next Everything (2019) by Stephen P. Williams, page 15.
16. Quoted in Non-Fungible Tokens (NFTs): Examining the Impact on Consumers and Marketing
Strategies (2022) by Andrea Sestino et al., page 12.
17. Ibid.
18. Quoted in A New History of Modern Computing (2021) by Thomas Haigh and Paul Ceruzzi, page
411.
19. Quoted from A New History of Modern Computing (2021) by Thomas Haigh and Paul Ceruzzi,
page 408.