Until recently, it was common practice to divide the history of computers into periods or
generations, depending on the technology used to build them. The rapid advances in current technologies,
however, have made this impractical. Still, a survey of computer history by generations is very
educational.
Zeroth Generation
Authors generally disagree on the point in time at which a narrative of the history of computers should
begin. As early as 3000 BC, the Chinese are said to have used an early form of the bead-and-wire abacus.
In AD 876, the use of a symbol for zero was first recorded in India (PCWS, 2000: 39). It is said that the
basic idea of computing developed in the 1200s, when a Muslim cleric proposed solving problems with a
series of written procedures (Golden Ink, 2000).
Edmund Gunter of England invented the slide rule, precursor of the calculator, in 1620. Blaise
Pascal invented the first mechanical calculator in 1642, using gears with teeth to store digits. When a gear
rotated past the tooth representing the digit 9, the next gear to the left shifted one tooth, or digit (Mandell,
1979: 22). The calculator could perform the work of six accountants, but the initial public reaction was not
overwhelming (PCWS, 2000: 39).
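To make the carry mechanism concrete, the following short Python sketch (a modern illustration, not part of any historical source) models a row of decimal digit wheels: adding to a wheel that passes 9 rolls it back to 0 and advances the wheel to its left by one tooth.

    # Illustrative sketch of Pascal-style digit wheels with ripple carry.
    # wheels[0] is the least significant (rightmost) digit wheel.
    def add_digit(wheels, position, amount):
        """Add amount (0-9) to the wheel at position, carrying leftward
        whenever a wheel rotates past 9, as the gear teeth did."""
        wheels[position] += amount
        while position < len(wheels) and wheels[position] > 9:
            wheels[position] -= 10          # the wheel rolls over past 9 ...
            if position + 1 < len(wheels):
                wheels[position + 1] += 1   # ... nudging the next wheel left
            position += 1                   # that wheel may itself carry

    wheels = [0] * 8                  # eight wheels, all showing 0
    add_digit(wheels, 0, 7)           # units wheel now shows 7
    add_digit(wheels, 0, 5)           # passes 9: units shows 2, tens shows 1
    print(list(reversed(wheels)))     # [0, 0, 0, 0, 0, 0, 1, 2], i.e. 12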
The German mathematician Gottfried Wilhelm von Leibniz unveiled his invention, a four-function calculator that could also calculate square roots, in 1694. He built upon the ideas of Pascal and
championed the binary representation of numbers. “It is unworthy of excellent men to lose hours like slaves in
the labor of calculation which could safely be relegated to anyone else if machines were used,” said
Leibniz.
In 1801, the Frenchman Joseph-Marie Jacquard built a loom that wove by reading holes
punched in small sheets of hardwood. These plates were inserted into the loom, which read the pattern and
created the weave. Powered by water, this machine came 140 years before the development of the modern
computer (Golden Ink, 2000), which would use the Jacquard concept that data could be stored in punched
cards.
In 1822, Charles Babbage designed the difference engine, a machine that could execute complex
computations and print the results without human intervention. It was originally intended to automatically
calculate mathematical tables, which at that time were produced manually and therefore contained many
errors and inaccuracies. The difference engine used the method of finite differences and iteration to
compute complicated functions, and could produce mathematical tables with results of up to five significant
digits. However, Babbage could not complete his project when he tried to build a larger model: the
components he needed could not be produced to the required tolerances (Mandell, 1979: 22). By 1842,
nevertheless, Babbage’s engines were sufficiently well known that Ada Lovelace translated a short written
account of them and added her own notes on how such a machine could be programmed. She is generally
regarded as the first programmer (Golden Ink, 2000).
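The method of finite differences that the engine mechanized can be illustrated with a small worked example. For a second-degree polynomial such as f(x) = x^2 + x + 1, the second differences are constant, so an entire table can be generated by repeated addition alone; the Python sketch below (a modern illustration, not Babbage’s own procedure) tabulates the polynomial from its initial value and initial differences.

    # Tabulating f(x) = x^2 + x + 1 by the method of finite differences.
    # For a degree-2 polynomial the second difference is constant, so every
    # new entry is produced purely by additions -- no multiplication needed.
    def difference_table(f0, d1, d2, steps):
        # f0: f(0); d1: first difference f(1) - f(0); d2: constant second difference
        values = [f0]
        for _ in range(steps):
            f0 += d1       # next function value
            d1 += d2       # next first difference
            values.append(f0)
        return values

    # f(0) = 1, f(1) - f(0) = 2, constant second difference = 2
    print(difference_table(1, 2, 2, 5))   # [1, 3, 7, 13, 21, 31]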
In 1832, Babbage’s brilliant mind came up with another project, the analytical engine, the first
computer designed to be driven by external instructions (PCWS, 2000: 40). It could perform the four fundamental
mathematical operations, i.e., addition, subtraction, multiplication, and division, and store intermediate
results in a memory unit. It was to have an arithmetic unit to perform computations, a punched-card input
system, and an external storage unit (Mandell, 1979: 24). Due to lack of funding, however, and possibly
because Babbage was far ahead of his time, he never built the steam-driven analytical engine.
In 1854, George Boole, professor of mathematics at Queen’s College, Cork, published his thoughts on
symbolic logic in An Investigation of the Laws of Thought. Decades later, this work would form the basis of
computer science applications, and Boole is widely recognized today as a founding figure of computer
science. In 1857, Sir Charles Wheatstone introduced the continuously feeding paper tape that could be used
for storing and reading data.
The US Bureau of the Census commissioned Herman Hollerith of the Massachusetts Institute of
Technology to build a tabulating machine in the 1880s. Hollerith showed that by putting census data on
punched cards and having a machine do the sorting, 250 cards could be sorted in one minute,
greatly reducing census tabulation time from ten years to two and a half. In 1896, perhaps emboldened by
the success of his machine in the 1890 census, Hollerith founded the Tabulating Machine Company to
commercially manufacture and market punched-card equipment. Fifteen years later, the Tabulating Machine
Company would merge with others to form the Computing Tabulating Recording Company, renamed
International Business Machines, or IBM, in 1924.
Dorr E. Felt introduced depressible keys in 1885 with his four-function calculator, the
Comptometer.
In 1897, Karl Braun developed the cathode ray tube (PCWS, 2000: 41), which would become a
major factor in the manufacture of computer monitors or video displays.
Konrad Zuse, a German construction engineer, built a mechanical calculator in 1935. Shortly
after completing it, Zuse started work on a programmable device called the Z-1, the first to
use binary codes; it was completed in 1938.
At this point, claims and counterclaims about who created what first blur any narrative.
While in one account the Americans claim to have been the first to develop this type of computer, another
narrative would show that the Germans had it first. The British also have their own share of the story. The
following narrative thus takes the facts as told by various sources, without purposely asserting any
claim. Besides, there was no global patenting scheme at the time, making it difficult to ascertain
which really came first.
As a concrete example, John Atanasoff of Iowa State University, aided by graduate student
Clifford Berry, began pioneering work in 1937 on the first digital electronic computer, which years later
would become the basis of the world’s first general-purpose electronic computer. The Atanasoff-Berry
Computer or ‘ABC’ used vacuum tubes instead of electrical relays to carry out computations, solving
systems of linear equations common in physics. Atanasoff did not patent his work, expecting the
university lawyers to do so. However, the administration took no interest in it, and the machine was
cannibalised by undergraduate students; his work lay archived for some ten years. In the same
year, George Stibitz developed the first binary circuit at the AT&T Bell Laboratories.
In yet another example of competing accounts, a Russian immigrant named Vladimir Zworykin is credited with inventing the CRT in 1928 (HCS, 1996).
In 1941, Zuse developed the Z-3, which was based on relays and was very sophisticated for its
time. It utilized the binary number system and could handle floating-point arithmetic. Zuse had
considered employing vacuum tubes but chose relays instead because they were more readily available. In
1943, he started work on a general-purpose relay computer called the Z-4, which would survive the
bombs of World War II hidden in a cave in the Bavarian Alps. It would be up and running in a Swiss bank
by 1950 (Maxfield, 1998).
In 1944, Harvard University professor Howard Aiken completed the Harvard Mark I in
conjunction with IBM engineers. Harvard Mark I, officially known as the Automatic Sequence Controlled
Calculator or ASCC, was based on standard IBM electromechanical parts. It used electromagnetic relays,
switches, mechanical counters, shafts, and clutches. Described as sounding like a roomful of ladies
knitting, the machine contained more than 750 000 components, was 50 feet long, 8 feet tall, and weighed
approximately 5 tons. It could add or subtract 23-digit numbers in three-tenths of a second, multiply them
in four seconds, and divide them in ten seconds (Maxfield, 1998). It provided vital calculations for the
Navy in World War II and would be used to calculate artillery and bombing trajectory tables until 1953.
First Generation
Characterized by mechanical and electromechanical technologies, zeroth generation computers
were slow and unreliable. The inertia of moving parts limited computing speed, while gears and levers made
information processing cumbersome. The birth of the electronic age led to the abandonment of gears,
levers, counter wheels, and the like in favor of the new vacuum tube technology.
First generation computers used the vacuum tube as their primary component. These computers
were quite large, requiring considerable space. Moreover, the vacuum tubes were expensive and
frequently broke down, driving costs even higher. The tubes also produced enormous amounts of heat,
so air conditioning was generally required.
First generation computers had relatively small main memories, usually only a few
thousand characters. Processing speeds were at the millisecond level, while the IO media used were
punched cards and paper tape.
John Mauchly and John Presper Eckert of the University of Pennsylvania Moore School of
Engineering developed the Electronic Numerical Integrator And Calculator or ENIAC in 1946. It had
thirty separate units, plus power supply and forced air cooling. It weighed over thirty tons and used 19
000 vacuum tubes, 1 500 relays, 70 000 resistors, 10 000 capacitors, thousands of inductors, and 500 000
solder joints, consuming almost 200 kilowatts of electrical power. The ENIAC was originally
part of the war effort, designed to compute ballistic trajectories very quickly, boasting 5 000 additions or
subtractions per second. However, it came too late for the war and was used instead to solve equations to
prove the theory behind the atom bomb, which it did in 2 hours, as contrasted with the estimated 100 man-years
of conventional computation required for the same task. A 10-digit multiplication would take the
Harvard Mark I around 3 seconds to perform, while ENIAC could do it in 3 milliseconds (Del Rosario,
1996: 2).
In 1948, the Small-Scale Experimental Machine or SSEM, also known as the Manchester ‘Baby’,
made its first successful run of a program. Created by Frederic Callan Williams and Tom Kilburn at the
University of Manchester, it was the first machine that had all the components now classically regarded as
characteristic of the basic computer. It had a RAM of just 32 locations or words, each word consisting of
32 bits; thus the machine had a grand total of 1024 bits of memory.
The EDSAC or Electronic Delay Storage Automatic Computer was completed at Cambridge
University in 1949 by a team led by Maurice Wilkes. It was the first practical, full-scale stored-program
computer: instead of wired control panels, instructions stored in the computer itself controlled the operation
of the machine (Mandell, 1979: 27). The EDSAC introduced the concept of memory hierarchies, having a
primary memory of 1024 words and a drum of 4600 words. The EDVAC or Electronic Discrete Variable
Automatic Computer was completed in 1951; John von Neumann, who was closely associated with its
design, is credited with the introduction of the stored-program concept, wherein data and instructions
reside in the same storage. The EDVAC was actually started ahead of the EDSAC, but the project was
crippled by the departure of some of the people working on it, who left to form their own enterprise.
The first commercial electronic computer became available in 1951, developed by Eckert and
Mauchly. It was known as the UNIVAC I or the Universal Automatic Computer. The UNIVAC brought
more attention and interest to the computer when it correctly predicted the result of the 1952 US presidential
election in favor of Dwight D. Eisenhower, while opinion polls had predicted a victory for Adlai
Stevenson.
In 1952, the MANIAC was built at the Los Alamos Scientific Laboratory, along with the ILLIAC
at the University of Illinois, the JOHNIAC at the Rand Corporation, the SILLIAC in Australia, and others.
John von Neumann and his colleagues completed the IAS bit-parallel machine in 1952 for the Institute
for Advanced Study in Princeton, New Jersey. It became the prototype for the modern computer,
incorporating the five functional units of input, output, memory, ALU, and control. It introduced random-access
main memory and implemented the fetch-decode-execute instruction cycle. The IAS machine
architecture would later become an industry standard and be named the von Neumann architecture.
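The fetch-decode-execute cycle that the IAS machine implemented can be sketched in a few lines of Python (a toy illustration with an invented three-instruction set, not the actual IAS instruction set): a control loop repeatedly fetches the instruction at the program counter, decodes its opcode, and executes it, with program and data sharing the same memory.

    # Toy von Neumann-style machine: program and data share one memory,
    # and the control loop fetches, decodes, and executes instructions.
    LOAD, ADD, HALT = 0, 1, 2           # invented opcodes for illustration

    def run(memory):
        acc, pc = 0, 0                  # accumulator and program counter
        while True:
            opcode, operand = memory[pc]   # FETCH the instruction at pc
            pc += 1
            if opcode == LOAD:             # DECODE ...
                acc = memory[operand]      # ... and EXECUTE: load a data word
            elif opcode == ADD:
                acc += memory[operand]     # add a data word to the accumulator
            elif opcode == HALT:
                return acc

    # Addresses 0-2 hold the program; addresses 3 and 4 hold data words.
    memory = [(LOAD, 3), (ADD, 4), (HALT, 0), 5, 7]
    print(run(memory))                  # 12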
Second Generation
The shift from the use of vacuum tubes to discrete transistors and diodes marked the second
generation, generally regarded as the era when the computer industry was born. The invention of the
transistor, according to the American Institute of Physics, came on 17 November 1947, when Bell
Laboratories scientist Walter Brattain dumped his whole experiment into a thermos of water. Brattain
wanted to get rid of the condensation forming on the silicon contraption he had built, and although putting the
silicon in a vacuum would have been preferable, he knew it would take too much time. He dumped it instead
in water, and was surprised to observe the largest amplification he had seen so far. The rest of the story is
history.
The transistor is smaller than the vacuum tube, costs less, and is more reliable. Its power
consumption is lower, and thus it dissipates less heat. During this period, magnetic tape
emerged as the major IO medium, main memory capacities were under a hundred kilobytes, and
computing speeds reached the microsecond level.
In January 1954, engineers at Bell Labs built the first computer without vacuum tubes.
Called the TRADIC or TRAnsistorized DIgital Computer, the machine occupied a mere three cubic feet, dwarfed by the 1 000-square-foot ENIAC. TRADIC had around 800 point-contact transistors and 10 000
germanium crystal rectifiers. It could do a million logical operations every second and operated on less
than 100 watts of power. TRADIC was 20 times faster than vacuum tube computers.
Many innovations in the use of computers marked the second generation. High-level languages
were developed, such as FORTRAN, the FORmula TRANslator (1954-57), ALGOL (1958), COBOL (1959), and
PL/1 (1963). Moreover, operating systems were also introduced during this period, though they were initially
limited to providing the services required for batch processing. The use of separate IO processors and the
development of compilers led to improvements such as having the OS handle all IO, the introduction of
utilities, and elaborate loaders.
Third Generation
Fairchild Semiconductor filed a patent application for the planar process of manufacturing
transistors in 1959. Fairchild’s Robert Noyce constructed an integrated circuit with components
connected by aluminum lines on a silicon-oxide surface layer. Two years later, Fairchild released its first
commercial integrated circuit.
In 1958, Jack Kilby of Texas Instruments, working independently, developed the idea of creating
a monolithic device on a single piece of silicon. His first integrated circuit contained five components on
a piece of germanium half an inch long and thinner than a toothpick (Polsson, 2000).
The introduction of the integrated circuit, which is composed of numerous transistors manufactured
as a single unit, ushered in the third generation. The third generation computer is characterized by SSI and
MSI technologies. SSI or small-scale integration allows one to ten logic gates to be packed in a single IC, or
around 50 to 100 transistors in one package, while MSI or medium-scale integration allows up to a
hundred gates in one IC, or around a thousand transistors in a single package. The IC offered increased
reliability, faster processing capability, and the advantages of smaller size and lower power dissipation.
Tom Watson Jr’s 5,000,000,000-dollar gamble, as Fortune magazine dubbed it, was the most
influential third generation computer. The IBM System/360, a family of computers with six upward-compatible
models, was introduced in 1964. Customers could choose from 5 processors and 19
combinations of power, speed, and memory. It boasted up to one million words of memory and a new
32-bit CPU with 16 general registers, 4 double-length registers for floating-point operations, and an
extensive instruction set. By the end of the decade, S/360 computers were in use all over the world.
The S/360 was a success, and the industry reaction was varied. Some tried to clone the S/360 and
sell it at a lower price, while others capitalized on the fact that the S/360 was not compatible with previous
IBM machines; products were developed to attract IBM customers who wanted to avoid the difficulties of
conversion and porting to the new S/360. Still others tried to develop entirely different lines of machines
that were more powerful and cost-effective than the S/360. However, not all vendors survived.
Software developments in this generation included support for multiprogramming, time-sharing, and virtual memory.
Fourth Generation
In around 1970, advances in semiconductor technology increased gate densities to several
thousand gates per IC. Termed LSI or large-scale integration, this made possible the
fabrication of a CPU, or even an entire computer, on a single IC. Today’s ¼-square-inch Pentium III
microprocessor, for example, is more powerful than the room-sized ENIAC.
The fourth generation is characterized by LSI and VLSI IC technologies. VLSI stands for very
large-scale integration, which allows the packing of millions of transistors in a single IC. This period is
marked by increased complexity and miniaturization of the IC, rapid increases in processing capacity
and speed accompanied by a corresponding drop in the cost of information processing, easier-to-use software, an
astounding increase in software and hardware sales volume, and an insatiable demand for qualified
computing personnel.
In 1971, Intel Corporation unveiled the first commercially available microprocessor, the 1/6-inch by 1/8-inch 4004. The 4-bit processor contained 2 200 transistors, had 45 instructions, and was almost as powerful as
the ENIAC. In the same year, Intel released the 8-bit 8008. Its variant, the 8080A, was released in 1973
and became an industry standard. Other vendors came up with their own 8-bit microprocessors, such as
the Zilog Z80, developed by former Intel engineer Masatoshi Shima, the Motorola 6800, and Intel’s own
8085.
The microcomputer revolution began with the introduction of the personal computer in 1975.
MITS, or Micro Instrumentation and Telemetry Systems, a small struggling company in Albuquerque, New
Mexico, developed the Altair 8800 computer, the world’s first microcomputer based on the Intel 8080
processor. Named after a star mentioned in the popular TV series Star Trek, the Altair 8800 was sold by
mail order for US$397. MITS proprietor Ed Roberts was able to negotiate with Intel to sell him the 8080
processor for US$75 instead of the going price of US$360. Overnight, the MITS financial position
turned around. It must be noted that the Altair 8800 was sold in kit form, unassembled. It had only
256 characters of memory, no keyboard, and no monitor, and programs had to be written in machine language
(Delaney, 1995).
The first 16-bit microprocessor, the PACE, was developed by National Semiconductor in 1975.
In 1985, the same company introduced the first full 32-bit microprocessor, the NS32032, which
had the speed and power of a minicomputer. Likewise, Intel packed over 275 000 transistors in its 32-bit 80386
microprocessor, which could execute three to four million instructions per second. Four years later, Intel
unveiled the 80486, which contained 1 200 000 transistors and could execute 20 million instructions per
second. Digital Equipment Corporation introduced the first 64-bit processor, the DEC Alpha, in 1993. The
Intel Pentium processor, released in the same year with a 64-bit data bus, had 3 100 000 transistors and was five times
faster than its predecessor, the 80486. It could execute 100 million instructions per second.
In 1995, the state of the art in VLSI technology was the 0.35-micron feature size, which meant that
around 4 million transistors could be packed on a single IC. In 1998, it was 0.25 micron, or 7 million
transistors in a single IC.
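These two data points are roughly consistent with the rule of thumb that transistor density scales with the inverse square of the feature size; the quick check below in Python is an illustration, not a figure from the source.

    # Density scales roughly as 1 / (feature size)^2:
    # shrinking from 0.35 micron to 0.25 micron gives (0.35 / 0.25)**2, about 1.96x,
    # so roughly 4 million transistors grow to about 7-8 million.
    print(int(4_000_000 * (0.35 / 0.25) ** 2))   # about 7 800 000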