Control Keys

 →  move to next slide (also Enter or Spacebar)
 ←  move to previous slide
 d  enable/disable drawing on slides
 p  toggle between print and presentation view
CTRL  +  zoom in
CTRL  -  zoom out
CTRL  0  reset zoom

Slides can also be advanced by clicking on the left or right border of the slide.

Contents

  • Mechanical Calculating Aids
  • Mechanical Calculator Machines (from 1600)
  • Relay-based Computers (from 1941)
  • Computers based on Electron Tubes (from 1946)
  • Transistor-based Computers (from 1955)
  • Computers with Integrated Circuits (from 1965)
  • Microprocessors (from 1971)

Mechanical Calculating Aids (Abacus), ca. 500 B.C.

abacus
Abacus
Image source: Abakus, Flickr user: zakulaan, Creative Commons License
  • An abacus is a mechanical device that simplifies counting and serves as a memory aid for simple calculations
  • It consists of a frame and several rods on which stones or beads are lined up in a movable fashion
  • It has been used for thousands of years, e.g. for trading on the markets

Japanese Abacus (Soroban), ca. 1600

soroban
Figure labels: heaven beads (count 5), earth beads (count 1), reckoning bar
  • Each column/rod represents a digit from 0 to 9
  • The beads below the reckoning bar count as one, and the beads above count as five
  • Beads are counted when they are pushed against the reckoning bar
Image source: Abacus, Flickr user: Whity, Creative Commons License, modified

Japanese Abacus (Soroban)

  • This is an interactive Soroban simulator
  • Click on the beads to move them

Japanese Abacus (Soroban)

  • Example: 1823 + 2333 = ???
  • soro_1823
    Step 1: Enter 1823 (see image)
  • Step 2: Add 2333
    • Proceed from left to right, one digit at a time
    • Perform the addition by simply adding beads
  • If no matching beads are available, the 5's or 10's complement can be used (see the sketch below), for example:
    • The 5's complement of 3 is (5 - 3) = 2, i.e. instead of adding 3 earth beads, add one bead from heaven and remove two from earth
    • The 10's complement of 3 is (10 - 3) = 7, i.e. add one bead to the next higher digit and remove beads worth a total of seven from heaven and earth
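
A minimal sketch (in plain Python, not part of the original slides) of this left-to-right, digit-wise addition: each column holds one digit, and whenever a column overflows, the 10's complement is applied by carrying one bead to the column on the left. The helper names are invented for illustration.

    def to_beads(digit):
        """Split a digit 0-9 into (heaven, earth) bead counts (worth 5 and 1)."""
        return digit // 5, digit % 5

    def add_digit(register, pos, value):
        """Add 'value' to the digit at 'pos' (0 = leftmost column); on overflow,
        apply the 10's complement and carry one bead to the column on the left."""
        while value > 0 and pos >= 0:
            total = register[pos] + value
            if total <= 9:
                register[pos] = total
                return
            register[pos] = total - 10   # remove the complement from this column ...
            value, pos = 1, pos - 1      # ... and add one to the next higher digit

    def soroban_add(a_digits, b_digits):
        register = list(a_digits)
        for pos, digit in enumerate(b_digits):   # proceed from left to right
            add_digit(register, pos, digit)
        return register

    # Example from this slide: 1823 + 2333 = 4156
    result = soroban_add([1, 8, 2, 3], [2, 3, 3, 3])
    print(result)                          # [4, 1, 5, 6]
    print([to_beads(d) for d in result])   # (heaven, earth) beads per column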

Napier's Bones, 1617

john_napier
John Napier (1550 - 1617)
  • In 1617, the Scottish mathematician John Napier described a calculating aid that can be used to convert a multiplication (or division) into an addition (or subtraction)
  • For this purpose, the multiplication table is written on rods
  • The rods are placed on a base board so that the number to be multiplied is in the top row
  • The multiplication of the selected number by the factors 2 to 9 can be read in the lines below
  • The values are read from right to left by adding up the numbers within the resulting parallelograms
  • Any carry-over has to be taken into account (see the sketch below)
Image source: John Napier, public domain
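
A minimal sketch (in plain Python, not part of the original slides) of this reading procedure: each rod cell holds the (tens, units) digits of digit × factor, and the diagonals are summed from right to left, taking carry-overs into account.

    def multiply_with_bones(number_digits, factor):
        """Multiply a multi-digit number by a single factor (2-9) the way the
        bones are read: sum the (tens, units) pairs along the diagonals."""
        cells = [divmod(d * factor, 10) for d in number_digits]   # (tens, units) per rod
        tens = [t for t, _ in cells]
        units = [u for _, u in cells]
        # Diagonal sums, least significant first: the units of each rod pair up
        # with the tens of the rod to its right
        diagonals = ([units[-1]]
                     + [tens[i + 1] + units[i] for i in range(len(cells) - 2, -1, -1)]
                     + [tens[0]])
        result, carry = [], 0
        for s in diagonals:                       # resolve the carry-overs
            carry, digit = divmod(s + carry, 10)
            result.append(digit)
        if carry:
            result.append(carry)
        return int("".join(str(d) for d in reversed(result)))

    # Example: 425 * 6; row 6 of the rods 4, 2, 5 reads 2|4, 1|2, 3|0, giving 2550
    print(multiply_with_bones([4, 2, 5], 6))   # 2550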

Napier's Bones

  • This is an interactive simulator of Napier's bones
  • Clicking on the individual rods will change their value

Napier's Bones

Example: napierbones_example

Mechanical Calculating Machines (from 1600)

rechenmaschine_schickard
Original drawing of the "calculating clock"
  • Wilhelm Schickard (1592-1635), a professor in Tübingen, built the first gear-driven calculating machine in 1623
  • On September 20, 1623, he wrote to Johannes Kepler:

    "I recently tried mechanically to do the same thing you did by calculation, and I built a machine consisting of 11 complete and 6 mutilated wheels, which automatically adds, subtracts, multiplies and divides given numbers in an instant.

    You would laugh out loud if you were there and could see how, whenever it goes over a ten or a hundred, the digits on the left automatically increase or decrease."
Image source: Original drawing by Wilhelm Schickard, public domain

"Calculating Clock" by Schickard, 1623

rechenuhr_schickard
"Calculating Clock" by Schickard (Replica from 1957)
Image source: Wilhelm Schickard machine replica, Flickr user: Daniel Sancho, Creative Commons License

"Calculating Clock" by Schickard, 1623

  • The calculating clock by Schickard was capable of automatic addition and subtraction, including automatic transfer of tens (carry)
  • The transfer of tens was achieved by means of a toothed wheel construction
  • Napier's bones (top) were used for multiplication, and the values had to be transferred manually to the mechanical addition mechanism (bottom)

Difference Engine by Charles Babbage, from 1822

difference_engine
Part of the Difference Engine No. 1
  • "What are you dreaming about? - I am thinking that all these tables of logarithms might be calculated by machinery."
  • In 1822, the English mathematics professor Charles Babbage began work on a mechanical calculating machine for calculating polynomials using Newton's difference method
  • His motivation was that the printed tables in use at the time were often inaccurate, because they had been produced by hand in monotonous calculations
  • To avoid copying errors when transferring the results, Babbage even provided for a printer in a second version, "Difference Engine No. 2"
  • Unfortunately, he never completed the machine (despite considerable financial support from the British government until 1842)

Difference Engine by Charles Babbage

difference_engine2
  • For his "Difference Engine No. 2" of 1849, Charles Babbage only produced design drawings
  • It was not until 1991 that a working model (weighing about five tons) was built at the Science Museum in London
Image source: Let the computing begin!, Flickr user: Jitze Couperus, Creative Commons License

Analytical Engine by Charles Babbage

charles_babbage
Charles Babbage
  • In addition to his work on the Difference Engine, Charles Babbage also described a universally applicable mechanical "Analytical Engine" in 1842
  • It already had many of the components of today's general-purpose computers (including separation of memory and arithmetic units, loops, conditional jumps, etc.)
  • He was way ahead of his time
  • However, due to the unforeseeable costs, the British government did not want to finance the construction
Image source: The Late Mr. Babbage, The Illustrated London News, 4 November 1871, public domain

Relay-based Computers

  • By using electromechanical relays, computers can be designed much more easily than with pure mechanics
  • In 1941, the German computer pioneer Konrad Zuse built a fully functional computer out of 2200 relays
working principle of a relay
Figure labels: connections for control voltage, coil, movable armature, contacts (open), contacts (closed)
  • A relay is an electrically controlled on/off switch
  • No current through coil = armature not magnetically attracted = contact is open
  • Current through coil = armature is attracted by magnetic field = contact is closed
  • There are only two states for each switch: on/off (or 1/0)
  • To this day, computers use the binary system
Image source: Schematic representation of a relay, public domain

Logical Operations using Relay Circuits

relay-based-logic
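
A minimal sketch (in plain Python, not part of the original slides) of how relay wiring maps to logical operations: contacts in series act as AND, contacts in parallel as OR, and a normally closed contact as NOT.

    def relay_and(a, b):
        return a and b    # two contacts in series: current flows only if both are closed

    def relay_or(a, b):
        return a or b     # two contacts in parallel: current flows if either is closed

    def relay_not(a):
        return not a      # normally closed contact: current flows while the coil is off

    # From these building blocks, larger circuits can be wired up, e.g. a half adder:
    def half_adder(a, b):
        carry = relay_and(a, b)
        total = relay_and(relay_or(a, b), relay_not(carry))   # exclusive OR
        return total, carry

    for a in (False, True):
        for b in (False, True):
            print(a, b, half_adder(a, b))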

Z3 by Konrad Zuse, 1941

Z1
The Z1 in the parents' living room
Konrad_Zuse_mit_Z3
Zuse with Z3 replica
  • Based on his experiences with the mechanically operating Z1 (1935 to 1938), Konrad Zuse built a test model of a relay-based computer in 1939: the Z2
  • In 1941, he built the Z3, the first relay-based fully functional computer with a clock speed of 5.3 hertz
  • Relay technology had been used in telecommunications for quite some time at this point
  • The Z3 had a memory (1600 relays) and a control and arithmetic unit (600 relays).
  • There were nine commands: input, output, read memory, write memory, multiply, divide, extract root, add, and subtract. The commands could be entered directly via a control panel with a keyboard and numeric display or programmatically using punched tape
  • Demonstration of the Z3 in the "Deutsches Museum"
Image source: Deutsches Museum, Free for publication only with this note

Harvard Mark I, 1944

Harvard Mark I
Image source: Harvard Mark I, public domain

Harvard Mark I, 1944

Harvard Mark I Detail
  • The Harvard Mark I was a relay-based computer designed by Harvard professor Howard H. Aiken
  • Aiken began work on the “Automatic Sequence Controlled Calculator” in 1939, which was completed in 1944 with the support of IBM and was named the "Harvard Mark I"
  • The commands could be programmatically transferred using punched tape
  • The calculator was 15.5 meters long and weighed almost 5 tons
  • IBM had already gained a lot of experience with relay technology and punch cards through its accounting and tabulating machines
  • Aiken designed his computer at the same time as Zuse and was not familiar with Zuse's work due to the lack of exchange during the Second World War
  • In contrast to Zuse, Aiken used decimal arithmetic
Image source: Harvard Mark I Computer by Rocky Acosta, Creative Commons License

Computers based on Electron Tubes

  • Electron tubes (triodes) can also be used as switches. They achieve switching times that are 1000 to 2000 times faster than those of the best relays.
triode
  • In a vacuum, electrons are emitted from the heated cathode and are attracted by the strong positive charge of the anode
  • No voltage at the control grid = current flows
  • Negative voltage at the control grid = no current flows
  • This means that there are two states again (just like with a relay): on/off (or 1/0)

ENIAC, the first entirely electronic computer, 1946

ENIAC
"Electronic Numerical Integrator and Computer" (ENIAC)
Image source: Eniac, U.S. Army Photo, public domain

ENIAC, 1946

Eniac Operators
Eniac Operators
  • ENIAC was developed from 1942 by John W. Mauchly and J. Presper Eckert at the Moore School of Electrical Engineering, University of Pennsylvania, and presented to the public in 1946
  • It took up an entire room, weighed about 30 tonnes and consisted of 17,468 electron tubes, 7,200 diodes, 1,500 relays, 70,000 resistors and 10,000 capacitors
  • Thanks to the electron tubes, the ENIAC was faster (approx. 1000 Hz) than relay-based computers, but it had the following disadvantages:
    • Very high power consumption (174,000 watts)
    • The tubes failed frequently (given an average service life of about two years, on average one tube failed every hour). The modular design made it possible to replace entire modules.
  • Programming was done by rewiring the device, which meant that the ENIAC was not very flexible. Its main task was to calculate ballistic tables for the U.S. Army
Image source: Eniac, U.S. Army Photos, public domain

UNIVAC I, 1951

  • Eckert and Mauchly founded the "Eckert-Mauchly Computer Corporation" to commercialise their invention
  • The successor to the ENIAC was the UNIVAC I (UNIVersal Automatic Computer)
  • The UNIVAC was memory-programmable and could store data on magnetic tape (12,800 characters per second)
  • After the company was taken over by Remington Rand, 46 systems were sold
UNIVAC
UNIVAC I computer in the "Deutsches Museum", Munich
Image source: Univac 1, by Jordi Marsol, Creative Commons License

IAS (von Neumann computer), 1952

von Neumann
John von Neumann
  • At the Institute for Advanced Study (IAS) in Princeton, John von Neumann developed a computer based on electron tubes that, unlike ENIAC, worked in binary code
  • As early as 1944, von Neumann, Eckert, and Mauchly had described a concept for a universal computer (initially only in theory)
Image source: John von Neumann, public domain

Von Neumann Computer Architecture

The von Neumann computer architecture concept states:

  • The structure of the computer is independent of the problem to be solved
  • There are five functional units: control unit, arithmetic/logic unit, memory unit, input and output device
von-neumann_architecture

Von Neumann Computer Architecture

  • Commands and data are binary-coded and stored in a joint memory
  • The memory is divided into equally-sized cells that can be addressed with consecutive numbers
  • Instructions that are stored in succession in the memory are executed in succession
  • Jump commands can change the order of execution
  • The memory (data and instructions) can be modified by the machine
  • The concept is still popular because the programming is quite simple due to the strictly sequential process (nothing happens in parallel).
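
A minimal sketch (in plain Python with an invented toy instruction set, not part of the original slides) of the resulting fetch-decode-execute cycle: instructions and data share one memory, a program counter steps through the instructions sequentially, and only jump commands change the order of execution.

    def run(memory):
        acc, pc = 0, 0                      # accumulator and program counter
        while True:
            op, arg = memory[pc]            # fetch the next instruction
            pc += 1
            if op == "LOAD":                # decode and execute
                acc = memory[arg]
            elif op == "ADD":
                acc += memory[arg]
            elif op == "STORE":
                memory[arg] = acc           # the machine may modify its own memory
            elif op == "JUMP":
                pc = arg                    # jump commands change the execution order
            elif op == "HALT":
                return memory

    # Program and data in the same, consecutively addressed memory:
    # cells 0-3 hold instructions, cells 4-6 hold data (computes 5 + 7)
    memory = {
        0: ("LOAD", 4),
        1: ("ADD", 5),
        2: ("STORE", 6),
        3: ("HALT", None),
        4: 5,
        5: 7,
        6: 0,
    }
    print(run(memory)[6])   # 12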

Transistor-based Computers

  • Electron tubes were unreliable and, from the early 1950s, were gradually replaced by a new electronic component: the transistor
transistor
transistor
  • The field-effect transistor shown here consists of p- and n-doped semiconductor material. There are three metal contacts: source, drain and gate
  • The current between source and drain is controlled by the voltage between gate and source.
  • 0 volts at the gate = the electrons cannot overcome the p-doped region = no current flows
  • 5 volts at the gate = electrons accumulate below the gate (n-channel) = electrons flow through channel = current flows
  • This means that there are two states again: on/off (or 1/0)

TRADIC, 1955

  • TRADIC (TRansistorised Airborne DIgital Computer) was developed by AT&T Bell Labs for the US Air Force and went into operation in 1955
  • Instead of electron tubes, around 700-800 individual transistors were used
  • In addition to a lower probability of failure and a smaller size, the power consumption was also reduced, to approx. 100 watts
  • The computing speed was already around one million logical operations per second (1 MHz)

IBM 1401, 1959

ibm_1401
ibm_card
  • IBM 1401 was a large computer designed for processing mass data (census, accounting, customer data, etc.) in large companies or government institutions
  • A total of more than 10,000 units were built
  • The logic circuits were built from individual PCBs with wired components (transistors, capacitors, and diodes).
  • The basic configuration had a punched card reader (on the left in the figure) and a printer (right). Several magnetic tape units could be connected (transfer speed 41,000 characters per second).
  • The IBM 1401 could be programmed with the high-level programming language FORTRAN, among others
Image source: Basic IBM 1401 system, public domain, IBM Standard Modular System card, by Marcin Wichary, Creative Commons License

Computers with Integrated Circuits (from 1965)

dip16pin
  • Instead of manufacturing individual transistors as discrete components, many transistors are integrated on a single piece of semiconductor material (integrated circuit, IC)
  • In 1958, Jack Kilby of Texas Instruments succeeded in producing the first integrated circuit
  • A few years later, ICs were ready for the market. The first computer systems to use them commercially were the IBM /360 computers.

IBM /360, 1965

ibm_S360
  • Introduction of a family concept: all computers in a family are compatible
  • Idea: All machines have the same machine instruction set. The implementation on different physical hardware is done by microprogramming
  • A microprogram specifies how the individual logic modules are to be controlled when executing a particular machine command (e.g. addition)
  • IBM /360 Model 85 was the first commercial system to use a cache (fast local memory that stores a copy of the main memory data)
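
A minimal sketch (in plain Python with invented micro-operation names, not part of the original slides) of the microprogramming idea described above: each machine command of the shared instruction set is expanded into a sequence of micro-operations that control the logic modules of the particular hardware.

    MICROCODE = {
        "LOAD":  ["read_operand_from_memory", "write_operand_to_accumulator"],
        "ADD":   ["read_operand_from_memory", "alu_add_to_accumulator"],
        "STORE": ["write_accumulator_to_memory"],
    }

    def execute(machine_command):
        """On real hardware each micro-operation would assert control lines;
        here it is only printed."""
        for micro_op in MICROCODE[machine_command]:
            print("  micro-op:", micro_op)

    for command in ("LOAD", "ADD", "STORE"):
        print(command)
        execute(command)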

Intel 4004: the first microprocessor by Intel, 1971

intel4004
dip16pin
  • Intel (Integrated Electronics Corporation) was founded in 1968 by Gordon Moore and Robert Noyce
  • In November 1971, Intel's first microprocessor, the 4004, came onto the market
  • The IC was manufactured with a process size of 10 micrometers. It had 2,250 transistors and initially operated at a clock rate of 108 kHz
  • The 4004 had a 4-bit data bus, i.e. only 4 on/off states (= 4 bits) could be read per bus cycle
  • However, one command was coded as a sequence of 8 ones/zeros, e.g. 01101000. Therefore, the data bus worked twice as fast so that one command could be read per processor cycle (see the sketch below)
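
A minimal sketch (in plain Python, not part of the original slides) of how an 8-bit command is assembled from two consecutive transfers on the 4-bit data bus:

    def fetch_command(read_nibble):
        """read_nibble() delivers 4 bits per bus cycle; two cycles yield one 8-bit command."""
        high = read_nibble()              # first bus cycle: upper 4 bits
        low = read_nibble()               # second bus cycle: lower 4 bits
        return (high << 4) | low

    # Example: the command 01101000 from this slide, delivered as two nibbles
    nibbles = iter([0b0110, 0b1000])
    command = fetch_command(lambda: next(nibbles))
    print(format(command, "08b"))         # 01101000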

Intel 8080, 1974

altair_8800
Altair 8800 with Intel 8080 CPU
  • In April 1974, Intel introduced the 8080, which many consider to be the first truly usable microprocessor
  • The Intel 8080 and its predecessor, the 8008 (1972), were 8-bit processors, i.e. 8 bits (= 1 byte) could be processed within one clock cycle
  • The 8080 had an 8-bit data bus and a 16-bit address bus. This made it possible to address 2^16 = 65,536 bytes of external memory
  • The IC was manufactured with a process size of 6 micrometers. It had 4,500 transistors and operated at a clock speed of 2 MHz
  • From 1975, hobbyists were able to order the low-cost Altair 8800 home-assembly computer kit, based on the 8080
  • This marked the arrival of the computer in the homes of technology enthusiasts:
    The personal computer (PC) was born
Image source: MITS Altair 8800b, public domain

Intel 8748 Microcontroller, 1976

  • In 1976, Intel developed an 8-bit microcontroller, i.e. a complete computer (processor, memory and input/output devices) integrated on a single IC
  • With this, Intel launched the MCS-48 microcontroller family (picture shows Intel 8749)
intel_8749

Apple II, 1977

apple II
  • The Apple II was comparatively low-priced and was the first personal computer to become widespread. The first big commercial success for the founders of Apple: Steve Wozniak and Steve Jobs
  • Interestingly, the Apple II's blueprints were made public, meaning that other manufacturers could expand on it – but also replicate it
  • In addition to text, the Apple II was already capable of displaying color graphics: either 15 colors at low resolution (40 × 48 pixels) or 6 colors at high resolution (280 × 192 pixels)
  • Try it yourself: Apple II Emulator
Image source: Apple II, by Marcin Wichary, Creative Commons License

Intel 8086, 1978

altair_8800
IBM 5150 with Intel 8088 CPU
  • The 8086 is a 16-bit processor from Intel
  • The Intel 8086 was manufactured with a process size of 3 micrometers, had 29,000 transistors and operated at a clock speed of 5 to 10 MHz
  • The x86 microprocessor architecture, named after the 8086, would later become an industry standard, mainly because IBM used a variant of the processor, the Intel 8088, in its PCs from 1981
  • The IBM PC was a huge success. It was also copied many times, with compatible PCs built with the same components
  • Consequently, the x86 microprocessor architecture has seen widespread use
Image source: IBM PC, by Marcin Wichary, Creative Commons License

Intel x86 CPU Family

Name                 Date     Clock speed         Transistor count   Addressable memory   Notes
8086                 6/1978   5-10 MHz            29,000             1 MiB                16-bit CPU
80286                2/1982   6-20 MHz            134,000            16 MiB
80386                10/1985  16-33 MHz           275,000            4 GiB                32-bit CPU
80486                4/1989   25-50 MHz           1.2 M              4 GiB                8 KB cache
Pentium              1993     60-233 MHz          3.1 M              4 GiB                two pipelines
Pentium Pro          3/1995   150-200 MHz         5.5 M              4 GiB                two cache levels
Pentium II           5/1997   233-400 MHz         7.5 M              4 GiB                MMX (SIMD)
Pentium III          2/1999   450-600 MHz         9.5 M              4 GiB                SSE (SIMD)
Pentium 4            2/2000   1.3-2.0 GHz         42 M               4 GiB                three cache levels
Pentium 4 Prescott   2/2004   3.8 GHz             125 M              4 GiB                64-bit CPU
Core 2 Duo           7/2006   2 × (1.8-3.2) GHz   up to 410 M        4 GiB                2-core processor
Core 2 Quad          1/2007   4 × (2.5-3.2) GHz   up to 820 M        64 GiB               4-core processor
Core i7 (1st Gen)    9/2009   6 × (2.8-3.9) GHz   approx. 1 G        16 TiB               6-core processor
Source: Wikipedia

Moore's Law

  • "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year. Certainly over the short term this rate can be expected to continue, if not to increase. Over the longer term, the rate of increase is a bit more uncertain, although there is no reason to believe it will not remain nearly constant for at least ten years" - Gordon E. Moore, 1965
  • In 1975, Moore changed his statement and predicted that the number of transistors on a microchip would double every two years (sometimes the literature also speaks of a doubling every 18 months).
  • The exact time span is not important in this statement. What is important is that the number of transistors is growing exponentially
  • Microchip manufacturers are trying to maintain this exponential growth (a self-fulfilling prophecy), although it has often been predicted that Moore's Law would end due to technological limitations
  • Since 2006, clock rates have no longer increased significantly, but several processing units (cores) have been placed on one chip
Source: Gordon E. Moore, “Cramming More Components onto Integrated Circuits,” Electronics, pp. 114–117, April 19, 1965.
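
A minimal sketch (in plain Python, not part of the original slides) of what "doubling every two years" means numerically; the starting point (the 8086 with 29,000 transistors in 1978) is taken from the table above.

    def transistors(start_count, start_year, year, doubling_period=2):
        """Exponential growth: the count doubles once per doubling period."""
        return start_count * 2 ** ((year - start_year) / doubling_period)

    for year in (1978, 1988, 1998, 2008):
        print(year, round(transistors(29_000, 1978, year)))
    # 1978: 29,000   1988: ~928,000   1998: ~29.7 million   2008: ~950 million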

Number of transistors per microprocessor IC

transistor_count
Source: Transistor count, Wikipedia

Commodore 64, 1982

c64
c64 screen
  • Commodore 64 (C64) was an 8-bit computer from Commodore International that sold approximately 17 million units
  • It was very easy to use and, as a "game computer", was found in many children's rooms in the 1980s
  • In addition to assembly language, the C64 could be programmed in BASIC; the BASIC interpreter was loaded from ROM (read-only memory) at startup
  • Try it yourself:
    C64 Online Emulator
    C64 JavaScript Emulator
Image source: Commodore 64, public domain

Apple Macintosh, 1984

  • In 1984, Apple introduced the Macintosh with a mouse and a graphical user interface.
macintosh
Image source: Macintosh, by Marcin Wichary, Creative Commons License

MIPS R2000, 1986

  • The MIPS R2000 is an example of RISC architecture
  • As the machine instructions of the established architectures became more and more involved and complex, a countermovement arose in the 1980s
  • Computers with the traditional instruction sets were referred to as:
    CISC (Complex Instruction Set Computer)
  • A new computer architecture was proposed:
    RISC (Reduced Instruction Set Computer)
  • The simple RISC instructions are faster to execute and each take roughly the same amount of time
  • The simple instructions use dedicated hardware and replace the microprogramming that is common in CISC processors
  • The RISC processors can therefore be run at a faster clock speed and the pipelining of instruction sequences becomes more efficient
  • PowerPC (Apple) and ARM (smartphones) are further examples of RISC architectures

Pipelining

  • Multiple tasks are processed simultaneously with multiple resources
  • In the MIPS architecture, for example, there is dedicated hardware for: Instruction Fetch (IF), Instruction Decode (ID), Execute (EX), Memory Access (MA), and Write Back (WB)
  • Therefore, pipelining can achieve up to five times the speed of sequential execution
Pipelining
Figure: instructions progressing through the pipeline over successive clock cycles
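
A minimal sketch (in plain Python, not part of the original slides) of the timing argument, ignoring hazards and stalls: with the five stages, a pipeline needs 5 + (n - 1) cycles for n instructions instead of 5 * n, so the speedup approaches 5 for long instruction sequences.

    STAGES = ["IF", "ID", "EX", "MA", "WB"]

    def sequential_cycles(n):
        return n * len(STAGES)           # each instruction runs through all stages alone

    def pipelined_cycles(n):
        return len(STAGES) + (n - 1)     # fill the pipeline once, then finish 1 instruction per cycle

    for n in (1, 5, 100):
        speedup = sequential_cycles(n) / pipelined_cycles(n)
        print(n, round(speedup, 2))      # 1.0, 2.78, 4.81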

Amiga 500, 1987

Image source: Amiga 500, by Dave Jones, Creative Commons License

iMac G3, 1998

iMac
iMacModern
  • Especially since the iMac, Apple has placed a lot of emphasis on design
  • Apple's idea: the iMac integrates the monitor and the computer hardware into a single housing, which was unusual at the time. This prevents cable clutter on the desk but has the disadvantage that components cannot be easily replaced.
  • The concept is still in use today (top: iMac G3 from 1998; bottom: iMac from 2007)
  • The iMac G3 had a 233 MHz PowerPC CPU with 512 KB cache, 4 GB hard disk, 32 MB RAM, 2 MB video RAM and came with Mac OS 8.1
Image source: iMac G3, by Rudolf Schuba, iMac 2007, by Burgermac, Creative Commons License

iPhone, 2007

iPhone
iPhone 4, 2010
  • Apple's iPhone revolutionized the mobile phone market by using a multi-touch screen instead of buttons
  • This allows for a novel and simplified user interface
  • A Samsung 32-bit RISC ARM 1176 processor clocked at 667 MHz served as the main processor
  • The phone had 128 MB DRAM, a 3.5-inch display with a resolution of 320 × 480 pixels, and a 2-megapixel camera
  • This means that the first-generation iPhone was more powerful than the iMac G3 introduced 10 years earlier
  • Since then, smartphones and tablet PCs (such as the iPad, 2010) have become increasingly popular
Image source: iPhone, Flickr User: Yutaka Tsutano; Creative Commons License

AMD Bulldozer (8-core CPU), 2011

Intel's and AMD's current high-end desktop processors

Intel-Core-i7
Intel Core i7 CPU (1st generation)
  • Intel and AMD's current high-end desktop processors have impressive technical data (as of Oct. 2022)
  • For example, the AMD Ryzen 9 7950X has:
    • a process size of 5 nanometers
    • 13.1 billion transistors
    • 16 cores
    • 4.5 GHz to 5.7 GHz clock frequency
    • 80 MB cache (level 2 + 3)
  • The Intel Core i9-13900K has:
    • a process size of 10 nanometers
    • an undisclosed transistor count (roughly estimated at around 25 billion)
    • 24 cores
    • 3.0 GHz to 5.8 GHz clock frequency
    • 68 MB cache (level 2 + 3)

Computing power of CPUs and graphics cards (GPUs)

cpu_vs_gpu
Computing power of CPUs compared to GPUs
Source: Based on the CUDA C Programming Guide version 6.5

The trend is towards portable computers

laptop_vs_desktop
Figure: number of laptop and desktop computers in use [in millions], by year
Source: Worldwide PC market, Computer Industry Almanac Inc., Image source: Laptop, by Aaron Patterson, Creative Commons License

Are there any questions?

questions

Please notify me by e-mail if you have questions, suggestions for improvement, or found typos: Contact

More lecture slides

Slides in German (Folien auf Deutsch)