History of Computer Generations
• 1M Neurons
• 256M Synapses
• Real Time
• 73mW
So, What is Computer Architecture?
Syllabus
1. Introduction, History of Computing
3. Instruction Pipelining – Pipeline hazards – Overcoming hazards – Instruction set design and pipelining – Parallelism concepts – Dynamic scheduling – Dynamic hardware branch prediction
4. Multi-core, superscalar, VLIW and vector processors – compiler support for ILP – extracting parallelism – speculation – performance
5. Centralized shared-memory architectures, distributed shared-memory architectures – synchronization – memory organisation and cache-coherence issues
Introduction
Functional components of a computer
• Basic functional units of a computer – five functionally independent main parts: input, memory, arithmetic and logic, output, and control units.
History of Processors
Generations of Electronic Computers
• The first generation (1945-1954) used vacuum tubes and relay memories interconnected by insulated wires.
• The second generation (1955-1964) was marked by the use of discrete transistors, diodes, and magnetic ferrite
cores, interconnected by printed circuits.
• The third generation (1965-1974) began to use integrated circuits (ICs) for both logic and memory in small-scale
or medium-scale integration (SSI or MSI) and multilayered printed circuits.
• The fourth generation (1974-1991) used large-scale or very-large-scale integration (LSI or VLSI). Semiconductor memory replaced core memory as computers moved from the third to the fourth generation.
• The fifth generation (1991-present) is highlighted by the use of high-density and high-speed processor and memory chips based on even more improved VLSI technology.
• For example, 64-bit 150-MHz microprocessors are available on a single chip with over one million transistors. Four-megabit dynamic RAM (DRAM) and 256-Kbit static RAM (SRAM) are in widespread use in today's high-performance computers.
Evolution of Computer Architecture
Flynn's Classification
• Flynn's Classification is the most popular taxonomy of computer architecture, proposed by Michael J. Flynn in 1966 and based on the number of instruction and data streams.
• Instruction stream: defined as the sequence of instructions executed by the processing unit.
• Data stream: defined as the sequence of data, including inputs and partial or temporary results, called for by the instruction stream.
SISD
• SISD systems have one sequential incoming data stream and a single processing unit to execute that data stream. They are conventional uniprocessor systems.
Advantages of SISD
•It requires less power.
•There is no issue of complex communication protocol between multiple cores.
Disadvantages of SISD
•The speed of SISD architecture is limited just like single-core processors.
•It is not suitable for larger applications.
•Example: Single-CPU workstations, minicomputers, and mainframes such as the IBM 7001 are SISD computers.
SIMD
• Such systems have multiple incoming data streams and a number of processing units that can act on a single instruction at any given time. They are just like multiprocessor systems with a parallel computing architecture.
Advantages of SIMD
•Throughput of the system can be increased by increasing the number of cores of the processor.
•Same operation on multiple elements can be performed using one instruction only.
•Processing speed is higher than SISD architecture.
Disadvantages of SIMD
•There is complex communication between the cores of the processor.
•The cost is higher than SISD architecture.
MIMD
A normal multiprocessor uses the MIMD architecture, in which each processor executes its own instruction stream on its own data. These architectures are used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, communication switches, etc.