History of Computer Generations

The document provides an overview of computer architecture, including its definition, history, and evolution through various generations of electronic computers. It discusses key concepts such as instruction set architecture, the distinction between computer organization and architecture, and Flynn's classification of computer architectures. Additionally, it highlights advancements in processor technology, including IBM's brain-like SyNAPSE chip and the shift towards multi-core processors.


Computer Architecture

Introduction and History of Computing


• Processors
• Caches
• SSD and Flash
• Others
Latest in Processors
Brain Chip (IBM)

• The human brain can process information much more quickly and efficiently than a conventional silicon chip, so IBM has created a powerful chip that works like a brain.

• The SyNAPSE chip is the first production-scale neurosynaptic chip, featuring 5.4 billion transistors and 4,096 cores.

• That represents one million programmable neurons, with 256 million programmable synapses, capable of processing 46 billion synaptic operations per second per watt.

• 1M Neurons
• 256M Synapses
• Real Time
• 73mW
So, What is Computer Architecture?
Book

1. Advanced Computer Architecture: Parallelism, Scalability and Programmability by Kai Hwang

2. Computer Organization and Design: The Hardware/Software Interface by Patterson and Hennessy

3. Microprocessor Architecture: From Simple Pipelines to Chip Multiprocessors by Jean-Loup Baer

Syllabus
1. Introduction, History of Computing

2. Fundamentals of Computer Design, performance-related issues: performance parameters, measuring performance, instruction set architecture design, compiler-related issues

3. Instruction pipelining: pipeline hazards, overcoming hazards, instruction set design and pipelining, parallelism concepts, dynamic scheduling, dynamic hardware branch prediction

4. Multi-core, superscalar, VLIW and vector processors: compiler support for ILP, extracting parallelism, speculation, performance

5. Centralized shared memory architectures, distributed shared memory architectures: synchronization, memory organization and cache coherence issues
Introduction

• Computer architecture refers to those attributes of a system visible to a programmer.
• These are the attributes that have a direct impact on the logical execution of a program.
• Computer architecture is the Instruction Set Architecture (ISA).
• The ISA defines instruction formats, instruction opcodes, registers, and instruction and data memory.
• It also defines the effect of executed instructions on the registers and memory, and an algorithm for controlling instruction execution.
• Architectural attributes include:
  • The instruction set
  • The number of bits used to represent various data types (e.g., numbers, characters)
  • I/O mechanisms and techniques
  • Memory addressing
• Whether a computer will have a multiply instruction is an architectural design issue.
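The ISA concepts above (instruction formats, opcodes, registers, and an algorithm controlling instruction execution) can be sketched as a toy interpreter. This is a hypothetical three-field ISA invented purely for illustration, not any real machine's instruction set:

```python
# Hypothetical ISA sketch: each instruction is a (opcode, register, operand)
# triple. The register file and program counter model the machine state that
# the ISA makes visible to the programmer.

def run(program, num_regs=4):
    """Execute a list of (opcode, reg, operand) instructions; return registers."""
    regs = [0] * num_regs
    pc = 0                              # program counter controls execution order
    while pc < len(program):
        op, r, val = program[pc]
        if op == "LOADI":               # load immediate value into register r
            regs[r] = val
        elif op == "ADD":               # regs[r] += regs[val]
            regs[r] += regs[val]
        elif op == "JNZ":               # jump to instruction val if regs[r] != 0
            if regs[r] != 0:
                pc = val
                continue
        pc += 1
    return regs

# Sum 3 + 2 + 1 by looping: r0 accumulates, r1 counts down via r2 = -1.
prog = [
    ("LOADI", 0, 0),    # r0 = 0
    ("LOADI", 1, 3),    # r1 = 3
    ("LOADI", 2, -1),   # r2 = -1 (decrement)
    ("ADD", 0, 1),      # r0 += r1
    ("ADD", 1, 2),      # r1 += r2, i.e. r1 -= 1
    ("JNZ", 1, 3),      # if r1 != 0, jump back to instruction 3
]
```

Whether such a machine offers `ADD` only, or also a multiply opcode, is exactly the kind of architectural decision described above.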
Computer Organization vs. Architecture

• Computer organization refers to the operational units and their interconnections that realize the architectural specifications.
• Organizational attributes include hardware details transparent to the programmer:
  • Control signals
  • Interfaces between the computer and peripherals
  • Memory technology used
• Whether that multiply instruction will be implemented by a special multiply unit or by a mechanism that makes repeated use of the add unit of the system is an organizational issue.
• The organizational decision may be based on:
  • The anticipated frequency of use of the multiply instruction
  • The relative speed of the two approaches
  • The cost and physical size of a special multiply unit
• Many computer manufacturers offer a family of computer models, all with the same architecture but with differences in organization.
  • Architecture: long term (spans many years)
  • Organization: short term (changes with technology)
• Example: IBM System/370 architecture (1970)
  • Customers with modest requirements prefer a cheaper, slower model.
  • Customers with the highest requirements prefer an expensive, faster model.
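The multiply trade-off above can be sketched in code. This is a minimal illustration of the organizational choice, not any manufacturer's actual design: both implementations satisfy the same architectural contract (a multiply instruction), but one reuses the adder while the other stands in for a dedicated multiply unit:

```python
def mul_repeated_add(a, b):
    """Organization 1: realize multiply by repeated use of the add unit.
    Cheap in hardware, slow in time (b additions); assumes b >= 0."""
    result = 0
    for _ in range(b):
        result += a              # reuse the adder b times
    return result

def mul_dedicated(a, b):
    """Organization 2: stand-in for a special multiply unit.
    One operation: faster, but costs extra silicon area."""
    return a * b
```

The programmer sees the same result either way; only speed, cost, and size differ, which is why this is an organizational rather than architectural issue.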
Technology constantly on the move!
• All major manufacturers have announced and/or are shipping multi-core processor chips
• Intel is talking about 80 cores in the not-too-distant future
• 3-dimensional chip technology
  • Sandwiches of silicon
  • "Through-vias" for communication
• The number of transistors per die keeps increasing
  • Intel Core 2: 65nm, 291 million transistors!
  • Intel Pentium D 900: 65nm, 376 million transistors!
Computer Architecture’s Changing Definition

• 1950s to 1960s: Computer Architecture Course: Computer Arithmetic
• 1970s to mid-1980s: Computer Architecture Course: Instruction Set Design, especially ISAs appropriate for compilers
• 1990s: Computer Architecture Course: Design of CPU, memory system, I/O system, multiprocessors, networks
• 2000s: Multi-core design, on-chip networking, parallel programming paradigms, power reduction
• 2010s: Computer Architecture Course: Self-adapting systems? Self-organizing structures? DNA systems/quantum computing?
Functional components of a computer
• Basic functional units of a computer – Five functionally independent
main parts.

History of Processors

Evolution of Intel microprocessor speeds


Generations of Electronic Computers
• The first generation (1945-1954) used vacuum tubes and relay memories interconnected by insulated wires.

• The second generation (1955-1964) was marked by the use of discrete transistors, diodes, and magnetic ferrite
cores, interconnected by printed circuits.

• The third generation (1965-1974) began to use integrated circuits (ICs) for both logic and memory in small-scale
or medium-scale integration (SSI or MSI) and multilayered printed circuits.

• The fourth generation (1974-1991) used large-scale or very-large-scale integration (LSI or VLSI). Semiconductor memory replaced core memory as computers moved from the third to the fourth generation.

• The fifth generation (1991-present) is highlighted by the use of high-density and high-speed processor and memory chips based on even more improved VLSI technology.
• For example, 64-bit 150-MHz microprocessors are now available on a single chip with over one million
transistors. Four-megabit dynamic random-access memory (RAM) and 256K-bit static RAM are now in
widespread use in today's high-performance computers.

Evolution of Computer Architecture
Flynn's Classification
• Flynn's classification is the most popular taxonomy of computer architecture, proposed by Michael J. Flynn in 1966 based on the number of instruction and data streams.
• Instruction stream: defined as the sequence of instructions executed by the processing unit.
• Data stream: defined as the sequence of data, including inputs and partial or temporary results, called for by the instruction stream.
Flynn's Classification

• Michael Flynn (1972) introduced a classification of various computer architectures based on notions of instruction and data streams:

• Single instruction stream, single data stream (SISD)
• Single instruction stream, multiple data stream (SIMD)
• Multiple instruction stream, single data stream (MISD)
• Multiple instruction stream, multiple data stream (MIMD)
SISD

• These systems have one sequential incoming data stream and one single processing unit to execute it. They are conventional sequential uniprocessor systems.

Advantages of SISD
• It requires less power.
• There is no issue of a complex communication protocol between multiple cores.
Disadvantages of SISD
• The speed of the SISD architecture is limited, just like single-core processors.
• It is not suitable for larger applications.

• Examples: single-CPU workstations, minicomputers, and mainframes such as the IBM 701 are SISD computers.
SIMD
• Such systems have multiple incoming data streams and a number of processing units that act on a single instruction at any given time. They are multiprocessor systems with a parallel computing architecture.

Advantages of SIMD
• Throughput of the system can be increased by increasing the number of processor cores.
• The same operation can be performed on multiple elements using only one instruction.
• Processing speed is higher than the SISD architecture.
Disadvantages of SIMD
• There is complex communication among the processor cores.
• The cost is higher than the SISD architecture.

• Examples: array processors and vector pipelines
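The SISD/SIMD contrast can be sketched as follows. This is a conceptual model only: real SIMD hardware adds all lanes in parallel in one instruction, which plain Python can only imitate; the function names are illustrative, not real vector intrinsics:

```python
def scalar_add(xs, ys):
    """SISD style: one add instruction executes per element pair, in sequence."""
    out = []
    for x, y in zip(xs, ys):
        out.append(x + y)        # one instruction issue per data element
    return out

def vadd(vx, vy):
    """SIMD style: one conceptual 'vector add' instruction applied to whole
    vector registers. Hardware would compute every lane simultaneously."""
    if len(vx) != len(vy):
        raise ValueError("vector registers must have the same width")
    return [x + y for x, y in zip(vx, vy)]
```

Both produce the same result; the SIMD version models the single-instruction, multiple-data view, which is why throughput scales with the number of lanes.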


MISD
• Systems with an MISD stream have a number of processing units performing different operations by executing different instructions on the same data set.

Commercial representatives of the MISD architecture do not yet exist.


MIMD
• In a system using the MIMD architecture, each processor in a multiprocessor system can execute different sets of instructions independently, on different sets of data, in parallel. It is the opposite of the SIMD architecture, in which a single operation is executed on multiple data sets.

A normal multiprocessor uses the MIMD architecture. These architectures are used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, communication switches, etc.

Examples: grids, IBM-370, X-MP, etc.
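The MIMD idea above can be sketched with threads: two independent instruction streams (different code) operate on different data sets at the same time. This is a minimal model, the function names are made up for illustration, and on real MIMD machines each stream would run on its own processor:

```python
import threading

def mimd_demo():
    """MIMD sketch: two workers run *different* instructions on *different* data."""
    results = {}

    def summer(data):                 # instruction stream 1: numeric reduction
        results["sum"] = sum(data)

    def joiner(data):                 # instruction stream 2: string processing
        results["joined"] = "-".join(data)

    t1 = threading.Thread(target=summer, args=([1, 2, 3],))
    t2 = threading.Thread(target=joiner, args=(["a", "b", "c"],))
    t1.start(); t2.start()            # both streams execute concurrently
    t1.join(); t2.join()
    return results
```

Contrast with SIMD: here neither the instructions nor the data are shared between the two streams, which is exactly the multiple-instruction, multiple-data case.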
