Computer Architecture II Cte 412 Presentation

The document discusses the Pentium personal computer (PC). It begins by providing background on the Pentium microprocessor developed by Intel in 1993. It then discusses the basic components and types of personal computers, including desktops, laptops, tablets, and smartphones. The rest of the document discusses specifics of the Pentium PC, including its architecture, 64-bit data bus, superscalar design, and use as the processor of choice for most personal computers from the 1990s onwards. It also covers basic PC hardware components like the processor, memory, and registers. Finally, it discusses binary, hexadecimal, and assembly languages as used by the Pentium processor.

Uploaded by

Tom Sam

YABA COLLEGE OF TECHNOLOGY

SCHOOL OF INDUSTRIAL AND MANUFACTURING


ENGINEERING

DEPARTMENT OF COMPUTER ENGINEERING

COURSE: - COMPUTER ARCHITECTURE II

COURSE CODE: - CTE 412

A
PRESENTATION
ON
PENTIUM PC
ASSEMBLY LANGUAGE & DATA TYPES
TYPES OF OPERATION

LECTURER
ENGR. OLOYEDE. A. O.

STUDENT
NAME MATRIC. No:
AREAGO TOMIWA SAMUEL F/HD/19/3410003
ADERONKE ABIMBOLA OSUNLOLA F/HD/19/3410006

1 | COMPUTER ARCHITECTURE II CTE 412


PENTIUM PERSONAL COMPUTER (PC)

Pentium is a brand used for a series of x86 architecture-compatible microprocessors produced
by Intel. Introduced in 1993 as the successor to Intel's 80486 microprocessor, the Pentium
contained two execution pipelines on a single chip and about 3.3 million transistors. Using a
CISC (complex instruction set computer) architecture, its main features were a 32-bit address
bus, a 64-bit data bus, built-in floating-point and memory-management units, and two
eight-kilobyte caches. It was available with processor speeds ranging from 60 to 200
megahertz (MHz).

Fig - Pentium microprocessor, Intel, 1993




PERSONAL COMPUTER (PC)


A personal computer (PC) is a multi-purpose computer whose size, capabilities, and price
make it feasible for individual use. Unlike large, costly minicomputers and mainframes,
personal computers do not require the user to be a computer expert or technician, and they
are not time-shared by many people at the same time. Personal computers can be classified as
stationary or portable. Types of stationary personal computer include: - Desktop Computer,
Workstation, etc. Types of portable computer include: - Laptop, Tablet, Smartphone, etc.



IMAGES OF PERSONAL COMPUTERS (PC)

Fig: -The three personal computers referred to by Byte Magazine as the "1977 Trinity" of
home computing: The Commodore PET, the Apple II, and the TRS-80 Model I.

Fig: - Tablet and Smartphone


Fig: - Laptop



THE PENTIUM PERSONAL COMPUTERS (PC)
The Pentium quickly became the processor of choice for personal computers. It was
superseded by ever faster and more powerful processors: the Pentium Pro (1995), the
Pentium II (1997), the Pentium III (1999), and the Pentium 4 (2000). In 2006 Intel introduced
the Core family of microprocessors, and the Pentium family became a midrange line used for
inexpensive personal computers.
The Pentium processor was the first x86 processor with a superscalar architecture. The
Pentium processor also features a 64-bit external data bus, which doubles the amount of
information it is possible to read or write on each memory access.
The Pentium processor was designed to operate as a microprocessor. A microprocessor is a
general-purpose central processing unit (CPU) manufactured on a single integrated circuit. To
be useful, a microprocessor has to work with other components, such as a memory system for
storing instructions and data, and external circuits to communicate with the peripheral
environment.
A microprocessor is merely the processing core of an application system. Sometimes, a
microprocessor and some commonly used circuits and components (e.g., memory, parallel
I/O, serial I/O, clock circuit, etc.) are integrated together on a single chip, which is typically
called a microcontroller. A microcontroller is truly a computer on a chip. Using a
microcontroller can greatly ease the hardware architecture design, especially when it has all
the necessary peripherals for the system to be developed.

THE OPERATION OF THE PROCESSOR


The main internal hardware of a PC consists of the processor, memory, and registers.
Registers are processor components that hold data and addresses. To execute a program, the
system copies it from the external device into the internal memory, and the processor then
executes the program instructions. The fundamental unit of computer storage is a bit; it can
be ON (1) or OFF (0), and a group of 8 related bits makes a byte on most modern computers. A
parity bit may be added to make the total number of 1 bits in each byte odd. If the parity
comes out even, the system assumes that there has been a parity error (though rare), which
might have been caused by a hardware fault or electrical disturbance. The processor supports
the following data sizes: -
 Word: a 2-byte data item
 Doubleword: a 4-byte (32 bit) data item
 Quadword: an 8-byte (64 bit) data item
 Paragraph: a 16-byte (128 bit) area
 Kilobyte: 1024 bytes
 Megabyte: 1,048,576 bytes
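The odd-parity check described above can be sketched in Python. This is only an illustrative model of the idea; real parity checking is done in hardware.

```python
def add_odd_parity(data_bits):
    """Return the parity bit that makes the total count of 1 bits odd."""
    ones = bin(data_bits).count("1")
    return 0 if ones % 2 == 1 else 1

def parity_ok(data_bits, parity_bit):
    """Odd parity: the data bits plus the parity bit must contain an odd number of 1s."""
    total_ones = bin(data_bits).count("1") + parity_bit
    return total_ones % 2 == 1

byte = 0b10110010            # four 1 bits (an even count), so the parity bit must be 1
p = add_odd_parity(byte)
print(p, parity_ok(byte, p))
```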



Binary Number System
Every number system uses positional notation, i.e., each position in which a digit is written
has a different positional value. Each position is a power of the base, which is 2 for the
binary number system, and these powers begin at 0 and increase by 1.
The value of a binary number is based on the presence of 1 bits and their positional value.
So, the value of an 8-bit binary number with all bits set is: -
1 + 2 + 4 + 8 + 16 + 32 + 64 + 128 = 255
which is the same as 2^8 - 1.
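The positional weighting can be verified with a short Python sketch:

```python
# Value of a binary number = sum of 2**position for each 1 bit,
# with positions counted from the right starting at 0.
bits = "11111111"          # all eight bits set
value = sum(2**i for i, b in enumerate(reversed(bits)) if b == "1")
print(value)               # 255, i.e. 2**8 - 1
```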

Hexadecimal Number System


The hexadecimal number system uses base 16. The digits in this system range from 0 to 15. By
convention, the letters A through F are used to represent the hexadecimal digits
corresponding to decimal values 10 through 15.
Hexadecimal numbers are used in computing for abbreviating lengthy binary representations.
Basically, the hexadecimal number system represents binary data by dividing each byte in half
and expressing the value of each half-byte. The following table provides the decimal, binary,
and hexadecimal equivalents: -

Decimal   Binary   Hexadecimal
0         0000     0
1         0001     1
2         0010     2
3         0011     3
4         0100     4
5         0101     5
6         0110     6
7         0111     7
8         1000     8
9         1001     9
10        1010     A
11        1011     B
12        1100     C
13        1101     D
14        1110     E
15        1111     F

To convert a binary number to its hexadecimal equivalent, break it into groups of 4
consecutive bits each, starting from the right, and write each group as the corresponding
hexadecimal digit.
Example: - Binary number 1000 1100 1101 0001 is equivalent to hexadecimal - 8CD1
To convert a hexadecimal number to binary, just write each hexadecimal digit as its 4-bit
binary equivalent.
Example − Hexadecimal number FAD8 is equivalent to binary - 1111 1010 1101 1000
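Both conversion rules can be sketched as small Python helper functions; the function names here are our own, chosen for illustration:

```python
def bin_to_hex(binary_str):
    """Group bits in fours from the right and map each group to one hex digit."""
    bits = binary_str.replace(" ", "")
    bits = bits.zfill((len(bits) + 3) // 4 * 4)        # pad on the left to a multiple of 4
    groups = [bits[i:i + 4] for i in range(0, len(bits), 4)]
    return "".join(format(int(g, 2), "X") for g in groups)

def hex_to_bin(hex_str):
    """Expand each hex digit into its 4-bit binary equivalent."""
    return " ".join(format(int(d, 16), "04b") for d in hex_str)

print(bin_to_hex("1000 1100 1101 0001"))   # 8CD1
print(hex_to_bin("FAD8"))                  # 1111 1010 1101 1000
```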

Addressing Data in Memory


The process through which the processor controls the execution of instructions is referred to
as the fetch-decode-execute cycle or the execution cycle. It consists of three continuous steps: -
 Fetching the instruction from memory
 Decoding or identifying the instruction
 Executing the instruction
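The three steps can be illustrated with a toy Python simulation of a made-up machine; the instruction set below is invented purely for illustration and does not correspond to any real processor.

```python
# A toy fetch-decode-execute loop for an invented two-register-free machine:
# each memory word holds a (mnemonic, operand) pair.
memory = [("LOAD", 5), ("ADD", 3), ("HALT", 0)]
acc, pc, running = 0, 0, True

while running:
    opcode, operand = memory[pc]      # fetch the instruction from memory
    pc += 1
    if opcode == "LOAD":              # decode and execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "HALT":
        running = False

print(acc)   # 8
```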
The processor may access one or more bytes of memory at a time. Let us consider a
hexadecimal number 0725H. This number will require two bytes of memory. The high-order
byte or most significant byte is 07 and the low-order byte is 25.
The processor stores data in reverse-byte sequence, i.e., the low-order byte is stored at the
lower memory address and the high-order byte at the higher memory address. So, if the processor
brings the value 0725H from register to memory, it will transfer 25 first to the lower memory
address and 07 to the next memory address.
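This reverse-byte (little-endian) storage can be demonstrated with Python's standard struct module:

```python
import struct

# Pack the 16-bit value 0725H the way a little-endian processor stores it.
little = struct.pack("<H", 0x0725)
print(little.hex())        # '2507': byte 25 at the lower address, 07 at the next
big = struct.pack(">H", 0x0725)
print(big.hex())           # '0725': big-endian order, shown for comparison
```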



When the processor gets the numeric data from memory to register, it again reverses the
bytes. There are two kinds of memory addresses −
Absolute address - a direct reference to a specific location.
Segment address - the starting address of a memory segment, used together with an offset value.
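On x86 processors in real mode, a segment address and an offset combine into a physical address by shifting the segment left four bits (multiplying by 16) and adding the offset. A small Python sketch, with example values chosen purely for illustration:

```python
def physical_address(segment, offset):
    """x86 real-mode addressing: physical = (segment << 4) + offset."""
    return (segment << 4) + offset

addr = physical_address(0x1000, 0x0025)
print(hex(addr))   # 0x10025
```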

ASSEMBLY LANGUAGE
What is Assembly Language?
Every personal computer has a microprocessor that manages the computer's arithmetical,
logical, and control activities. Each family of processors has its own set of instructions for
handling various operations such as getting input from keyboard, displaying information on
screen, and performing various other jobs. This set of instructions is called 'machine
language instructions'.
An assembly language is a type of low-level programming language that is intended to
communicate directly with a computer’s hardware. Unlike machine language, which consists
of binary and hexadecimal characters, assembly languages are designed to be readable by
humans. A processor understands only machine language instructions, which are strings of
1's and 0's. However, machine language is too obscure and complex for use in software
development. So, a low-level assembly language is designed for a specific family of
processors, representing its instructions in symbolic code and a more understandable
form.
Low-level programming languages such as assembly language are a necessary bridge
between the underlying hardware of a computer and the higher-level programming languages
such as Python or JavaScript, in which modern software programs are written.
Definition of Assembly Language
An assembly language can be defined as a programming language made up of operation
mnemonics and symbolic data locations. The assembly language programmer makes use of
instruction mnemonics and symbolic names of addresses rather than work with operation
codes and operand addresses.
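The idea of operation mnemonics and symbolic data locations can be illustrated with a toy Python "assembler"; the mnemonics, opcodes, and symbol addresses below are all invented for illustration and do not correspond to any real instruction set.

```python
# A toy one-pass assembler: mnemonics and symbolic names are looked up in
# tables and replaced by numeric opcodes and operand addresses.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03}   # invented encoding
SYMBOLS = {"total": 0x10, "count": 0x11}               # symbolic data locations

def assemble(line):
    mnemonic, operand = line.split()
    return [OPCODES[mnemonic], SYMBOLS[operand]]

program = ["LOAD total", "ADD count", "STORE total"]
machine_code = [byte for line in program for byte in assemble(line)]
print(machine_code)   # [1, 16, 2, 17, 3, 16]
```

The programmer writes the readable lines in `program`; the assembler produces the numeric `machine_code` the processor actually executes.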



Advantages of Assembly Language
1. The task of learning and writing the language is easier than with machine language.
2. Macroinstructions enable one instruction to be translated into several machine language
instructions.
3. Assembly languages can be used for programming closed subroutines.
4. Having a better understanding of assembly language makes one aware of: -
 How programs interface with OS, processor, and BIOS;
 How data is represented in memory and other external devices;
 How the processor accesses and executes instructions;
 How instructions access and process data;
 How a program accesses external devices.
Other advantages of using assembly language are −
a) It requires less memory and execution time;
b) It allows hardware-specific complex jobs to be done in an easier way;
c) It is suitable for time-critical jobs;
d) It is most suitable for writing interrupt service routines and other memory resident
programs.
Disadvantages of Assembly Language
 Assembly languages are more difficult to learn than high-level languages.
 An assembly language program is more difficult to modify than a corresponding high-level
language program.
 An assembly language is machine dependent.

How Assembly Languages Work


The most basic instructions executed by a computer are binary codes, consisting of ones and
zeros. Those codes are directly translated into the "on" and "off" states of the electricity
moving through the computer's physical circuits. In essence, these simple codes form the
basis of "machine language", the most fundamental variety of programming language.
Of course, no human could construct modern software programs by explicitly programming
ones and zeros. Instead, human programmers rely on various layers of abstraction that allow
them to articulate their commands in a format that is more intuitive to humans. Specifically,
modern programmers issue commands in so-called "high-level languages", which use intuitive
syntax such as whole English words and sentences, as well as logical operators such as "And",
"Or", and "Else" that are familiar from everyday usage.



Ultimately, however, these high-level commands need to be translated into machine
language. Rather than doing so manually, programmers rely on assemblers and compilers to
perform the translation between these low-level and high-level languages automatically. The
first assembly languages were developed in the 1940s, and although modern programmers
spend very little time dealing with assembly languages, they nevertheless remain essential to
the overall functioning of a computer.

Real World Example of an Assembly Language


Today, assembly languages remain the subject of study by computer science students, in
order to help them understand how modern software relates to its underlying hardware
platforms. In some cases, programmers must continue to write in assembly languages, such as
when the demands on performance are especially high, or when the hardware in question is
incompatible with any current high-level languages.
One such example that is relevant to finance is the high-frequency trading (HFT) platform
used by some financial firms. In this marketplace, the speed and accuracy of transactions are of
paramount importance in order for the HFT trading strategies to prove profitable. Therefore,
in order to gain an edge against their competitors, some HFT firms have written their trading
software directly in assembly languages, thereby making it unnecessary to wait for the
commands from a higher-level language to be translated into machine language.

DATA TYPES
Unlike humans, a computer does not know the difference between "1234" and "abcd." A data
type is a classification that dictates what a variable or object can hold in computer
programming. Data types are an important factor in virtually all computer programming
languages, including C#, C++, JavaScript, and Visual Basic. When programmers create
computer applications, both desktop and web-based, data types must be referenced and used
correctly to ensure the proper result and an error-free program.
Common examples of data types include: -
 Boolean (e.g., True or False)
 Character (e.g., a)
 Date (e.g., 03/01/2016)
 Double (e.g., 1.79769313486232E308)
 Floating-point number (e.g., 1.234)
 Integer (e.g., 1234)
 Long (e.g., 123456789)
 Short (e.g., 0)
 String (e.g., abcd)
 Void (e.g., no data)
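Several of the types listed above have rough Python counterparts (Python does not distinguish short, long, and double the way C-family languages do), as this illustrative sketch shows:

```python
# Rough Python counterparts of some of the data types listed above.
examples = {
    "Boolean": True,          # True or False
    "Character": "a",         # Python uses a one-character string
    "Floating-point": 1.234,  # Python floats are double precision
    "Integer": 1234,
    "String": "abcd",
    "Void": None,             # no data
}
for name, value in examples.items():
    print(name, type(value).__name__)
```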



Depending on the programming language, there may also be many more data types that serve
a specific function and store data in a particular way. Understanding the different data types
allows programmers to design computer applications more efficiently and accurately.

TYPES OF OPERATION
The number of different opcodes varies widely from machine to machine. However, the same
general types of operations are found on all machines. A useful and typical categorization is
the following:
1. Data transfer
2. Arithmetic
3. Logical
4. Conversion
5. I/O
6. System control
7. Transfer of control

1) Data transfer:
The most fundamental type of machine instruction is the data transfer instruction. The data
transfer instruction must specify several things. The location of the source and destination
operands must be specified. Each location could be memory, a register, or the top of the
stack. The length of data to be transferred must be indicated. As with all instructions with
operands, the mode of addressing for each operand must be specified.
In terms of CPU action, data transfer operations are perhaps the simplest type. If both source
and destination are registers, then the CPU simply causes data to be transferred from one
register to another; this is an operation internal to the CPU. If one or both operands are in
memory, then the CPU must perform some or all of the following actions:
 Calculate the memory address, based on the address mode.
 If the address refers to virtual memory, translate from virtual to actual memory
address.
 Determine whether the addressed item is in cache.
 If not, issue a command to the memory module.
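The memory-access decisions listed above can be sketched as a toy Python model with a made-up cache and memory; this illustrates the control flow only, not a real CPU.

```python
# Illustrative model: reads check the cache first and go to memory on a miss.
memory = {0x100: 42}   # invented address and value
cache = {}

def read(address):
    if address in cache:            # determine whether the addressed item is in cache
        return cache[address]
    value = memory[address]         # if not, issue a command to the memory module
    cache[address] = value          # fill the cache for subsequent accesses
    return value

print(read(0x100), read(0x100))    # the second read is a cache hit
```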




2) Arithmetic:
Most machines provide the basic arithmetic operations of add, subtract, multiply, and divide.
These are invariably provided for signed integer (fixed-point) numbers. Often they are also
provided for floating-point and packed decimal numbers.
Other possible operations include a variety of single-operand instructions, for example:
 Absolute: Take the absolute value of the operand.
 Negate: Negate the operand.
 Increment: Add 1 to the operand.
 Decrement: Subtract 1 from the operand.
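The four single-operand operations can be illustrated in Python:

```python
x = -7
absolute = abs(x)     # Absolute: take the absolute value of the operand
negated = -x          # Negate: reverse the sign of the operand
incremented = x + 1   # Increment: add 1 to the operand
decremented = x - 1   # Decrement: subtract 1 from the operand
print(absolute, negated, incremented, decremented)   # 7 7 -6 -8
```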

3) Logical:
Most machines also provide a variety of operations for manipulating individual bits of a word
or other addressable units, often referred to as "bit twiddling." They are based upon Boolean
operations. Some of the basic logical operations that can be performed on Boolean or binary
data are AND, OR, NOT, XOR, etc. These logical operations can be applied bitwise to n-bit
logical data units. Thus, if two registers contain the data
(R1) = 10100101 (R2) = 00001111
then
(R1) AND (R2) = 00000101
In addition to bitwise logical operations, most machines provide a variety of shifting and
rotating functions, such as shift left, shift right, rotate right, and rotate left.
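These bitwise operations, including the AND example above, can be verified in Python:

```python
r1, r2 = 0b10100101, 0b00001111

and_result = format(r1 & r2, "08b")          # bitwise AND
or_result = format(r1 | r2, "08b")           # bitwise OR
xor_result = format(r1 ^ r2, "08b")          # bitwise XOR
shifted = format((r1 << 1) & 0xFF, "08b")    # shift left, kept within 8 bits

print(and_result)   # 00000101, matching the example above
print(or_result)    # 10101111
print(xor_result)   # 10101010
print(shifted)      # 01001010
```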



4) Conversion:
Conversion instructions are those that change the format or operate on the format of data. An
example is converting from decimal to binary.
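A decimal-to-binary conversion of this kind can be sketched in Python:

```python
def to_binary(n):
    """Convert a non-negative decimal integer to its binary string representation."""
    return format(n, "b")

print(to_binary(13))      # '1101'
print(int("1101", 2))     # 13, converting back from binary to decimal
```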

5) Input/Output:
There are a variety of approaches taken, including isolated programmed I/O, memory-mapped
programmed I/O, DMA, and the use of an I/O processor. Many implementations provide only
a few I/O instructions, with the specific actions specified by parameters, codes, or command
words.

6) System Control:
System control instructions are those that can be executed only while the processor is in a
certain privileged state or is executing a program in a special privileged area of memory.
Typically, these instructions are reserved for the use of the operating system. Some examples
of system control operations are as follows. A system control instruction may read or alter a
control register. Another example is an instruction to read or modify a storage protection key,
such as is used in the S/390 memory system. Another example is access to process control
blocks in a multiprogramming system.

7) Transfer of control:
For all of the operation types discussed so far, the next instruction to be performed is the one
that immediately follows, in memory, the current instruction. However, a significant fraction
of the instructions in any program have as their function changing the sequence of instruction
execution. For these instructions, the operation performed by the CPU is to update the
program counter to contain the address of some instruction in memory.

There are a number of reasons why transfer-of-control operations are required. Among the
most important are the following:
1. In the practical use of computers, it is essential to be able to execute each instruction
more than once and perhaps many thousands of times. It may require thousands or
perhaps millions of instructions to implement an application. This would be
unthinkable if each instruction had to be written out separately. If a table or a list of
items is to be processed, a program loop is needed. One sequence of instructions is
executed repeatedly to process all the data.



2. Virtually all programs involve some decision making. We would like the computer to
do one thing if one condition holds, and another thing if another condition holds. For
example, a sequence of instructions computes the square root of a number. At the start
of the sequence, the sign of the number is tested. If the number is negative, the
computation is not performed, but an error condition is reported.
3. To compose correctly a large or even medium-size computer program is an
exceedingly difficult task. It helps if there are mechanisms for breaking the task up
into smaller pieces that can be worked on one at a time.
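The loop and decision-making motivations above can be illustrated together in a short Python sketch; the square-root sign test follows the example in point 2.

```python
import math

def safe_sqrt(x):
    """Test the sign first; skip the computation and report an error for negatives."""
    if x < 0:          # a conditional transfer of control: one path or the other
        return None    # error condition instead of a result
    return math.sqrt(x)

# A program loop: one sequence of instructions processes every item in the list.
results = [safe_sqrt(v) for v in [4, -1, 9]]
print(results)   # [2.0, None, 3.0]
```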

