CompTIA A+ 220-801 Manual
220-801
In order to receive the CompTIA A+ certification, the student must pass two exams.
CompTIA A+ 220-801 and CompTIA A+ 220-802. CompTIA A+ 220-801 covers the
fundamentals of computer technology, installation and configuration of PCs, laptops
and related hardware, and basic networking.
This course has been developed and enhanced to equip you with practical skills to
support computer users in small, medium and large organizations as well as to
match the needs of the CompTIA A+ Certification curriculum.
This course contains accurate information at the time of print. We are committed to
enhancing the course from time to time to keep up with the changes in the dynamic
IT (Information Technology) industry. The course includes both theoretical chapters
and practical exercises for effective learning.
We hope you will find the material useful both for studying and for future reference.
We welcome your feedback on any issue relating to this courseware and wish you
the best in your exam!
Happy Learning!
INTRODUCTION TO COMPUTERS
Objectives
Introduction
Computer classification
Functions of a PC
PC Components
The System Unit
Multimedia devices
Labs
Introduction
Computers are machines that perform tasks or calculations according to a set of instructions,
or programs. The first fully electronic computers, introduced in the 1940s, were huge machines
(e.g. the ENIAC – Electronic Numerical Integrator and Computer) that required teams of
people to operate. Compared to those early machines, today's computers are amazing. Not
only are they thousands of times faster, they can fit on your desk, on your lap, or even in
your pocket.
Computers work through an interaction of hardware and software. Hardware refers to the
parts of a computer that you can see and touch: the mouse, keyboard, and monitor, including
the case and everything inside it. Hardware items such as your monitor, keyboard, mouse,
printer, and other components are often called hardware devices, or devices.
Software refers to the instructions, or programs, that tell the hardware what to do. A word-
processing program that you can use to write letters on your computer is a type of software.
The operating system (OS) is software that manages your computer and the devices
connected to it. Windows is a well-known operating system.
Classification of computers
Computers range in size and capability. At one end of the scale are supercomputers, very
large computers with thousands of linked microprocessors that perform extremely complex
calculations. At the other end are tiny computers embedded in cars, TVs, stereo systems,
calculators, and appliances. These computers are built to perform a limited number of tasks.
Classification by size:
1. Supercomputers:
Supercomputers are the most powerful computers. They are used for problems requiring
complex calculations. Because of their size and expense, supercomputers are relatively
rare.
Supercomputers are used by universities, government agencies, and large businesses.
They are widely used in scientific applications such as aerodynamic design simulation,
processing of geological data.
Note:
Supercomputer = "super-fast processing speed".
Discover supercomputer, "the heart of the NASA Center for Climate Simulation
(NCCS)"
2. Mainframe Computers:
Mainframes are usually slower, less powerful, and less expensive than supercomputers. A
technique that allows many people at terminals to access the same computer at one time is
called time sharing. Mainframes are used by banks and many businesses to update
inventory and handle other large-scale transaction processing.
Note:
Mainframe Computer = "multiple program and user support".
3. Minicomputers:
Minicomputers are a smaller version of mainframe computers. They are general-purpose
computers that provide computing power without the prohibitive expense associated
with larger systems. They are also generally easier to use.
4. Workstations:
Workstations are powerful single-user computers. Workstations are used for tasks that
require a great deal of number-crunching power, such as product design and computer
animation. Workstations are often used as network and Internet servers.
5. Microcomputers:
Microcomputers are more commonly known as personal computers. The term "PC" is
applied to IBM PCs or compatible computers. Desktop computers are the most common
type of PC. Notebook (laptop) computers are used by people who need the power of a
desktop system, but also portability. Handheld PCs (such as PDAs) lack the power of
a desktop or notebook PC, but offer features for users who need limited functions and small
size.
The personal computer, or PC, is designed to be used by one person at a time. The
various kinds of personal computers include desktops, laptops, handheld computers, and
Tablet PCs.
a) Desktop computers
Desktop computers are designed for use at a desk or table. They are typically larger and
more powerful than other types of personal computers. Desktop computers are made up of
separate components. The main component, called the system unit, is usually a rectangular
case that sits on or underneath a desk. Other components, such as the monitor, mouse, and
keyboard, connect to the system unit.
Desktop computer
b) Laptop computers
Laptop computers are mobile PCs that combine the power of a desktop system with
portability. They integrate the screen, keyboard, and pointing device into a single, battery-
powered unit that is light enough to carry, making them suitable for people who need
desktop power on the move.
Laptop computer
c) Smartphones
Smartphones are mobile phones that have some of the same capabilities as a computer.
You can use a smartphone to make telephone calls, access the Internet, organize contact
information, send e-mail and text messages, play games, and take pictures. Smartphones
usually have a keyboard and a large screen.
Smartphone
d) Handheld computers
Handheld computers, also called personal digital assistants (PDAs), are battery-
powered computers small enough to carry almost anywhere. Although not as powerful as
desktops or laptops, handheld computers are useful for scheduling appointments, storing
addresses and phone numbers, and playing games. Some have more advanced capabilities,
such as making telephone calls or accessing the Internet. Instead of keyboards, handheld
computers have touch screens that you use with your finger or a stylus (a pen-shaped
pointing tool).
Handheld computer
e) Tablet PCs
Tablet PCs are mobile PCs that combine features of laptops and handheld computers. Like
laptops, they're powerful and have a built-in screen. Like handheld computers, they allow
you to write notes or draw pictures on the screen, usually with a tablet pen instead of a
stylus. They can also convert your handwriting into typed text. Some Tablet PCs are
“convertibles” with a screen that swivels and unfolds to reveal a keyboard underneath.
Tablet PC
Functions of a PC
The four main and basic functions of a computer system are Input, Processing, Storage and
Output.
a) Input: This is the transferring of data into a computer system. This is accomplished
by keying in data via a keyboard or issuing instructions using a mouse. Devices
used at this stage are called input devices.
b) Processing: This refers to the manipulation and control of information within the
computer system. Such manipulations are handled by the Control Unit, the
Arithmetic Logic Unit and Temporary Storage.
c) Storage: This is the means by which information can be "permanently" saved (until
such time as you wish to delete it). This usually occurs on a hard drive, a flash disk,
a CD or any other storage device.
d) Output: The results of the processing are made available for use by any user or
other devices. The most common ways of producing such outputs are through
computer monitors, speakers, and printers. When a computer is connected to other
devices, including over the Internet, the output takes the form of electrical signals. The
output data can also be recorded on to an external recording medium such as a DVD
disk.
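The input-processing-storage-output cycle above can be sketched as a toy program. This is purely illustrative; the function and variable names are invented for the example and do not correspond to any real system.

```python
# A toy sketch of the four basic PC functions: input, processing,
# storage, and output. All names here are illustrative only.

storage = {}  # stands in for a hard drive, flash disk, or CD

def input_data():
    # Input: data enters the system (hard-coded here instead of a keyboard)
    return [3, 1, 2]

def process(data):
    # Processing: the CPU manipulates the data (here, a simple sort)
    return sorted(data)

def store(key, data):
    # Storage: results are kept "permanently" for later use
    storage[key] = data

def output(data):
    # Output: results are made available to the user (monitor, printer, ...)
    print(data)

raw = input_data()
result = process(raw)
store("result", result)
output(result)  # prints [1, 2, 3]
```

The point of the sketch is simply that each stage hands its result to the next, just as the keyboard, CPU, drives, and monitor do in a real PC.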
PC Components
A basic PC will have the following parts, as illustrated in the diagram below.
The system unit (system case or chassis) is the core of a computer system. It houses the
power supply, motherboard, processor, memory modules, expansion cards, hard drive,
optical drive, and other drives. A computer case can be a tower case, a desktop case that
lies flat on a desk, an all-in-one case used with an all-in-one computer, or a mobile case
used with laptops and tablet PCs. A tower case sits upright and can be as high as two feet
and has room for several drives. Often used for servers, this type of case is also good for PC
users who anticipate upgrading because tower cases provide maximum space for working
inside a computer and moving components around. A desktop case lies flat and sometimes
serves double-duty as a monitor stand.
Tower chassis
Desktop chassis
RAM modules
Ports
A port is a connector used to attach peripherals to the computer. Ports are usually
located at the back of the system unit. Peripherals are devices that are attached to a
computer system to enhance its capabilities. Peripherals include input devices, output
devices, storage devices, and communications devices.
All peripherals must have some way to access the data bus of the computer (the
communications channel on the motherboard that connects the processor, RAM, and other
components). To do this, peripherals are connected via some kind of port on the computer
(and a cable with the proper connectors is needed).
Multimedia devices
A multimedia device is a device that uses different forms of media, such as text, audio, or
video within a single device. These devices make it easier for people to use and complete
specific tasks. A cell phone is a multimedia device because it allows the user to make phone
calls, send text messages, download from the Internet, and take pictures. Cameras also
allow users to take pictures and download video and audio. Other multimedia devices
include tablet computers, memory cards, microphones, digital cameras, biometric devices
and MP3 players.
Tools Required
Various types and sizes of screwdrivers, a parts grabber, a flashlight, a small container for
holding extra screws and jumpers, a multimeter, and antistatic equipment, such as a wrist
strap and antistatic mat.
Safety First
Safe equipment handling begins with using the appropriate tools, taking care when moving
equipment, protecting yourself and equipment from electrostatic discharge, avoiding damage
to transmissions and data from electromagnetic interference, and taking appropriate
precautions when handling computers.
Before you open a computer case, be sure to turn the power supply off and unplug any power
cords. Then, to prevent damage to the system, equalize the electrical charge between your body
and the components inside your computer. If nothing else, touch an unpainted metal portion of your
computer’s chassis. A better option is to place the computer on a grounded mat, which you touch
before working on the PC. You could also wear an antistatic wrist strap.
MOTHERBOARD
Objectives
Introduction
Motherboard Components
Firmware
BIOS
Motherboard Form Factors
Labs
Introduction
A computer motherboard is the printed circuit board that connects together all the
internal components of a computer. The CPU, hard drives, CD-ROM drive, video card, and sound
card are all connected to each other through the motherboard. It also has ports through which
many external devices are connected to the computer. It is called the motherboard because it
connects and holds together all the components of a computer. Motherboards come in
different sizes, shapes, and models. The height and width of the motherboard are known as
the motherboard form factor.
The type of motherboard installed in a PC has a great effect on system speed and
expansion capabilities.
PC Motherboards Page 15
Chapter 2
Motherboard Components
The motherboard holds all the major logic components of the computer. Here we are going
to see with no particular order, the motherboard components and their function in a
computer.
A common CPU socket is the ZIF (Zero Insertion Force) socket, which is square, has a
retention lever, and accommodates a certain number of pins to match CPUs that
have a pin grid array (PGA), meaning rows and columns of pins.
Socket: Processors
LGA 775 (Socket T): Intel only. Pentium 4, Pentium 4 Extreme Edition (single core),
Pentium D, Celeron D, Pentium Extreme Edition (dual core), Core 2 Duo, Core 2
Extreme, Core 2 Quad, Xeon, Celeron (4xx, Exxxx series)
LGA 1155 (Socket H2): Intel only. Replacement for LGA 1156 to support CPUs based on
the Sandy Bridge (such as Celeron G4xx and G5xx) and eventual Ivy Bridge
architectures
LGA 1156 (Socket H): Intel only. Celeron (G1xxx series), Core i3, Core i5, Core i7 (8xx
series), Pentium (G6xxx series), Xeon (34xx series)
LGA 1366 (Socket B): Intel only. Core i7 (9xx series), Xeon (35xx, 36xx, 55xx, 56xx
series), Intel Celeron P1053
Socket AM2: AMD only. Athlon 64, Athlon 64 X2, Athlon 64 FX, Opteron, Sempron,
Phenom
Socket AM2+: AMD only. Often backward compatible with AM2 CPUs as well as Athlon II
and Phenom II, and forward compatible with AM3 CPUs
Socket AM3: AMD only. DDR3-capable CPUs only (thus not compatible with AM2+
CPUs), such as Phenom II, Athlon II, Sempron, Opteron 138x; has the potential to
accept AM3+ CPUs
Socket AM3+: AMD only. Specified for CPUs based on the Bulldozer microarchitecture
and designed to accept AM3 CPUs
Socket FM1: AMD only. Designed to accept AMD Fusion APUs that incorporate CPUs
and GPUs, such as the E2-3200 and the A Series
Socket F (LGA): AMD only. Opteron (2xxx, 8xxx series), Athlon 64 FX (FX-7x series);
replaced by Sockets C32 and G34
Memory Slots
The motherboard will have from one to four slots or sockets for system memory. Depending
on the vintage and the manufacturer of the motherboard, the memory sockets for DRAM are
SIMM (Single Inline Memory Module), DIMM (Dual Inline Memory Module), and RIMM
(Rambus Inline Memory Module). SIMM is the oldest technology, and you will not see these
sockets in new PCs. The current standards are DIMM and RIMM. Both of these physical
memory slot types move data 64 bits at a time. DIMM sockets for desktop or tower PCs may
have 168 pins, 184 pins, or 240 pins. RIMM sockets for non-portable PCs have 184 pins.
SIMM slots
RIMM slots
External Cache Memory
Cache memory provides very fast memory that compensates for speed differences
between components. The cache on the motherboard is located between the CPU and the
main memory. A CPU can move data to and from memory faster than the system RAM can
respond, and the cache bridges this gap.
Cache memory runs faster than typical RAM and is able to "guess" the instructions that the
processor is likely to need, retrieving those instructions from RAM or the hard drive in
advance.
Bus Architecture
“Bus” refers to the pathways that power, data, and control signals use to travel from one
component to another in the computer. There are different types of buses, including the
processor bus, used by the data travelling into and out of the processor. The address and
the data busses are both part of the processor bus. Another type of bus is the memory bus,
which is located on the motherboard and used by the processor to access memory.
In addition, each PC has expansion buses of one or more types. An expansion bus
attaches through a controller to the system bus. Examples of expansion buses are CNR
(Communication and Networking Riser), AGP (Accelerated Graphics Port), and PCI
(Peripheral Component Interconnect).
Expansion Slots
Chipsets
A critical component of the motherboard is the chipset. When technicians talk about the
chipset, they are usually referring to one or more chips designed to work hand in glove with
the CPU. One part of this chipset, referred to as the Northbridge, controls communications
between the CPU and other critical motherboard components. These are the system RAM
and the PCI, AGP, and PCIe buses. Another portion, the Southbridge, manages
communications between the CPU and I/O buses such as USB, IDE, PS/2, SATA, and
others.
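The classic Northbridge/Southbridge split described above can be captured as a simple lookup table. This is a sketch of the traditional arrangement only; modern chipsets have merged these roles, so treat the mapping as a study aid rather than a description of current hardware.

```python
# Which chipset bridge traditionally handles which bus, per the classic
# Northbridge/Southbridge split. Illustrative only: modern platforms
# integrate the Northbridge functions into the CPU itself.

bridge_for_bus = {
    "RAM": "Northbridge",   # system memory
    "AGP": "Northbridge",   # legacy graphics bus
    "PCI": "Northbridge",
    "PCIe": "Northbridge",
    "USB": "Southbridge",   # slower I/O buses
    "IDE": "Southbridge",
    "PS/2": "Southbridge",
    "SATA": "Southbridge",
}

print(bridge_for_bus["USB"])  # prints Southbridge
print(bridge_for_bus["RAM"])  # prints Northbridge
```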
Southbridge
Northbridge
CPU Socket
DIP switches
DIP (Dual In-line Package) switches are small electronic switches found on the circuit board
that can be turned on or off just like a normal switch. They are very small and so are usually
flipped with a pointed object, such as the tip of a screwdriver, a bent paper clip, or a pen top.
Care should be taken when cleaning near DIP switches, as some solvents may destroy them.
DIP switches are obsolete; you will not find them in modern systems.
Jumper pins
Jumpers are small protruding pins on the motherboard. A jumper cap or bridge is used to
interconnect, or short, a pair of jumper pins. When the bridge is connected across two pins
via a shorting link, it completes the circuit and a particular configuration is achieved.
Jumper cap
This is a metal bridge that closes an electrical circuit. Typically, a jumper consists of a plastic
plug that fits over a pair of protruding pins. Jumpers are sometimes used to configure
expansion boards. By placing a jumper plug over a different set of pins, you can change a
board's parameters.
Jumper pins
Riser cards
The riser card is installed perpendicular to the case and may include several expansion
slots. An expansion card inserted into a riser card is on the same plane as the motherboard.
Riser cards are available for the standard bus architectures, such as AGP, PCI, and PCIe.
Ironically, you will find riser cards both in the largest network servers and in the smallest of
low-profile desktop computer cases. In the case of network servers, the use of a riser card
allows the addition of more cards than the standard motherboard allows. Otherwise, the
additional expansion boards would increase the size of the motherboards beyond the size of
even the large cases used for servers.
At the other extreme are the scaled-down low-profile computers, which cannot
accommodate most expansion cards because the case height is so low. The riser card
allows one or more expansion cards to be installed and does not require a full height case.
The second type of riser card is a small expansion card containing multiple functions. The
two standards for this type of riser card are AMR and CNR. Both of these standards add
multiple functions at low cost.
AMR
AMR (Audio Modem Riser), introduced in the late 1990s, allows for the creation of lower-cost
sound and modem solutions. The AMR card plugs directly into a special slot on the
motherboard and utilizes the CPU to perform modem functions, using up to 20 percent of the
available processor power for this purpose. The advantage of this is the elimination of
separate modem and sound cards without tying up a PCI slot in newer computers. The AMR
card connects directly to a telephone line and audio output devices. One of the shortcomings
of AMR was that it was not Plug and Play.
ACR
ACR (Advanced Communications Riser) is a standard introduced in 2000 by AMD, 3Com,
and others to supersede AMR. It uses one PCI slot and provides accelerated audio and
modem functions as well as networking, and it supports multiple Ethernet NICs. With ACR,
one telephone jack could be used for both the modem and the telephone. The ACR PCI slot is blue,
and the pin orientation is the reverse of the standard PCI slot.
CNR
The first CNR (Communication Network Riser) card was introduced in 2000. Similar to AMR
except that it does support Plug and Play, CNR also supports LAN in addition to audio,
modem, and multimedia systems. The CNR card plugs directly into the motherboard, thus
eliminating the need for separate cards for each capability and reducing the cost of
expansion cards.
Power Supply
A power supply unit (PSU) is the component that provides power for all components on the
motherboard and internal to the PC case. The PSU is easily identifiable within the PC case;
it is located at the back of the computer’s interior, and it is visible from the outside when you
look at the back of the PC.
The PSU
A+ Exam Tip: The A+ 220-801 exam expects you to recognize and know the more
important features of the ATX and micro-ATX form factors used by power supplies.
Connector descriptions:
The 20-pin P1 connector is the main motherboard power connector used
in early ATX systems.
The 4-pin Berg connector is used by a floppy disk drive (FDD).
IDE Connector
The IDE connector accepts an IDE cable (supplied with the motherboard) and connects to
older hard disk drives and optical drives for data transfer. IDE cables connect devices such
as hard disks, CD drives, and DVD drives.
The four current standards for IDE devices are ATA-33, ATA-66, ATA-100, and ATA-133.
The numbers specify the amount of data in MB/s in a maximum burst situation. In reality
there is little chance of getting a sustained data rate of this magnitude. The connectors and
devices are backward compatible with each other; however, they will only run at the slowest
rated speed between them.
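A quick way to internalize these ratings is to do the arithmetic. The sketch below uses the burst figures from the standards above; the function names are invented for the example, and remember that real sustained throughput is well below these best-case numbers.

```python
# Rough arithmetic on IDE/ATA burst ratings. Values are the standards'
# maximum burst rates in MB/s; real sustained rates are lower.

ata_burst_mb_s = {"ATA-33": 33, "ATA-66": 66, "ATA-100": 100, "ATA-133": 133}

def effective_rate(device_a, device_b):
    # Devices are backward compatible, but a mixed pairing runs at the
    # slowest rated speed between them.
    return min(ata_burst_mb_s[device_a], ata_burst_mb_s[device_b])

def burst_seconds(megabytes, standard):
    # Best-case time to move a given amount of data at the rated speed.
    return megabytes / ata_burst_mb_s[standard]

print(effective_rate("ATA-133", "ATA-66"))  # prints 66
print(burst_seconds(700, "ATA-100"))        # prints 7.0 (a 700 MB CD image)
```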
All IDE cables come with a red line down one side; this red line shows which way the cable
should be plugged in. The red line should always connect to pin 1 of the IDE port.
Checking your motherboard documentation should show you which end is pin one. In some
cases it will be written on the board itself.
In the case of ATA-66/100/133 there is a certain order in which you plug devices in; the cable
is colour coded to help you get them in the correct order.
The Blue connector should be connected to the system board
The Black connector should be connected to the master device
The Grey Connector should be connected to the slave device
IDE (Integrated Drive Electronics, also called ATA or PATA) is used for older hard drives and
optical drives. The IDE interface is still used quite often, but it has been largely replaced by
its successor, SATA; the main reason for this is the lower price of SATA devices.
The system panel cables are two-wire cables that are color coded to help identify where
they connect to the motherboard's system panel connector. The black or white wire is the
ground (GND) wire and the colored wire is the powered wire.
Connector
The cables, colors, and connections vary depending on the computer case and
motherboard you have; however, they generally include the cables mentioned below.
HDD LED (IDE LED) - The LED activity light for the hard drive. This is the LED that
flashes as information is being written and read from the hard drive.
Power LED (PLED) - The LED power light, which indicates when the computer is on,
off, or in Standby.
Power SW (PWRSW) - Controls the power button that allows you to turn on and off
the computer.
Reset SW - Handles the reset button to restart the computer.
Speaker - The internal speaker used to sound the beep noises you hear from your
computer when it is booting.
Firmware
Firmware refers to software instructions, usually stored on ROM chips. Firmware exists on
most PC components, such as video adapters, hard drives, network adapters, and printers.
Because these instructions are stored in nonvolatile memory, they are always available and
do not have to be reloaded every time the computer is started.
CMOS
One important type of firmware is CMOS (complementary metal-oxide semiconductor).
The CMOS chip retains settings such as the time, keyboard settings, and boot sequence.
The CMOS also stores interrupt request line (IRQ) and input/output (I/O) resources that the
BIOS uses when communicating with the computer’s devices. The CMOS chip is able to
keep these settings in the absence of computer power because of a small battery, which is
usually good for two to ten years.
If the system repeatedly loses track of time when turned off, you probably need to replace
the battery.
CMOS RAM
Motherboards include a small separate block of memory made from Complementary Metal
Oxide Semiconductor Random Access Memory (CMOS RAM) chips, which is kept alive
by a battery (known as the CMOS battery) even when the PC's power is off. This prevents
the settings from being lost, so the PC does not need to be reconfigured every time it is
powered on.
BIOS
Another type of computer firmware is the BIOS (basic input/output system). The BIOS is
responsible for informing the processor of the devices present and how to communicate with
them. Whenever the processor makes a request of a component, the BIOS steps in and
translates the request into instructions that the component can understand.
Older computers contained true read-only BIOS that could not be altered. This meant that
one could not add new types of components to the computer because the BIOS would not
know how to communicate with them. Because this seriously limited users’ ability to install a
new type of device not recognized by the older BIOS, flash BIOS was introduced. Now, the
flash BIOS can be electronically upgraded (or flashed) so that it can recognize and
communicate with a new device type.
Plug-and-Play BIOS
All modern PC BIOSes are Plug and Play, and the CMOS settings program includes some
options for configuring it. One option is Plug and Play Operating System. When enabled, this
setting informs the BIOS that the OS will configure Plug-and-Play devices. Another Plug-
and-Play option allows you to enable or disable the BIOS configuration of Plug-and-Play
devices.
In general, to update a flash BIOS you turn the computer off, insert the manufacturer's
floppy disk, and restart the computer. The disk contains a program that automatically
"flashes" (updates) the BIOS so
that it can recognize different hardware types or perform different functions than it could
before.
At startup, the BIOS retrieves the resource settings from the CMOS and assigns them to the
appropriate devices. Then the BIOS processes the remaining CMOS settings, such as the
time or keyboard status. Finally, the BIOS searches for an operating system and hands
control of the system over to it. The CMOS settings are no longer required at this point, but
the BIOS continues to work, translating communications between the processor and other
components.
Every PC has a BIOS, and you may need to access yours from time to time. Inside the BIOS
you can set a password, manage hardware, and change the boot sequence. The BIOS user
interface is straightforward and easy to access, but you should take caution when exploring
the BIOS: don't change settings if you don't know what they do.
Lab
Viewing System Settings in CMOS
1. Restart your computer, and remain at the keyboard.
2. As the computer starts up, watch for a message at the bottom of the screen telling
you to press a specific key or key combination in order to enter Setup.
3. Press the key or key combination indicated.
4. Spend time viewing the settings, but do not make any changes.
5. When you have finished, use the key combination indicated on the screen for exiting
without saving any changes.
BIOS Setup
Different computer systems have unique methods of accessing BIOS Setup. The BIOS,
or Basic Input/Output System, is firmware stored on the motherboard of a computer
system that holds vital information regarding system startup. Settings and configuration
of BIOS can be viewed and changed using the BIOS setup utility that can be accessed as
soon as the system starts. A combination of keys has to be pressed to go to the BIOS setup
menu.
Because of the wide variety of computer and BIOS manufacturers over the evolution of
computers, there are numerous ways to enter the BIOS or CMOS Setup. Below is a listing of
the majority of these methods as well as other recommendations for entering the BIOS
setup.
Computers manufactured in the last few years allow you to enter the CMOS setup
by pressing one of the five keys below during boot. Usually it's one of the first two.
F1
F2
DEL
ESC
F10
F10 is also often used for the boot menu. If F10 opens the boot menu, your computer likely
uses F2 to enter setup.
A user will know when to press this key when they see a message similar to the example
below as the computer is booting. Some older computers may also display a flashing block
to indicate when to press the F1 or F2 keys.
Tip: If your computer is new and you are unsure of which key to press when the computer is
booting, try pressing and holding one or more keys on the keyboard. This causes a stuck-key
error, which may allow you to enter the BIOS setup.
Motherboard Form Factors
The form factor determines the general layout, size, and feature placement on a
motherboard. Different form factors usually require different style cases. Differences
between form factors can include physical size and shape, mounting hole locations, feature
placement, power supply connectors, and others. Form factor is especially important if you
build your own computer systems and need to ensure that you purchase the correct case
and components.
Note:
The computer case, power supply, and motherboard must all be compatible and fit together
as an interconnecting system. The standards that describe the size, shape, screw hole
positions, and major features of these interconnected components are called form factors.
Using a matching form factor for the motherboard, power supply, and case assures you that:
The motherboard fits in the case.
The power supply cords to the motherboard provide the correct voltage, and the
connectors match the connections on the board.
The holes in the motherboard align with the holes in the case for anchoring the board
to the case.
The holes in the case align with ports coming off the motherboard.
For some form factors, wires for switches and lights on the front of the case match up
with connections on the motherboard.
The holes in the power supply align with holes in the case for anchoring the power
supply to the case.
The two form factors used by most desktop and tower computer cases and power supplies
are the ATX and microATX form factors.
Various form factors of motherboards are AT, Baby AT, ATX, Mini-ATX, Micro-ATX, Flex
ATX, ITX, LPX and Mini LPX and NLX. The most popular motherboard form factors are ATX,
MicroATX, Flex ATX, BTX, and NLX, in that order.
Exam Tip: The form factors examinable by CompTIA are ATX, BTX, MicroATX, and NLX.
AT & Baby AT
Prior to 1997, IBM computers used large motherboards. After that, however, the size of the
motherboard was reduced, and boards using the AT (Advanced Technology) form factor were
released. The AT form factor is found in older computers (386 class or earlier). Some of the
problems with this form factor mainly arose from the physical size of the board, which is 12"
wide, often causing the board to overlap with space required for the drive bays.
Following the AT form factor, the Baby AT form factor was introduced. With the Baby AT
form factor the width of the motherboard was decreased from 12" to 8.5", limiting problems
associated with overlapping on the drive bays' turf. Baby AT became popular and was
designed for peripheral devices — such as the keyboard, mouse, and video — to be
contained on circuit boards that were connected by way of expansion slots on the
motherboard.
Baby AT was not without problems however. Computer memory itself advanced, and the
Baby AT form factor had memory sockets at the front of the motherboard. As processors
became larger, the Baby AT form factor did not allow for space to use a combination of
processor, heatsink, and fan. The ATX form factor was then designed to overcome these
issues.
Full AT Motherboard
Baby AT
NLX
Predating the ATX form factor, NLX was an Intel standard for motherboards targeted to the
low-end consumer market. It was an attempt to answer the need for motherboards with more
components built in, including both sound and video. These motherboards became obsolete
very quickly, in part because they were built around a very old expansion bus, the ISA bus,
which had some severe limits that later bus designs overcame.
ATX
With the need for a more integrated form factor that defined standard locations for the keyboard, mouse, I/O, and video connectors, the ATX form factor was introduced by Intel Corporation in 1996. The ATX form factor brought about many changes in the computer. The layout essentially rotated the Baby AT arrangement, moving the processor and memory away from the expansion slots so that full-length expansion cards could be used and the board fit its case better. The ATX form factor specified changes to the motherboard, along with the case and power supply. Some of the design improvements of the ATX form factor included a single 20-pin connector for the power supply, a power supply that blows air into the case instead of out of it for better airflow, less overlap between the motherboard and drive bays, and integrated I/O port connectors soldered directly onto the motherboard. The ATX form factor was an overall better design for upgrading.
Micro-ATX
MicroATX followed the ATX form factor and offered the same benefits while reducing overall system cost through a reduction in the physical size of the motherboard. This was done by reducing the number of I/O slots supported on the board. The microATX form factor also provided more I/O space at the rear and reduced emissions through the use of integrated I/O connectors. A microATX power supply uses a 24-pin P1 connector and is not likely to have as many extra wires and connectors as an ATX power supply.
LPX
While ATX is the most well-known and widely used form factor, there are also non-standard, proprietary form factors known as LPX and Mini-LPX. The LPX form factor is found in low-profile cases (desktop models, as opposed to towers or mini-towers) with a riser card arrangement for expansion cards, where expansion boards run parallel to the motherboard. While this allows for smaller cases, it also limits the number of expansion slots available. Most LPX motherboards have sound and video integrated onto the motherboard. While this can make for a low-cost, space-saving product, such systems are generally difficult to repair due to a lack of space and overall non-standardization. The LPX form factor is not suited to upgrading and offers poor cooling.
BTX
The BTX, or Balanced Technology Extended, form factor, unlike its predecessors, is not an evolution of a previous form factor but a complete break from the popular and dominant ATX form factor. BTX was developed to take advantage of technologies such as Serial ATA, USB 2.0, and PCI Express. Changes to the layout include better component placement for back-panel I/O controllers, and BTX boards can be smaller than microATX boards. The BTX form factor also gave the industry a push toward tower-size systems with an increased number of system slots.
One of the most talked-about features of the BTX form factor is its in-line airflow. In the BTX layout the memory slots and expansion slots switch places, allowing the main components (processor, chipset, and graphics controller) to share the same airflow. This reduces the number of fans needed in the system, and thereby the noise. System-level acoustics are further improved by the reduced air turbulence of the in-line airflow design.
Initially, three motherboard sizes were offered in the BTX form factor. The first, picoBTX, offers four mounting holes and one expansion slot, while microBTX holds seven mounting
holes and four expansion slots, and lastly, regularBTX offers 10 mounting holes and seven expansion slots. The BTX design is incompatible with ATX, with the exception of being able to use an ATX power supply with BTX boards.
Today the industry accepts the ATX form factor as the standard, although legacy AT systems remain in use. Since the BTX design is incompatible with ATX, only time will tell whether it will overtake ATX as the industry standard.
ITX
The ITX line of motherboard form factors was developed by VIA as a low-power, small form
factor (SFF) board for specialty uses, such as home-theater systems and as embedded
components. ITX itself is not an actual form factor but a family of form factors. The family
consists of the following form factors:
Mini-ITX—6.7″ × 6.7″ (170 × 170 mm)
Nano-ITX—4.7″ × 4.7″ (120 × 120 mm)
Pico-ITX—3.9″ × 2.8″ (100 × 72 mm)
Mobile-ITX—2.4″ × 2.4″ (60 × 60 mm)
The mini-ITX motherboard has four mounting holes that line up with three or four of the holes in the ATX and microATX form factors. On mini-ITX boards, the rear interfaces are placed in the same locations as those on ATX motherboards. These features make mini-ITX boards compatible with ATX chassis. The mounting compatibility ends there, however: despite the PC compatibility of the other ITX form factors, they are used in embedded systems, such as set-top boxes, and lack the requisite mounting and interface specifications.
The illustration below shows the differences in sizes between the different motherboard form
factors.
Labs
Installing a Motherboard
Before installing a new motherboard, you should already have installed the CPU, mounted the CPU cooler and inserted the RAM (covered later). This is because it is easier to install these parts first, before squeezing the motherboard into the cramped quarters of a case.
Steps:
1. Install I/O Shield (Metal Back Plate)
All motherboards should come with a metal plate with cutouts for the back connectors and ports. This metal plate is called the I/O shield (input/output shield). The I/O shield snaps into the back of the computer case; no screws are needed.
Before installing a motherboard in a case, make sure the I/O shield is inserted in the correct orientation. If your I/O shield is the type with tiny protruding tabs, they should point toward the inside of the computer case.
2. Mount the Motherboard
Lower your motherboard slowly into the computer case so that:
1. the motherboard holes align with the standoffs on the case
2. the rear motherboard ports line up with the I/O shield
Then grab a screwdriver and fasten the motherboard to the computer case with the screws that come with the case (as shown below). For a motherboard to be properly secured, it should be fastened with at least four screws.
3. Connect Front Panel Connectors to Motherboard
In order for the power button, reset button, front USB ports, front audio ports and LED lights
of the computer case to work, you will need to plug their front panel connectors into the front
panel header on the motherboard.
Manufacturers tend to use abbreviations to label front panel connectors and headers. When
installing a motherboard, it's important that you know what these abbreviations stand for:
H.D.D LED, HD - hard drive LED
POWER LED, PWR LED, PLED, - power LED
POWER SW, PWR SW, PW - power switch
RESET SW, RESET, RES - reset switch
SPEAKER, SPEAK - internal PC speaker
HD AUDIO, AC'97, F_AUDIO - front panel headphone and microphone jacks
USB, F_USB - front panel USB port
Revision Questions
Complete the following questions by ticking the correct answer(s) where applicable.
Using the information you learned in this chapter and related to the specifications found in
the figure below, answer the questions that follow.
1. If someone you know were buying this motherboard, what type of case would they
need to purchase?
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
6. What do you think the letters O.C. after some of the memory chips mean in relation to this motherboard?
______________________________________________________________________
______________________________________________________________________
7. What can be inserted into the PS/2 port? (Select the best answer.)
A. mouse
B. keyboard
C. mouse or keyboard
D. display
E. external storage
8. What is the most likely reason this motherboard manufacturer chose to include
two PCI expansion slots?
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
Chapter 3
PROCESSORS
Objectives
Introduction to Processors
How CPU operates
CPU Structure
Factors that affect a CPU performance
CPU Brands
Characteristics of CPU
CPU technology
Processor Modes
CPU Sockets/Slots
CPU Cooling
Labs
Introduction
The processor, also known as the microprocessor or CPU, can be thought of as the brain of the system and is responsible for executing software instructions and performing calculations. To understand the basic functionality of a CPU, start with the abbreviation itself: CPU stands for Central Processing Unit, and it is physically present as a single powerful chip attached to the mainboard of a PC. A CPU can be thought of as the seat of intellect within a PC, its central intelligence. Every computer function is ultimately carried out as mathematical and logical operations, and it is the CPU that executes these operations and so enables the computer to carry out its instructions.
There are basically two CPU manufacturers today: Intel and AMD. Their processors are not interchangeable, meaning that if you buy an AMD CPU, you must have a motherboard that supports AMD CPUs, and vice versa.
The primary function of all CPUs is to execute sets of stored instructions commonly referred to as computer programs.
The CPU is able to do this by following a four-step operational process.
1. Fetch Process: The fetch process is the first operation where the CPU reads an
instruction from the PC’s main memory. Sometimes fetching also includes reading
data from an I/O module.
2. Decode: Decode or the interpretation phase involves decoding of the instruction to
decide what sort of action should be carried out.
3. Execute: This is where the actual desired operation is carried out.
4. Writeback: This is the last CPU step, where the results of the executed operation are written back to memory or to I/O modules.
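The four-step cycle above can be sketched as a toy interpreter. The opcodes, memory layout, and program below are invented purely for illustration; real instruction sets are far richer.

```python
# A toy illustration of the four-step CPU cycle: fetch, decode,
# execute, writeback. Opcodes and memory layout are invented for
# demonstration only; real CPUs are vastly more complex.

def run(program, memory):
    """Execute a list of (opcode, operand) tuples against a memory dict."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while pc < len(program):
        opcode, operand = program[pc]   # 1. Fetch the next instruction
        pc += 1
        if opcode == "LOAD":            # 2. Decode / 3. Execute
            acc = memory[operand]
        elif opcode == "ADD":
            acc += memory[operand]
        elif opcode == "STORE":
            memory[operand] = acc       # 4. Writeback to memory
        else:
            raise ValueError(f"unknown opcode {opcode!r}")
    return memory

mem = {"a": 2, "b": 3, "result": 0}
run([("LOAD", "a"), ("ADD", "b"), ("STORE", "result")], mem)
print(mem["result"])  # 5
```

Each loop iteration walks the same fetch, decode, execute, writeback sequence described above, with the program counter tracking where execution has reached.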
CPU Structure
As there are a great many variations in architecture between the different kinds of CPU, we shall begin by looking at a simplified model of the structure. The model is a good basis on which to build your knowledge of the workings of a microprocessor.
Arithmetic and Logic Unit (ALU)
The ALU performs the arithmetic and logical operations of the processor. It works in conjunction with the register array for many of these operations, in particular the accumulator and flag registers. The accumulator holds the results of operations, while the flag register contains a number of individual bits that store information about the last operation carried out by the ALU.
Decoder
This is used to decode the instructions that make up a program as they are processed, and to determine what actions must be taken in order to process them. These decisions are normally made by looking at the opcode of the instruction, together with the addressing mode used.
Timer or clock
The timer or clock ensures that all processes and instructions are carried out and
completed at the right time. Pulses are sent to the other areas of the CPU at regular
intervals (related to the processor clock speed), and actions only occur when a pulse
is detected. This ensures that the actions themselves also occur at these same
regular intervals, meaning that the operations of the CPU are synchronised.
Registers
A register is a memory location within the CPU itself, designed to be quickly accessed for
purposes of fast data retrieval. Processors normally contain a register array, which houses
many such registers. The different registers perform specific functions and they include the
program counter, instruction register, accumulator, memory address register and stack
pointer.
The registers contain instructions, data and other values that may need to be quickly
accessed during the execution of a program.
Many different types of registers are common between most microprocessor designs. These
are:
Program Counter (PC)
This register holds the memory address of the next instruction to be executed in a program. It ensures that the CPU knows at all times where it has reached, so that it is able to resume execution at the correct point and the program is executed correctly.
Control Bus
The control bus carries the signals relating to the control and coordination of the various activities across the computer, which can be sent from the control unit within the CPU. Different architectures result in differing numbers of lines within the control bus, as each line performs a specific task. For instance, separate lines are used for read, write, and reset requests.
Data Bus
This is used for the exchange of data between the processor, memory and
peripherals, and is bi-directional so that it allows data flow in both directions along the
wires. Again, the number of wires used in the data bus (sometimes known as the
'width') can differ. Each wire is used for the transfer of signals corresponding to a
single bit of binary data. As such, a greater width allows greater amounts of data to
be transferred at the same time.
Address Bus
The address bus contains the connections between the microprocessor and memory
that carry the signals relating to the addresses which the CPU is processing at that
time, such as the locations that the CPU is reading from or writing to. The width of
the address bus corresponds to the maximum addressing capacity of the bus, or the
largest address within memory that the bus can work with. The addresses are
transferred in binary format, with each line of the address bus carrying a single binary
digit. Therefore the maximum address capacity is equal to two to the power of the
number of lines present (2^lines).
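The 2^lines relationship can be checked directly. A minimal sketch in Python; the 32-bit and 36-bit figures match the Pentium address buses discussed later in this chapter.

```python
# Addressable memory grows as 2 ** (number of address lines).
# A 32-bit address bus reaches 4 GB; a 36-bit bus reaches 64 GB.

def max_addressable_bytes(address_lines):
    """Largest number of distinct byte addresses the bus can express."""
    return 2 ** address_lines

GB = 2 ** 30  # one gigabyte (binary)
print(max_addressable_bytes(32) // GB)  # 4
print(max_addressable_bytes(36) // GB)  # 64
```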
Memory
Computer memory provides the primary storage for a computer system. The CPU will
typically have internal memory (embedded in the CPU) used for operations, and external
memory, which is located on the motherboard. The important consideration about memory is
that the control unit is responsible for controlling usage of all memory.
The external memory houses the program being executed, and as such is a crucial part of
the overall structure involved in program execution.
1. Speed
A CPU's speed is measured in hertz, and today it usually lies in the megahertz (MHz) or gigahertz (GHz) range. A megahertz CPU completes one million clock cycles per second, whereas a gigahertz CPU completes one billion cycles per second. With today's technology, virtually all CPUs run in the gigahertz range, and you seldom see CPUs with speeds in the MHz range anymore.
Theoretically, a 500 MHz CPU is six times slower than a 3 GHz CPU, and a 3.6 GHz CPU is faster than a 3 GHz or a 3.4 GHz CPU. In general, the higher the frequency of a CPU, the faster the speed of the computer.
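As a quick sanity check of the frequency comparison above (a deliberate simplification that ignores architecture, cache, and core count):

```python
# Relative CPU speed as a pure ratio of clock frequencies, per the text.
# This ignores everything except frequency, so treat it as illustrative.

def speed_ratio(freq_a_ghz, freq_b_ghz):
    """How many times faster CPU A is than CPU B, by frequency alone."""
    return freq_a_ghz / freq_b_ghz

print(speed_ratio(3.0, 0.5))  # 6.0 -- a 3 GHz CPU vs a 500 MHz CPU
```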
2. Cache
Since the hard drive is much slower than the CPU, data transfer often takes a long time. To speed things up, RAM is used to store temporary copies of information from the hard drive. Instead of heading straight to the hard drive, the system first checks and retrieves the data from the RAM. Only when the required information is not found in the RAM does it go to the hard drive.
As CPU speeds increased to the point where the RAM could no longer keep up, the transfer of information again became a serious problem. To solve this, a cache (effectively a small and extremely fast memory) was added to the processor to store immediate instructions from the RAM. Since the cache runs at the same speed as the CPU, it can provide information to the CPU rapidly, without any lag.
There are different levels of cache. Level 1 (L1) cache is the most basic form of cache and is
found on every processor. Level 2 (L2) cache has a bigger memory size and is used to store
more immediate instructions. In general, the L1 cache caches the L2 cache which in turn
caches the RAM which in turn caches the hard disk data. With the newer multi-core
technology, there is even a L3 cache that is bigger in size and is shared among the various
cores.
L2/L3 cache plays the greatest part in improving the performance of the processors. The
larger the cache size, the faster the data transfer and the better the CPU performance.
However, cache is very costly, which is why you don't find 1GB of cache in your system. The typical cache size is between 512KB and 8MB. The latest Intel Core i7 Extreme Processor comes with a 12MB L3 cache, which also explains its hefty price tag of approximately $1,000.
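The lookup order described above (L1, then L2, then RAM) can be sketched as follows. The level names match the text, but the contents and cycle costs are invented for illustration and do not reflect real hardware figures.

```python
# Multi-level lookup: try the small, fast stores first and fall back
# to slower ones. Contents and costs are illustrative only.

levels = [
    ("L1", {"x": 1}, 1),            # (name, contents, cost in cycles)
    ("L2", {"y": 2}, 10),
    ("RAM", {"y": 2, "z": 3}, 100),
]

def lookup(key):
    """Return (level found, value, cumulative cost) for a key."""
    total_cost = 0
    for name, store, cost in levels:
        total_cost += cost          # every level probed adds its latency
        if key in store:
            return name, store[key], total_cost
    raise KeyError(key)

print(lookup("x"))  # ('L1', 1, 1)    -- hit in the fastest level
print(lookup("z"))  # ('RAM', 3, 111) -- two misses before the hit
```

The sketch shows why a hit in a small, fast cache is so valuable: the cost of a lookup is dominated by how far down the hierarchy the data is found.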
3. Multi-Core
In the past, if you wanted a faster computer, you had to get a faster CPU. Today, this is only partially true, because CPU speed cannot increase forever. There is a limit to how fast transistors can switch; once that plateau is reached, the speed cannot be increased any further.
To tackle this problem, CPU manufacturers adopted multi-core technology, which literally means putting multiple cores in a single CPU chip. While increasing CPU speed results in faster data calculation, putting more cores in a chip results in more work being done at the same time.
CPU Brands
There are many CPU manufacturers, but the prevailing ones in the personal computer
market today are Intel Corporation and AMD (Advanced Micro Devices, Inc.). Intel received
a huge boost when IBM selected their 8088 processor for the original IBM-PC in 1981. For
over a decade AMD produced “clones” of Intel CPUs under a licensing agreement granted to
them at a time when Intel’s manufacturing capacity could not keep up with the demand for
CPUs. Eventually this agreement ended, and, since 1995, AMD has designed and produced
their own CPUs. Both companies manufacture more than CPUs, but it is their competition in the CPU market that concerns us here.
Intel
Over time, both manufacturers have released a number of CPU models. Intel’s have ranged
from the Intel 8086 (released in 1978) to the latest generation of the Pentium family of
processors, including those with “Pentium” in their name as well as the Celeron, Xeon, and
Itanium brands. The following subsections discuss the common processors.
The first Pentium model included superscalar processing, meaning that it was capable of parallel processing and could process two sets of instructions at the same time. Intel made numerous advances in each new model, increasing the speed and the number of transistors per chip.
Pentium processors in general have a 64-bit data bus, meaning that the processor can
receive or transmit 64 bits at a time. The majority of Pentium models to date have a 32-bit
register (internal data bus), which is the on-board storage area in the processor. Newer
models have 64-bit registers.
Many computer professionals base the label “64-bit CPU” on the size of the register.
Therefore, although many Pentium models are called 64-bit, only those with 64-bit registers
actually earn that title. All Pentium CPUs include L1 cache memory and, beginning with the
1997 introduction of the Pentium II, most models included onboard L2 cache of varying
sizes. The Pentium III (1999) brought advances in many areas, including enhanced support
for multimedia and a faster L2 cache.
Early Pentium processors had a 32-bit address bus, for addressing up to 4 GB of memory,
and later Pentium models have a 36-bit address bus, theoretically allowing them to use up to
64 GB of RAM, although this capability has only recently been supported in motherboards.
The Pentium 4 (P4) processor, released in 2000, includes all of the features of the Pentium
III plus a few more. The chip went through a redesign that includes a new architecture called
the NetBurst microarchitecture. Where older Pentiums pretty much topped out at 1 GHz, the
P4 works at much faster speeds and it can exceed speeds of 2 GHz on the desktop. Intel
states that NetBurst allows for future processor speeds of up to 10 GHz.
The P4 also includes a number of new instructions and, interestingly enough, a smaller L1
cache than a Pentium III. You would think this would make the system slower, but, in reality,
system performance improves. The speed increases because the cache refreshes or
updates more efficiently when smaller cache sizes are used. The L2 cache on the original
Pentium 4 is 256 KB, but it operated at a faster speed than its predecessor. Subsequent
Pentium 4 models have larger L2 caches.
Xeon
You can consider the Intel Xeon CPUs as the opposite of the Celerons. The Xeon chips are
high-end CPUs for the server market. Based on the current Pentium model, beginning with
the Pentium II Xeon, these CPUs have several enhancements not found in the same
generation of Pentium. The most compelling for server manufacturers has been the
multiprocessor support that allows identical Xeon chips to work extremely well together.
Server systems utilize sets of two, four, eight, or more of these chips. After dropping
“Pentium” from the name, Xeon models were manufactured with even more improvements,
including 64-bit registers and support for L3 (Level 3) cache on the motherboard.
Itanium
Not a true member of the Pentium family, the Itanium line grew out of collaboration between
Intel and Hewlett-Packard. Designed solely for the high-end server market, Intel introduced
the first Itanium CPU in 2001. Although it is downwardly compatible with operating systems
and code written for the previous Intel (and AMD) CPU architecture, referred to as x86, such
software ran more slowly on the early Itanium than it did on the Intel and AMD x86 CPUs. To
access the full power of the Itanium you must use an operating system and other software
optimized to take advantage of its features.
Subsequent Itanium models, such as the Itanium 2, have greatly improved performance and
better acceptance by server manufacturers. In addition to HP (of course), they include Bull,
Fujitsu, Hitachi, NEC, Silicon Graphics, and Unisys. In July 2006, Intel introduced the dual-
core Itanium CPU, with more power and far less power consumption than previous models.
At this writing, two quad-core designs are in the works.
AMD
Former employees of Fairchild Semiconductor founded Advanced Micro Devices, Inc., in
1969. They manufacture a large variety of products based on integrated circuits. AMD CPU
lines include Athlon, Opteron, Turion 64, Sempron, and Duron. As with Intel, we’ll first look at
older CPU lines before describing the newer lines.
Around the time of the release of the Intel Pentium II processor, AMD released its own sixth-
generation processor, the K6. This processor supported speeds from 166–266 MHz. As with
most processors, the K6 had a 64-bit data bus, a 32-bit register, and a 32-bit address bus. It
also included between 256 KB and 1 MB of L1 cache, but it did not include an on-board L2
cache.
Duron
AMD released the Duron CPU in 1999 and positioned it to compete with the Intel Celeron in
the economy grade PC market. A sixth-generation CPU, the first Duron supported speeds
between 700 and 800 MHz.
Opteron
Introduced as an eighth-generation CPU, this first CPU in the Opteron line has a 40-bit
address bus, and it can run both 32-bit and 64-bit operating systems. It is moving toward a
true 64-bit address bus and has AMD-developed performance enhancements.
CPU Characteristics
Hyperthreading
A thread, or thread of execution, is a portion of a program that can run separately from other
portions of the program. A thread can also run concurrently with other threads.
Hyperthreading, also known as simultaneous multithreading (SMT), is a CPU technology
that allows two threads to execute at the same time within a single execution core. This is
considered to be partially parallel execution. Intel introduced hyperthreading in the Pentium 4
Xeon CPU.
Essentially, when HT Technology is enabled in the system BIOS and the processor is
running a multithreaded application, the processor is emulating two physical processors. The
Pentium 4 was the first desktop processor to support HT Technology, which Intel first
developed for its Xeon workstation and server processor family.
Pentium 4 processors with processor numbers (meaning newer Pentium 4 versions) all
support HT Technology, as do older models with 800MHz FSB and a clock speed of
3.06GHz or higher. HT Technology is not needed (and is therefore not present) in dual-core,
three-core, and quad-core processors because each processor core is capable of handling
separate execution threads in a multithreaded application.
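The notion of two threads of execution within one program can be illustrated with Python's threading module; hyperthreading is the hardware analogue, letting one core advance two such threads at once. The worker function and its workload here are invented for illustration.

```python
# Two threads of execution running within a single program.
import threading

results = []
lock = threading.Lock()

def worker(name, n):
    """Do some independent work, then record the result safely."""
    total = sum(range(n))
    with lock:                       # protect the shared list
        results.append((name, total))

t1 = threading.Thread(target=worker, args=("thread-1", 10))
t2 = threading.Thread(target=worker, args=("thread-2", 100))
t1.start(); t2.start()               # both threads now run concurrently
t1.join(); t2.join()                 # wait for both to finish
print(sorted(results))
```

The two threads may finish in either order, which is exactly the independence between threads that hyperthreading and multi-core designs exploit.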
The first dual-core desktop processors were introduced by Intel (Pentium D) and AMD
(Athlon 64 X2) in 2005. Athlon 64 X2’s processor cores communicate directly with each
other, enabling systems running single-core Athlon 64 processors to swap processors after a
simple BIOS upgrade. The Pentium D, on the other hand, required new chipsets to support
it. The Core 2 Duo is Intel’s current dual-core processor, and, like the AMD Athlon 64 X2, the
Core 2 Duo’s processor cores communicate directly with each other.
Both Intel and AMD have released processors that include more than two cores. Intel’s Core
2 Quad and some versions of the Core 2 Extreme contain four processor cores, and AMD’s
Phenom x4 contains four processor cores and the Phenom x3 contains three.
Components are sold to consumers with a maximum clock rate, but they don’t always run at
that maximum number. To explain, let me use a car analogy. The CPU is often called the
“engine” of the computer, like a car engine. Well, your car’s speedometer might go up to
120MPH, but you’ll probably never drive at that maximum—for a variety of reasons! When it
comes to CPUs, the stated clock rate is the maximum clock rate, and the CPU usually runs
at a speed less than that; in fact, it can run at any speed below the maximum.
Now, we’re all familiar with speeds such as 2.4GHz, 3.0GHz, or 3.2GHz. But what is the
basis of these speeds? Speed can be broken down into three categories that are
interrelated:
• Motherboard clock speed—The base clock speed of the motherboard. Also referred
to as the system bus speed, this speed is generated by a quartz oscillating crystal
soldered directly to the motherboard. For example, the base clock speed of a typical
motherboard might be 333MHz.
• External clock speed—This is the speed of the front side bus (FSB), which connects
the CPU to the memory controller hub (Northbridge) on the motherboard. This is usually
variable and depends on the CPU you install. In addition, it is determined from the base
clock speed of the motherboard. For example, a typical motherboard’s maximum
external clock speed (or FSB) could be 1333MHz. Simply put, this means that it is transferring four times the amount of data per cycle as compared to the original base clock speed:
333MHz × 4 = 1,333MHz.
• Internal clock speed—This is the internal speed of the CPU. As an example, the Intel
Q8400 CPU is rated at 2.66GHz. The CPU uses an internal multiplier that is also based on the motherboard's base clock. The multiplier for this CPU is eight. The math is as follows: base clock speed × multiplier = internal clock speed.
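The clock relationships in the list above can be checked with a short sketch. Note that the 333MHz base clock is a rounded figure (real boards use 333.33MHz), which is why the quoted FSB is 1333MHz while the rounded math below gives 1332.

```python
# Clock speeds derived from the motherboard base clock, per the text.
base_clock_mhz = 333  # rounded; real boards use 333.33MHz

def fsb_speed(base_mhz, transfers_per_cycle=4):
    """External (FSB) speed: the base clock transferring four times per cycle."""
    return base_mhz * transfers_per_cycle

def internal_speed(base_mhz, multiplier):
    """Internal CPU speed: base clock speed x multiplier."""
    return base_mhz * multiplier

print(fsb_speed(base_clock_mhz))          # 1332 (quoted as 1333MHz)
print(internal_speed(base_clock_mhz, 8))  # 2664 MHz, i.e. ~2.66GHz (the Q8400)
print(internal_speed(base_clock_mhz, 9))  # 2997 MHz, i.e. ~3.00GHz (the Q9650)
```

Overclocking, discussed below, amounts to raising the multiplier (or the base clock) in these same formulas.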
A motherboard can possibly support faster CPUs as well; for example, the Intel Q9650 has an internal clock speed of 3.00GHz, which means it has a multiplier of nine (3.00GHz / 333MHz = 9). Some motherboards allow for overclocking, which enables the user to increase the multiplier within the BIOS, thereby increasing the internal clock speed of the CPU. This can damage the system, analogous to blowing the engine of a car when attempting to run a 10-second quarter mile, so approach overclocking with caution.
Throttling
A major problem arises as processor, memory, and other motherboard components become more powerful and smaller: overheating. This occurs despite efforts to mitigate the problem with heat sinks, fans, and other cooling techniques built into the design of the chips themselves. One response is throttling, in which the processor deliberately slows its clock speed to reduce the heat it produces.
Throttling can also take place when processors do not need to run at full speed when they
have little, or no, work to perform. By slowing down—or throttling—the processor’s clock
speed when the workload is light, the processor runs cooler, the system uses less energy,
and—in the case of mobile systems—the computer enjoys a longer battery life.
Overclocking
Overclocking is the practice of forcing a CPU or other computer component to run at a higher clock rate than the manufacturer intended. PC hobbyists and gamers often overclock their systems to get more performance out of CPUs, video cards, chipsets, and RAM. The downside to this practice is that overclocking produces more heat and can cause damage to the motherboard, CPU, and other chips, which may even burst into flames.
Multimedia Extensions
All Intel and AMD processors in current use include various ways of boosting multimedia performance. The first processor to include this was the Pentium MMX, which included 57 new instructions (known as MMX) for working with multimedia. According to Intel, MMX doesn't stand for anything, but it has commonly become known as MultiMedia eXtensions.
MMX was the first example of what is known as single instruction, multiple data (SIMD)
capability.
Later Intel processors included enhanced versions of MMX known as SSE (MMX+70
additional instructions, introduced with the Pentium III), SSE2 (MMX+SSE+144 new
instructions, introduced with the Pentium 4), SSE3 (MMX+SSE+SSE2+13 new instructions,
introduced with the Pentium 4 Prescott), and, most recently, SSSE3
(MMX+SSE+SSE2+SSE3+32 new instructions, introduced with the Core 2 Duo).
AMD also provides multimedia-optimized instruction sets in its processors, starting with
3DNow! (introduced by the K6, which was roughly equivalent to the Pentium MMX).
However, AMD’s version differs in details from Intel’s, offering 21 new instructions. The AMD
Athlon introduced 3DNow! Enhanced (3DNow!+24 new instructions), and the Athlon XP
introduced 3DNow! Professional (3DNow!+Enhanced+51). 3DNow! Professional is
equivalent to Intel’s SSE. Starting with the Athlon 64 family, AMD now supports SSE2, and
added SSE3 support to the Athlon 64 X2 and newer versions of the Athlon 64 family.
VRM
Starting with Socket 7 versions of the Intel Pentium, processors have not received their
power directly from the power supply. Instead, a device called a voltage regulator module
(VRM) has been used to reduce 5V or 12V DC power from the power supply to the
appropriate power requested by the processor through its voltage identification (VID) logic.
Although some motherboards feature a removable VRM, most motherboards use a built-in
VRM that is located next to the processor socket.
x64 (64-Bit Extensions)
The Athlon 64 was the first desktop processor to support 64-bit extensions to the 32-bit x86
architecture. These 64-bit extensions, commonly known as x64, enable processors to use
more than 4GB of RAM and run 64-bit operating systems, but maintain full compatibility with
32-bit operating systems and applications.
Late-model Pentium 4 processors from Intel also support x64, as do subsequent processors
such as the Pentium 4 Extreme Edition, Pentium D, Pentium Extreme Edition, Core 2 Duo,
Core 2 Quad, Core 2 Extreme, Phenom x4, and Phenom x2. Most processors made today
support x64 operation.
CPU Technology
Years of development have gone into improving the architecture of the central
processing unit, with the main aim of improving performance. Two competing architectures,
CISC and RISC, were developed for this purpose, and different processors conformed to
each one. Both had their strengths and weaknesses, and as such also had supporters and
detractors.
In the CISC (Complex Instruction Set Computing) approach, performance was improved by
simplifying the work of program compilers: the range of more advanced instructions
available meant that fewer refinements had to be made during compilation. However, the
complexity of the processor hardware and architecture that results can make such chips
difficult to understand and program for, and also expensive to produce.
The RISC (Reduced Instruction Set Computing) approach changes the architecture so that
far fewer transistors are used to produce the processor, which makes RISC chips much
cheaper to produce than their CISC counterparts. The reduced instruction set also means
the processor can execute instructions more quickly, potentially allowing for greater speeds.
However, allowing only such simple instructions places a greater burden on the software
itself: fewer instructions in the instruction set means a greater emphasis on the efficient
writing of software with the instructions that are available. Supporters of the CISC
architecture point out that their processors offer good enough performance and cost to make
such efforts not worth the trouble.
CISC vs. RISC
• Instruction set: CISC large (100 to 300); RISC small (100 or less)
• Addressing modes: CISC complex (8 to 20); RISC simple (4 or less)
• Instruction format: CISC specialised; RISC simple
• Code lengths: CISC variable; RISC fixed
• Execution cycles: CISC variable; RISC standard for most
• Cost / CPU complexity: CISC higher; RISC lower
Processor Modes
All Intel and Intel-compatible processors from the 386 on up can run in several modes.
Processor modes refer to the various operating environments and affect the instructions and
capabilities of the chip. The processor mode controls how the processor sees and manages
the system memory and the tasks that use it.
Later processors such as the 286 could run the same 16-bit instructions as the original 8088,
but much faster. In other words, the 286 was fully compatible with the original 8088 and
could run all 16-bit software just the same as an 8088, but, of course, that software would
run faster. The 16-bit instruction mode of the 8088 and 286 processors has become known
as real mode. All software running in real mode must use only 16-bit instructions and live
within the 20-bit (1MB) memory architecture it supports. Software of this type is usually
single-tasking—that is, only one program can run at a time. No built-in protection exists to
keep one program from overwriting another program or even the OS in memory. Therefore,
if more than one program is running, one of them could bring the entire system to a crashing
halt.
Knowing that new OSs and applications—which take advantage of the 32-bit protected
mode—would take some time to develop, Intel wisely built a backward-compatible real mode
into the 386. That enabled it to run unmodified 16-bit OSs and applications. It ran them quite
well—much more quickly than any previous chip. For most people, that was enough. They
did not necessarily want new 32-bit software; they just wanted their existing 16-bit software
to run more quickly. Unfortunately, that meant the chip was never running in the 32-bit
protected mode, and all the features of that capability were being ignored.
When a 386 or later processor is running DOS (real mode), it acts like a “Turbo 8088,” which
means the processor has the advantage of speed in running any 16-bit programs; it
otherwise can use only the 16-bit instructions and access memory within the same 1MB
memory map of the original 8088. Therefore, if you have a system with a current 32-bit or
64-bit processor running Windows 3.x or DOS, you are effectively using only the first
megabyte of memory, leaving the rest of the RAM largely unused!
New OSs and applications that ran in the 32-bit protected mode of the modern processors
were needed. Being stubborn, we as users resisted all the initial attempts at being switched
over to a 32-bit environment. People are resistant to change and are sometimes more
content with running older software more quickly than with adopting new software with new
features. I’ll be the first one to admit that I was (and still am) one of those stubborn users
myself!
Because of this resistance, true 32-bit OSs took quite a while before getting a mainstream
share in the PC marketplace. Windows XP was the first true 32-bit OS that became a true
mainstream product.
Note that any program running in a virtual real mode window can access up to only 1MB of
memory, which that program will believe is the first and only megabyte of memory in the
system. In other words, if you run a DOS application in a virtual real window, it will have a
640KB limitation on memory usage. That is because there is only 1MB of total RAM in a 16-
bit environment, and the upper 384KB is reserved for system use. The virtual real window
fully emulates an 8088 environment, so that aside from speed, the software runs as if it were
on an original real mode–only PC. Each virtual machine gets its own 1MB address space, an
image of the real hardware basic input/output system (BIOS) routines, and emulation of all
other registers and features found in real mode.
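The 1MB and 640KB limits described above fall directly out of the 8088's 20-bit address bus. A quick arithmetic check in Python:

```python
ADDRESS_BITS = 20
total = 2 ** ADDRESS_BITS            # bytes addressable with 20 address lines
assert total == 1024 ** 2            # exactly 1MB

conventional_kb = 640                # memory available to DOS programs
upper_kb = 384                       # upper memory reserved for system use
assert conventional_kb + upper_kb == 1024   # together they fill the 1MB map
print(f"Real mode: {total // 1024}KB total "
      f"({conventional_kb}KB conventional + {upper_kb}KB upper)")
```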
Virtual real mode is used when you use a DOS window to run a DOS or Windows 3.x 16-bit
program. When you start a DOS application, Windows creates a virtual DOS machine under
which it can run. One interesting thing to note is that all Intel and Intel-compatible (such as
AMD and VIA/Cyrix) processors power up in real mode. If you load a 32-bit OS, it
automatically switches the processor into 32-bit mode and takes control from there.
It’s also important to note that some 16-bit (DOS and Windows 3.x) applications misbehave
in a 32-bit environment, which means they do things that even virtual real mode does not
support.
Diagnostics software is a perfect example of this. Such software does not run properly in a
real mode (virtual real) window under Windows. In that case, you can still run your modern
system in the original no-frills real mode by booting to a DOS or Windows 9x/Me startup
floppy.
Although 16-bit DOS and “standard” DOS applications use real mode, special programs are
available that “extend” DOS and allow access to extended memory (over 1MB). These are
sometimes called DOS extenders and usually are included as part of any DOS or Windows
3.x software that uses them. The protocol that describes how to make DOS work in
protected mode is called DOS protected mode interface (DPMI).
Windows 3.x used DPMI to access extended memory for use with Windows 3.x applications.
It allowed these programs to use more memory even though they were still 16-bit programs.
DOS extenders are especially popular in DOS games because they enable them to access
much more of the system memory than the standard 1MB that most real mode programs can
address. These DOS extenders work by switching the processor in and out of real mode. In
the case of those that run under Windows, they use the DPMI interface built into Windows,
enabling them to share a portion of the system’s extended memory.
Another exception in real mode is that the first 64KB of extended memory is actually
accessible to the PC in real mode, despite the fact that it’s not supposed to be possible. This
is the result of a bug in the original IBM AT with respect to the 21st memory address line,
known as A20 (A0 is the first address line). By manipulating the A20 line, real mode software
can gain access to the first 64KB of extended memory, an area known as the high memory
area (HMA).
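Real mode addresses are formed as segment × 16 + offset, and the largest possible combination reaches just past the 1MB mark. A short sketch of the arithmetic (the function name is ours, used only for illustration):

```python
def physical_address(segment, offset):
    """Real mode address calculation: segment * 16 + offset."""
    return (segment << 4) + offset

ONE_MB = 0x100000
top = physical_address(0xFFFF, 0xFFFF)    # largest segment:offset pair
assert top == 0x10FFEF                    # overshoots the 1MB boundary

# Addresses 0x100000..0x10FFEF form the high memory area (HMA)
hma_size = top - ONE_MB + 1
assert hma_size == 65520                  # just 16 bytes short of a full 64KB
print(f"HMA: {hma_size} bytes above the 1MB line")
```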
Processors with 64-bit extension technology can run in real (8086) mode, IA-32 mode, or IA-
32e mode. IA-32 mode enables the processor to run in protected mode and virtual real
mode. IA-32e mode allows the processor to run in 64-bit mode and compatibility mode,
which means you can run both 64-bit and 32-bit applications simultaneously. IA-32e mode
includes two submodes:
• 64-bit mode—Enables a 64-bit OS to run 64-bit applications
• Compatibility mode—Enables a 64-bit OS to run most existing 32-bit software
IA-32e 64-bit mode is enabled by loading a 64-bit OS and is used by 64-bit applications. In
the 64-bit submode, the following new features are available:
• 64-bit linear memory addressing
• Physical memory support beyond 4GB (limited by the specific processor)
• Eight new general-purpose registers (GPRs)
• Eight new registers for streaming SIMD extensions (MMX, SSE, SSE2, and SSE3)
• 64-bit-wide GPRs and instruction pointers
IA-32e compatibility mode enables 32-bit and 16-bit applications to run under a 64-bit OS.
Unfortunately, legacy 16-bit programs that run in virtual real mode (that is, DOS programs)
are not supported and will not run, which is likely to be the biggest problem for many users.
Similar to 64-bit mode, compatibility mode is enabled by the OS on an individual code-segment basis,
which means 64-bit applications running in 64-bit mode can operate simultaneously with 32-
bit applications running in compatibility mode.
What we need to make all this work is a 64-bit OS and, more importantly, 64-bit drivers for
all our hardware to work under that OS. Although Microsoft released a 64-bit version of
Windows XP, few companies released 64-bit XP drivers. It wasn’t until Windows Vista and
especially Windows 7 x64 versions were released that 64-bit drivers became plentiful
enough that 64-bit hardware support was considered mainstream.
Note that Microsoft uses the term x64 to refer to processors that support either AMD64 or
EM64T because AMD and Intel’s extensions to the standard IA32 architecture are practically
identical and can be supported with a single version of Windows.
The physical memory limits for the 32-bit and 64-bit editions of Windows XP and later are
described below.
The major difference between 32-bit and 64-bit Windows is memory support—specifically,
breaking the 4GB barrier found in 32-bit Windows systems. 32-bit versions of Windows
support up to 4GB of physical memory, with up to 2GB of dedicated memory per process.
64-bit versions of Windows support up to 192GB of physical memory, with up to 4GB for
each 32-bit process and up to 8TB for each 64-bit process. Support for more memory means
applications can preload more data into memory, which the processor can access much
more quickly.
Note
Although 32-bit versions of Windows can support up to 4GB of RAM, applications cannot
access more than about 3.25GB of RAM. The remainder of the address space is used by
video cards, the system ROM, integrated PCI devices, PCI cards, and APICs.
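The 4GB barrier and the roughly 3.25GB usable figure are simple consequences of 32-bit addressing. A quick check (the 0.75GB reserved amount used here is an approximation; the exact figure varies with installed hardware):

```python
GB = 1024 ** 3
addressable_32 = 2 ** 32
assert addressable_32 == 4 * GB            # the 4GB barrier of 32-bit Windows

per_process_32 = 2 ** 31                   # default 2GB user address space
assert per_process_32 == 2 * GB

# Roughly 0.75GB of the 4GB map is claimed by video memory, system ROM,
# and PCI devices, leaving about 3.25GB visible as RAM
usable = 4 * GB - int(0.75 * GB)
print(f"approx usable RAM under 32-bit Windows: {usable / GB:.2f}GB")
```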
64-bit Windows runs 32-bit Windows applications with no problems, but it does not run 16-bit
Windows, DOS applications, or any other programs that run in virtual real mode. Drivers are
another big problem. 32-bit processes cannot load 64-bit dynamic link libraries (DLLs), and
64-bit processes cannot load 32-bit DLLs. This essentially means that, for all the devices you
have connected to your system, you need both 32-bit and 64-bit drivers for them to work.
Acquiring 64-bit drivers for older devices or devices that are no longer supported can be
difficult or impossible. Before installing a 64-bit version of Windows, be sure to check with
the vendors of your internal and add-on hardware for 64-bit drivers.
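Before chasing 64-bit drivers, it helps to confirm what you are actually running. One quick check from Python, shown as a sketch (the strings platform.machine() returns, such as "AMD64" or "x86_64", vary by OS):

```python
import platform
import struct
import sys

# Pointer width of the running interpreter: 32 or 64 bits
pointer_bits = struct.calcsize("P") * 8
assert pointer_bits in (32, 64)

# True on a 64-bit Python build; equivalent to the pointer-width check
is_64bit_python = sys.maxsize > 2 ** 32
assert is_64bit_python == (pointer_bits == 64)

# Machine architecture as the OS reports it, e.g. "AMD64" or "x86_64"
print(platform.machine(), f"({pointer_bits}-bit interpreter)")
```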
Tip
If you cannot find 64-bit drivers designed for Windows Vista or Windows 7, look for 64-bit
drivers for Windows XP x64 edition. These drivers often work very well with later 64-bit
versions of Windows.
Although vendors have ramped up their development of 64-bit software and drivers, you
should still keep all the memory size, software, and driver issues in mind when considering
the transition from 32-bit to 64-bit technology. The transition from 32-bit hardware to
mainstream 32-bit computing took 16 years. The first 64-bit PC processor was released in
2003, and 64-bit computing really didn’t become mainstream until the release of Windows 7
in late 2009.
CPU Sockets/Slots
Intel and AMD have created a set of socket and slot designs for their processors. Each
socket and slot is designed to support a different range of original and upgrade processors.
[1] Non-overdrive DX4 or AMD 5x86 can also be supported with the addition of an
aftermarket 3.3V voltage-regulator adapter.
[2] Socket 6 was a paper standard only and was never actually implemented in any systems.
Sockets 1, 2, 3, and 6 are 486 processor sockets and are shown together so you can see
the overall size comparisons and pin arrangements between these sockets. Sockets 4, 5, 7,
and 8 are Pentium and Pentium Pro processor sockets and are shown together so you can
see the overall size comparisons and pin arrangements between these sockets.
Sockets and processors use different methods to make the contacts between them. Here is
a list of the more important methods:
i. A pin grid array (PGA) socket has holes aligned in uniform rows around the socket
to receive the pins on the bottom of the processor. Early Intel processors used PGA
sockets, but they caused problems because the small delicate pins on the processor
were easily bent as the processor was installed in the socket. Some newer Intel
mobile processors, including the Second Generation Core i3, Core i5, and Core i7
processors, use the PGA988 socket or the FCPGA988 socket in laptops.
ii. A land grid array (LGA) socket has blunt protruding pins on the socket that connect
with lands or pads on the bottom of the processor. The first LGA socket was the
LGA775 socket. It has 775 pins and is shown with the socket lever and top open in
Figure 4-6. Another LGA socket is the LGA1366 shown in Figure 4-7. LGA sockets
generally give better contacts than PGA sockets, and the processor doesn’t have the
delicate pins so easily damaged during an installation.
iii. Some sockets can handle a processor using a flip-chip land grid array (FCLGA)
processor package or a flip chip pin grid array (FCPGA) package. The chip is
flipped over so that the top of the chip is on the bottom and makes contact with the
socket. The LGA1155 socket has a flip chip version, which is called the FCLGA1155
socket. The two sockets are not compatible.
iv. A staggered pin grid array (SPGA) socket has pins staggered over the socket to
squeeze more pins into a small space.
v. A ball grid array (BGA) connection is not really a socket. The processor is soldered
to the motherboard, and the two are always purchased as a unit. For example, the
little Atom processors often use this technology with a Mini-ITX motherboard in low-
end computers or home theater systems.
Read the motherboard and CPU documentation very carefully to be sure that the CPU and
socket match because, as is evident in the table above, there are many versions of PGA
sockets. Furthermore, the CPU socket on the motherboard will usually have a mechanism to
make it easier to install the CPU without damaging pins. The most common method involves
a lever on the side of the socket that you raise to open the socket.
To use this type of socket, commonly called a zero insertion force (ZIF) socket, position the
CPU with all pins inserted in the matching socket holes. Then close the lever, which lets the
socket contact each of the CPU’s pins. In all cases, do not count on these simple
instructions, but follow those provided in the manuals that come with the motherboard and
CPU.
A+ Exam Tip: The A+ 220-801 exam expects you to know about Intel LGA sockets,
including the 775, 1155, 1156, and 1366 LGA sockets.
Socket 478
Socket 478 is a ZIF-type socket for the Pentium 4 and Celeron 4 (Celerons based on the
Pentium 4 core) introduced in October 2001. It was specially designed to support additional
pins for future Pentium 4 processors and speeds over 2GHz. The heatsink mounting is
different from the previous Socket 423, allowing larger heat sinks to be attached to the CPU.
Socket 478 supports a 400MHz, 533MHz, or 800MHz processor bus that connects the
processor to the MCH, which is the main part of the motherboard chipset. Socket 478 uses a
heatsink attachment method that clips the heatsink directly to the motherboard, and not the
CPU socket or chassis (as with Socket 423). Therefore, any standard chassis can be used,
and the special standoffs used by Socket 423 boards are not required. This heatsink
attachment allows for a much greater clamping load between the heatsink and processor,
which aids cooling.
Socket 478 processors use five VID pins to signal the VRM built into the motherboard to
deliver the correct voltage for the particular CPU you install. This makes the voltage
selection automatic.
Socket LGA775
Socket LGA775 (also called Socket T) is used by the Core 2 Duo/Quad processors, the
latest versions of the Intel Pentium 4 Prescott processor and the Pentium D and Pentium
Extreme Edition processors. Some versions of the Celeron and Celeron D also use Socket
LGA775. Socket LGA775, unlike earlier Intel processor sockets, uses a land grid array
format, so the pins are on the socket, rather than the processor.
LGA uses gold pads (called lands) on the bottom of the processor to replace the pins used in
PGA packages. It allows for much greater clamping forces via a load plate with a locking
lever, with greater stability and improved thermal transfer (better cooling). The first LGA
processors were the Pentium II and Celeron processors in 1997; in those processors, an
LGA chip was soldered on the Slot-1 cartridge. LGA is a recycled version of what was
previously called leadless chip carrier (LCC) packaging. This was used way back on the 286
processor in 1984, and it had gold lands around the edge only. (There were far fewer pins
back then.) In other ways, LGA is simply a modified version of ball grid array (BGA), with
gold lands replacing the solder balls, making it more suitable for socketed (rather than
soldered) applications.
Socket LGA1155
Socket LGA1155 (also known as Socket H2) was introduced in January 2011 to support
Intel’s Sandy Bridge (second-generation) Core i Series processors, which now include Turbo
Boost overclocking. Socket LGA1155 uses a land grid array format, so the pins are on the
socket, rather than the processor. Socket LGA1155 uses the same cover plate as Socket
1156, but is not interchangeable with it.
Socket LGA1156
Socket LGA1156 (also known as Socket H) was introduced in September 2009 and was
designed to support Intel Core i Series processors featuring an integrated chipset north
bridge, including a dual-channel DDR3 memory controller and optional integrated graphics.
Socket LGA1156 uses a land grid array format, so the pins are on the socket, rather than the
processor.
Because the processor includes the chipset north bridge, Socket LGA1156 is designed to
interface between a processor and a Platform Controller Hub (PCH), which is the new name
used for the south bridge component in supporting 5x series chipsets. The LGA1156
interface includes the following:
• PCI Express x16 v2.0—For connection to either a single PCIe x16 slot, or two
PCIe x8 slots supporting video cards.
• DMI (Direct Media Interface)—For data transfer between the processor and the
PCH. DMI in this case is essentially a modified PCI Express x4 v2.0 connection, with
a bandwidth of 2GBps.
When processors with integrated graphics are used, the Flexible Display Interface carries
digital display data from the GPU in the processor to the display interface circuitry in the
PCH. Depending on the motherboard, the display interface can support DisplayPort, High
Definition Multimedia Interface (HDMI), Digital Visual Interface (DVI), or Video Graphics
Array (VGA) connectors.
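The 2GBps DMI figure quoted above follows from PCI Express v2.0 lane arithmetic: 5GT/s per lane with 8b/10b encoding yields 500MBps per lane in each direction. A quick check:

```python
lanes = 4                         # DMI is essentially a PCIe x4 v2.0 link
transfers_per_s = 5_000_000_000   # PCIe v2.0: 5GT/s per lane
data_bits, line_bits = 8, 10      # 8b/10b encoding: 8 data bits per 10 on the wire
bytes_per_lane = transfers_per_s * data_bits // line_bits // 8
assert bytes_per_lane == 500_000_000            # 500MBps per lane per direction
assert lanes * bytes_per_lane == 2_000_000_000  # 2GBps, the DMI bandwidth
```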
Socket LGA1366
Socket LGA1366 (also known as Socket B) was introduced in November 2008 to support
high-end Intel Core i Series processors, including an integrated triple-channel DDR3
memory controller, but which also requires an external chipset north bridge, in this case
called an I/O Hub (IOH). Socket LGA1366 uses a land grid array format, so the pins are on
the socket, rather than the processor.
Socket LGA1366 is designed to interface between a processor and an IOH, which is the new
name used for the north bridge component in supporting 5x series chipsets. The LGA1366
interface includes the following:
• QPI (Quick Path Interconnect)—For data transfer between the processor and the
IOH. QPI transfers 2 bytes per cycle at either 4.8 or 6.4GHz, resulting in a bandwidth
of 9.6 or 12.8GBps.
• DDR3 triple-channel—For direct connection between the memory controller
integrated into the processor and DDR3 SDRAM modules in a triple-channel
configuration.
LGA1366 is designed for high-end PC, workstation, or server use. It supports configurations
with multiple processors.
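The QPI bandwidth figures are simply transfer width multiplied by clock rate; a quick check of the numbers quoted above:

```python
bytes_per_cycle = 2                      # QPI transfers 2 bytes per cycle
for clock_ghz, expected_gbps in ((4.8, 9.6), (6.4, 12.8)):
    bandwidth_gbps = bytes_per_cycle * clock_ghz
    assert abs(bandwidth_gbps - expected_gbps) < 1e-9
    print(f"QPI at {clock_ghz}GHz: {bandwidth_gbps:.1f}GBps")
```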
Socket 940 is used with the Socket 940 version of the AMD Athlon 64 FX, as well as most
AMD Opteron processors. Motherboards using this socket support only registered DDR
SDRAM modules in dual-channel mode. Because the pin arrangement is different, Socket
939 processors do not work in Socket 940, and vice versa.
Socket 939: The cutout corner and triangle at the lower left indicate pin 1.
Socket 940: The cutout corner and triangle at the lower left indicate pin 1.
Socket AM2/AM2+/AM3/AM3+
In May 2006, AMD introduced processors that use a new socket, called Socket AM2. It was
the first replacement for the confusing array of Socket 754, Socket 939, and Socket 940 form
factors for the Athlon 64, Athlon 64 FX, and Athlon 64 X2 processors.
Socket AM2/AM2+. The arrow (triangle) at the lower left indicates pin 1.
Socket AM2+ is an upgrade to Socket AM2 that was released in November 2007. Although
Sockets AM2 and AM2+ are physically the same, Socket AM2+ adds support for split power
planes and HyperTransport 3.0, allowing for FSB speeds of up to 2.6GHz. Socket AM2+
chips are backward compatible with Socket AM2 motherboards, but only at reduced
HyperTransport 2.0 FSB speeds. Socket AM2 processors can technically work in Socket
AM2+ motherboards; however, this also requires BIOS support, which is not present in all
motherboards.
Socket AM3 was introduced in February 2009, primarily to support processors with
integrated DDR3 memory controllers such as the Phenom II. Besides adding support for
DDR3 memory, Socket AM3 has 941 pins in a modified key pin configuration that physically
prevents Socket AM2 or AM2+ processors from being inserted.
Socket AM3. The arrow (triangle) at the lower left indicates pin 1.
Socket F (1207FX)
Socket F (also called 1207FX) was introduced by AMD in August 2006 for its Opteron line of
server processors. Socket F is AMD’s first land grid array (LGA) socket, similar to Intel’s
Socket LGA775. It features 1,207 pins in a 35-by-35 grid, with the pins in the socket instead
of on the processor. Socket F normally appears on motherboards in pairs because it is
designed for multiprocessor server systems.
Processor Cooling
CPUs produce heat, and the more powerful the CPU the more heat it produces. Heat is an
enemy to the PC in general as it causes problems such as random reboots. Methods of
cooling the CPU and in turn the overall interior of the case have evolved with the increasing
need to remove this heat. Options that are used are covered in this section.
Several approaches are used. Heat sinks, cooling fans, and cooling fins cool by conducting
heat away from components and dissipating it into the air. Thermoelectric coolers (called
Peltier components) instead pump heat electrically from their cold side to their hot side.
Liquid cooling, on the other hand, cools by circulating a cool liquid around components and
carrying the heat away from them.
Heat sink
The cooling can be either active or passive. A passive heat sink is a block of heat-conductive
material that sits close to the CPU and wicks away the heat into the air. An active heat sink
contains a fan that pulls the hot air away from the CPU. The heat sink sits atop the CPU, in
many cases obscuring it from view entirely.
Fans
Active heat sinks have a fan that sits atop the heat sink. It pulls the heat out of the heat sink
and away from it. Then the case fan shunts the heat out the back or side of the case.
Thermal paste
Most passive heat sinks are attached to the CPU using a glue-like thermal compound (called
thermal glue, thermal compound, or thermal paste). This makes the connection between the
heat sink and the CPU more seamless and direct. Thermal compound can be used on active
heat sinks too, but generally it isn’t because of the possibility that the fan may stop working
and need to be replaced. Thermal compound improves thermal transfer by eliminating tiny
air pockets between the heat sink and CPU (or other device like a north bridge or video
chipset). Thermal compound provides both improved thermal transfer and adds bonding for
heat sinks when there are no mounting holes to clamp the heat sink to the device to be
cooled.
Liquid-based
Liquid-cooled cases are available that use circulating water rather than fans to keep
components cool. These cases are typically more expensive than standard ones and may be
more difficult for a technician untrained in this technology to work on, but they result in an
almost completely silent system.
Issues with liquid-cooled machines can include problems with hoses or fittings, the pump, or
the coolant. A failure of the pump can keep the liquid from flowing and cause the system to
overheat. A liquid-cooled system should also be checked every so often for leaks or
corrosion on the hoses and fittings, and the reservoir should be examined to make sure it is
full and does not contain contaminants. Liquid cooling is more expensive, less noisy, and
more efficient than Peltier components.
The cooler is bracketed to the motherboard using a wire or plastic clip. A creamlike thermal
compound is placed between the bottom of the cooler heatsink and the top of the processor.
This compound eliminates air pockets, helping to draw heat off the processor. The thermal
compound transmits heat better than air and makes an airtight connection between the fan
and the processor. When processors and coolers are boxed together, the cooler heatsink
might have thermal compound already stuck to the bottom.
Pre-applied thermal compound
Active Heatsinks
To ensure a constant flow of air and more consistent performance, most heatsinks
incorporate fans so they don’t have to rely on the airflow within the system. Heatsinks with
fans are referred to as active heatsinks (see Figure below). Active heatsinks have a power
connection. Older ones often used a spare disk drive power connector, but most recent
heatsinks plug into dedicated heatsink power connections common to most motherboards.
The Socket 478 design uses two cams to engage the heatsink clips and place the system
under tension. The force generated is 75 lbs., which produces a noticeable bow in the
motherboard underneath the processor. This bow is normal, and the motherboard is
designed to accommodate it. The high degree of force is necessary to prevent the heavier
heatsinks from pulling up on the processor during movement or shipment of the system, and
it ensures a good bond for the thermal interface material (thermal grease).
The figure below shows the design used on most Socket AM3+, AM3, AM2+, AM2, 940, 939,
and 754 processors, featuring a cam and clip assembly on one side. Similar to the Socket
478 double-cam assembly, this design puts 75 lbs. of force between the heatsink and the
processor. Bowing of the motherboard is prevented in this design by the use of a special
stiffening plate (also called a backing plate) underneath the motherboard. The heatsink
retention frame actually attaches to this plate through the board. The stiffening plate and
retention frame normally come with the motherboard, but the heatsink with fan and the clip
and cam assembly come with the processor.
Recently, however, Intel has recommended using a liquid cooling system with its processors
that use the LGA2011 socket on a motherboard. Liquid cooling systems tend to run quieter
than other cooling methods. They might include a PCI card that has a power supply,
temperature sensor, and processor to control the cooler. Using liquid cooling, a small pump
sits inside the computer case, and tubes move liquid around components and then away
from them to a place where fans can cool the liquid, similar to how a car radiator works. The
figure below shows one liquid cooling system where the liquid is cooled by fans sitting inside
a large case. Sometimes, however, the liquid is pumped outside the case, where it is cooled.
Labs
When replacing a processor in an existing system, power down the system, unplug the
power cord, press the power button to drain the system of power, and open the case. Follow
these steps to install the processor and cooler using socket LGA1155.
1. Read all directions in the motherboard user guide and carefully follow them in order.
2. Use a ground bracelet or antistatic gloves to protect the processor, motherboard, and
other components against ESD.
3. Open the socket by pushing down on the socket lever and gently pushing it away
from the socket to lift the lever.
4. As you fully open the socket lever, the socket load plate opens.
5. Remove the socket protective cover. Keep this cover in a safe place. If you ever
remove the processor, put the cover back in the socket to protect the socket. While
the socket is exposed, be very careful to not touch the pins in the socket.
6. Remove the protective cover from the processor. You can see the processor in this
clear plastic cover on the right side of the figure below, which also shows the open
socket. While the processor contacts are exposed, take extreme care to not touch
the bottom of the processor. Hold it only at its edges. (It’s best to use antistatic
gloves as you work, but the gloves make it difficult to handle the processor.) Put the
processor cover in a safe place and use it to protect the processor if you ever remove
the processor from the socket.
7. Hold the processor with your index finger and thumb and orient the processor so that
the gold triangle on the corner of the processor lines up with the right-angle mark
embedded on the motherboard just outside a corner of the socket. Gently lower the
processor straight down into the socket. Don’t allow the processor to tilt, slide, or shift
as you put it in the socket. To protect the pads, it needs to go straight down into the
socket.
Gold triangle and right-angle mark
8. Check carefully to make sure the processor is aligned correctly in the socket. Closing
the socket without the processor fully seated can destroy the socket. The figure
below shows the processor fully seated in the socket. Close the socket load plate so
that it catches under the screw head at the front of the socket.
Socket screw
9. Push down on the lever and gently return it to its locked position.
Installing a cooler
Before installing a cooler, read the directions carefully and make sure you understand them.
Clips that hold the fan and heat sink to the processor frame or housing are sometimes
difficult to install. Follow these general steps:
1. The motherboard has four holes to anchor the cooler. You can see them labeled in
Figure 5-17. Examine the cooler posts that fit over these holes and the clips, screws,
or wires that will hold the cooler firmly in place. Make sure you understand how this
mechanism works.
2. If the cooler has thermal compound preapplied, remove the plastic from the
compound. If the cooler does not have thermal compound applied, put a small dot of
compound (about the size of a small pea) in the center of the processor (see Figure
5-17). When the cooler is attached and the processor is running, the compound
spreads over the surface. Don’t use too much—just enough to later create a thin
layer. If you use too much compound, it can slide off the housing and damage the
processor or circuits on the motherboard. To get just the right amount, you can buy
individual packets that each contain a single application of the thermal compound.
[Figure 5-17: the four motherboard holes used to attach the cooler.]
3. Verify the locking pins on the cooler are turned as far as they will go in a
counterclockwise direction. (Make sure the pins don’t protrude into the hollow plastic
posts that go down into the motherboard holes.) Align the cooler over the processor
so that all four posts fit into the four holes on the motherboard and the fan power
cord can reach the fan header on the motherboard.
4. Push down on each locking pin until you hear it pop into the hole. To help keep the
cooler balanced and in position, push down two opposite pins and then push the
remaining two pins in place. Using a flathead screwdriver, turn the locking pin
clockwise to secure it. (Later, if you need to remove the cooler, turn each locking pin
counterclockwise to release it from the hole.)
Note: If you later notice the CPU fan is running far too often, you might need to
tighten the connection between the cooler and the processor.
5. Connect the power cord from the cooler fan to the motherboard power connector
near the processor, as shown below.
After the processor and cooler are installed and the motherboard is installed in the case,
make sure cables and cords don’t obstruct fans or airflow, especially airflow around the
processor and video card. Use cable ties to tie cords and cables up and out of the way.
Make one last check to verify that all power connectors are in place and that other cords and
cables connected to the motherboard are properly seated. You are now ready to plug the
system back in, turn it on, and verify that all is working. If the power comes on (you hear the fan spinning
and see lights), but the system fails to work, most likely the processor is not seated solidly in
the socket or some power cord has not yet been connected or is not solidly connected.
Turn everything off, unplug the power cord, press the power button to drain power, open the
case, and recheck your installation.
If the system comes up and begins the boot process, but suddenly turns off before the boot
is complete, most likely the processor is overheating because the cooler is not installed
correctly. Turn everything off, unplug the power cord, press the power button to drain power,
open the case, and verify the cooler is securely seated and connected.
After the system is up and running, you can check BIOS setup to verify that the system
recognized the processor correctly.
Exam Essentials
Identify the CPU socket types you may encounter. These include but are not limited to Intel
LGA 775, 1155, 1156, and 1366, as well as AMD 940, AM2, AM2+, AM3, AM3+, FM1, and
F.
Define the characteristics of CPUs. CPUs can differ based on speed, number of cores, cache
size and type, hyperthreading support, virtualization support, architecture (32-bit vs. 64-bit),
and integrated GPU support.
Understand the various options available to reduce CPU heat. These options include heat
sinks, fans, thermal paste, and liquid cooling.
1. What processor socket does your system motherboard use? How did you find this
information?
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
2. Identify the currently installed processor, including its brand, model, speed, and
other important characteristics. How did you find your information?
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
3. List three or more processors the board supports according to the motherboard
documentation or web site.
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
4. Search the web for three or more processors that would match this board. Save or
print three web pages showing the details and prices of a high-performing,
moderately performing, and low-performing processor the board supports.
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
………….
5. If your current processor fails, which processor would you recommend for this
system?
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
Explain your recommendation.
………………………………………………………………………………………………………
………………………………………………………………………………………………………
………………………………………………………………………………………………………
……………………………………………………………………………………..………………
PC MEMORY
Objectives
Introduction
Memory Types
RAM Technology
Memory Sockets
Operational Characteristics
Memory CMOS Settings
RAM compatibility Issues
Memory Failure
Labs
Introduction
Memory is the workspace of the computer’s processor. It is the temporary storage where the
programs and data being operated on by the processor must reside. Memory storage is
considered temporary because the data and programs remain in memory only as long as
the computer has electrical power and is not reset.
Hard drive space is “permanent” storage in which the data consists of magnetized spots
within the surface of the recording medium within the hard drive, and it remains there long
after you turn the power off. Hence, before shutdown or reset, any data that has been edited
should be saved to a permanent storage device (such as a hard disk) for use in the future.
The memory consists of computer chips in which the data resides, but only for as long as the
computer remains powered up. Furthermore, computers use several types of memory, each
with a different function and different physical form.
Memory Types
Typically, when people discuss memory, they are referring to random access memory, or
RAM, so called because data stored in RAM is accessible in any (random) order. Most of the
memory in a PC is RAM. However, some very important memory is read-only memory, or
ROM.
RAM also stores instructions about currently running applications. For example, when you
start a computer game, a large set of the game’s instructions (including how it works, how
the screen should look, which sounds must be generated) is loaded into memory. The
processor can retrieve these instructions much faster from RAM than it can from the hard
drive, where the game normally resides until you start to use it. Within certain limits, the
more information stored in memory, the faster the computer will run. In fact, one of the most
common computer upgrades is to increase the amount of RAM. The computer continually
reads, changes, and removes the information in RAM. It is also volatile, meaning that it
PC Memory Page 78
Chapter 4
cannot work without a steady supply of power, so when you turn your computer off, the
information in RAM is lost.
RAM
The RAM (random access memory) constitutes the internal memory of the CPU for storing
data, programs, and program results. It is read/write memory. Access time in RAM is
independent of the address of the word; that is, each storage location inside the memory is
as easy to reach as any other location and takes the same amount of time. Memory can be
reached at random and extremely fast, but it can also be quite expensive.
RAM is volatile, i.e., data stored in it is lost when we switch off the computer or if there is a
power failure. Hence, a backup uninterruptible power system (UPS) is often used with
computers. RAM is small, both in terms of its physical size and in the amount of data it can
hold.
RAM Technologies
A+ Exam Tip: The A+ 220-801 exam expects you to know the purposes and
characteristics of the following memory technologies: DRAM, SRAM, SDRAM, DDR,
DDR2, DDR3, and Rambus.
SRAM
Static RAM (SRAM) gets its name because the memory retains its contents as long as
power remains applied, without needing to be refreshed. Like all RAM, however, it is volatile:
its data is lost when power is removed. SRAM cells use a matrix of six transistors and no
capacitors. Because the transistors do not leak charge, SRAM does not have to be
refreshed on a regular basis. The extra transistors per cell mean SRAM needs more chips
than DRAM for the same amount of storage space, making manufacturing costs higher.
Compared with DRAM, SRAM has these drawbacks:
Large size
Expensive
High power consumption
DRAM
Dynamic RAM (DRAM) stores each bit as a charge in a tiny capacitor that slowly leaks.
This design makes it necessary for the DRAM chip to receive a constant refresh from
the computer to prevent the capacitors from losing their charge. This constant refresh makes
DRAM slower than SRAM and causes the DRAM chip to draw more power from the
computer than an SRAM chip.
Because of its low cost and high capacity, manufacturers use DRAM as “main” memory in
the computer. The term DRAM typically describes any type of memory that uses the
technology just described. However, the first DRAM chips were very slow (~80–90 ns), so
faster variants have been developed.
SDRAM
Synchronous dynamic RAM, or SDRAM, runs at the speed of the system bus (up to 100–
133 MHz). However, as faster system bus speeds developed, faster types of DRAM, such as
RDRAM, DDR, and DDR2, replaced SDRAM. SDRAM is used only in systems that support
it, and that have 168-pin DIMM sockets on the motherboard. The 168-pin SDRAM module
has two notches.
RDRAM
The Pentium 4 has a much faster front side bus than previous processors, requiring faster
RAM. One answer to this need is RDRAM (Rambus Dynamic RAM), which gets its name
from the company that developed it—Rambus, Inc. RDRAM uses a special Rambus channel
that has a data transfer rate of 800 MHz, and one can double the channel width, resulting in
a 1.6 GHz data transfer.
You can only use RDRAM in computers with special RDRAM channels and slots, called
RIMM slots that will not accept other types of memory sticks. RDRAM RIMM sticks for
desktops have 184, 232, or 326 pins. The difference is in the data bus width of the RAM
stick. 16-bit RIMMs have 184 pins; 32-bit RIMMS have 232 pins, but 64-bit RIMM uses a
326-pin connector.
DDR
Double-data rate (DDR) SDRAM transfers data on both the rising and falling edges of the
clock, doubling the rate at which standard SDRAM can process data. That means DDR is
roughly twice as fast as standard SDRAM.
The JEDEC Solid State Technology Association (once known as the Joint Electron Device
Engineering Council [JEDEC]) defines the standards for DDR SDRAM. There are two sets of
standards involved here—one for the module (the “stick”) and another for the chips that
populate the module. The module specifications include PC-1600, PC-2100, PC-2700, and
PC-3200. This new labeling refers to the total bandwidth of the memory, as opposed to the
old standard, which listed the speed rating (in MHz) of the SDRAM memory—in this case,
PC66, PC100, and PC133. The numeric value in the PC66, PC100, and PC133 refers to the
MHz speed at which the memory operates. A PC-3200 DDR SDRAM module populated with
DDR-400 chips can operate at 3.2 gigabytes per second. Each stick or module specification
pairs the stick with chips of a certain chip specification.
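The relationship between chip speed and module rating described above is simple multiplication: the module's PC rating is the chips' effective transfer rate times the 8-byte (64-bit) width of the stick. A minimal sketch (the function name is my own):

```python
def ddr_module_rating(effective_mt_per_s, bus_width_bits=64):
    """Peak DDR module bandwidth in MB/s.

    effective_mt_per_s: the chips' effective (double-pumped) transfer
    rate, e.g. 400 for DDR-400; bus_width_bits: stick width (64-bit DIMM).
    """
    return effective_mt_per_s * (bus_width_bits // 8)

# DDR-400 chips on a 64-bit stick -> 3,200 MB/s, sold as PC-3200
print(ddr_module_rating(400))  # 3200
# DDR-200 -> PC-1600; DDR-266 and DDR-333 round to PC-2100 and PC-2700
print(ddr_module_rating(200))  # 1600
```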
A stick of DDR memory is a 184-pin DIMM module with a notch on one end so that it can
only fit into the appropriate DIMM socket on a motherboard. It requires only 2.5 V compared
to SDRAM’s requirement of 3.3 V.
DDR2
Double-data-rate two (DDR2) SDRAM is replacing the original DDR standards, now referred
to as DDR1. DDR2 can handle faster clock rates than DDR1, beginning at 400 MHz. As with
DDR1, there are specifications for the chips, as well as the modules.
DDR2 sticks are only compatible with motherboards that use a special 240-pin DIMM socket.
The DDR2 DIMM stick notches are different from those in a DDR1 DIMM. A DDR2 DIMM
only requires 1.8 V compared to 2.5 V for DDR1. Manufacturers of motherboards and
processors were slow to switch to support for DDR2, mainly due to problems with excessive
heat. Once manufacturers solved the problems, they brought out compatible motherboards,
chip sets, and CPUs for DDR2.
DRDRAM
Direct Rambus DRAM (DRDRAM), named for Rambus, the company that designed it, is a
legacy proprietary SDRAM technology, sometimes called simply RDRAM, and is most
often associated with server platforms. Although other specifications preceded it, the first
motherboard DRDRAM model was known as PC800. As with non-DRDRAM specifications
that use this naming convention, PC800 specifies that, using a faster 400MHz actual clock
signal and double-pumping like DDR SDRAM, an effective frequency and FSB speed of
800MHz is created.
VRAM
Video RAM (VRAM) is a specialized type of memory used only with video adapters. The
video adapter is one of the computer’s busiest components, so, to keep up with video
requirements, many adapters have an on-board microprocessor and special video RAM. The
adapter can process requests independently of the CPU and then store its results in the
VRAM until the CPU retrieves it.
VRAM is fast, and the computer can simultaneously read from it and write to it. The result is
better and faster video performance. Because VRAM includes more circuitry than regular
DRAM, VRAM modules are slightly larger. The term video RAM refers to both a specific type
of memory and a generic term for all RAM used by the video adapter (much like the term
DRAM, which is often used to denote all types of memory that are dynamic). Faster versions
of video memory have been recently introduced, including WRAM, which includes a
technique for using video RAM to perform Windows-specific functions to speed up the OS.
Memory Sockets
By knowing a CPU’s address bus width, you will know the maximum system RAM capacity it
can access. The minimum amount of memory to install is best determined by considering the
requirements of the operating system to be installed and how the computer will be used. In
addition, by knowing the CPU’s data bus width, you will know how many RAM modules you
must install at a time to create a full memory bank. Before purchasing a motherboard,
ensure that it has the proper RAM and CPU slots, and that the CMOS settings are
appropriate for your needs.
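The first point above, that the address bus width caps the RAM a CPU can access, is a power-of-two calculation; a quick sketch (the function name is my own):

```python
def max_addressable_ram_bytes(address_bus_bits):
    """Maximum physical memory a CPU can address: 2^(address bus width) bytes."""
    return 2 ** address_bus_bits

# A 32-bit address bus gives a 4 GiB ceiling
print(max_addressable_ram_bytes(32) // 2**30)  # 4
# A 36-bit address bus (e.g. with PAE) gives a 64 GiB ceiling
print(max_addressable_ram_bytes(36) // 2**30)  # 64
```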
A motherboard must support both the technology and the form factor of a memory module,
such as SIMM, RIMM, or DIMM. The system must also support the data width of the memory
as well as its method of error correction. Today’s typical motherboard has DIMM or RIMM
memory sockets. If you work on older PCs with processors predating the Pentium 4, you
may see SIMM sockets. Your choice of memory modules will depend on what type of
memory the motherboard supports.
SIMM
This is an outdated technology, but SIMMs were once so common that you will still run into
them if you work on older PCs. Single inline memory module (SIMM) sockets were produced in
two sizes to accommodate either 30-pin or 72-pin SIMM memory modules. Thirty-pin SIMMs are
8-bit, meaning that data can be transferred into or out of the module 8 bits at a time. The
other form you may run into are the 72-pin SIMM sockets, which are 32 bits wide, slightly
shorter than DIMM sockets, and usually colored white with small metal clips.
DIMM
Dual Inline Memory Module (DIMM) sockets look similar to SIMM sockets but are longer and
often dark in color with plastic clips at each end. DIMM sockets for PCs come in three sizes:
168-pin for SDRAM sticks, 184-pin for DDR1 RAM, and 240-pin for DDR2 RAM sticks. You
do not have to install DIMMs in pairs.
SODIMM
Notebook computers and other computers that require much smaller components don’t use
standard RAM packages, such as the DIMM. Instead, they call for a much smaller memory
form factor, such as a small outline DIMM. SODIMMs are available in many physical
implementations, including the older 32-bit (72- and 100-pin) configuration and newer 64-bit
(144-pin SDR SDRAM, 200-pin DDR/DDR2, and 204-pin DDR3) configurations.
All 64-bit modules have a single keying notch. The 144-pin module's notch is slightly off-
center. Note that although the 200-pin SODIMMs for DDR and DDR2 have slightly different
keying, the difference is subtle enough that you must pay close attention to tell the two
apart. They are not interchangeable. Figure 1 shows an example of a 144-pin, 64-bit
SDR module. Figure 2 is a photo of a 200-pin DDR2 SODIMM.
RIMM
The Rambus Inline Memory (RIMM) socket is specifically for use with RDRAM. The
motherboard must have the special RIMM sockets and other support for this type of RAM. In
spite of having a 64-bit data bus that matches the bus width of most new processors, RIMMs
must be installed in pairs of equal capacity and speed because RDRAM has a dual-channel
architecture. This means that the specially designed Northbridge alternates between two
modules to increase the speed of data retrieval. You cannot leave unused RIMM sockets
empty.
Any unused socket pairs must have a specially designed pass-through device called a C-
RIMM (continuity RIMM). The C-RIMM must be installed into each unused RIMM socket in
the same manner in which the RIMM RDRAM is installed.
RIMM sockets look just like DIMM sockets but are keyed so that only RIMM modules can be
inserted. They are also proprietary, generally more expensive, and less common than
DIMMs. The 184-pin RIMMs with a 16-bit data path and the 232-pin 32-bit RIMMs are readily
available. The newest RIMMS are 64-bit RIMMs with 326-pin connectors.
NB: The word “single” in SIMM and the word “dual” in DIMM refer to the rows of connectors
on each side rather than to the number of sides that contain memory chips. For example,
each side of each pin (connector) on a DIMM has a separate function. However, on a SIMM,
each side of a pin shares the same function. This allows one DIMM to make up the 64-bit
data path.
A+ Exam Tip: Be sure you understand DIMMs and RIMMs because they are exam objectives.
Operational Characteristics
Among the operational characteristics of DRAM modules, regardless of the type, are
memory banks, error checking methods, and single-sided versus double-sided.
Memory Banks
The bit width of a memory module is very important; the term refers to how much information
the processor can access from or write to memory in a single cycle. A memory bank
represents the number of memory modules required to match the data bus width of the
processor. For example, consider a computer with a Pentium 4 processor with a 64-bit data
bus. You want to install 4 GB of RAM. You could add two 2 GB memory modules or four 1
GB memory modules. Each 64-bit memory module would fill a single memory bank.
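The bank arithmetic above can be expressed directly: the number of modules required to fill one bank is the processor's data bus width divided by the module's data width (the function name is my own):

```python
def modules_per_bank(cpu_data_bus_bits, module_width_bits):
    """Number of memory modules needed to fill one memory bank."""
    return cpu_data_bus_bits // module_width_bits

# Pentium 4 with a 64-bit data bus:
print(modules_per_bank(64, 64))  # one 64-bit DIMM fills a bank -> 1
print(modules_per_bank(64, 32))  # 72-pin (32-bit) SIMMs -> 2 per bank
print(modules_per_bank(64, 8))   # 30-pin (8-bit) SIMMs -> 8 per bank
```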
Parity
In parity, every eight-bit byte of data is accompanied by a ninth bit (the parity bit), which is
used to determine the presence of errors in the data. There are two types of parity: odd and
even.
In odd parity, the parity bit is used to ensure that the total number of 1s in the data stream is
odd. For example, suppose a byte consists of the following data: 11010010. The number of
1s in this data is 4, an even number. The ninth bit will then be a 1, to ensure that the total
number of 1s is odd: 110100101. Even parity is the opposite of odd parity; it ensures that the
total number of 1s is even. For example, suppose a byte consists of the following data:
11001011. The ninth bit would then be a 1 to ensure that the total number of 1s is 6, an even
number.
Parity is not failure-proof. Suppose the preceding data stream contained two errors:
101100101. If the computer was using odd parity, the error would slip through (try it; count
the 1s). However, creating parity is quick and does not inhibit memory access time the way a
more sophisticated error-checking routine would.
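The parity scheme just described is easy to sketch in code. The following illustration (function names are my own) reproduces the examples from the text, including the two-bit error that odd parity fails to catch:

```python
def parity_bit(data_bits, mode="odd"):
    """Compute the ninth (parity) bit for an 8-bit byte given as a bit string."""
    ones = data_bits.count("1")
    if mode == "odd":
        return "1" if ones % 2 == 0 else "0"  # make the total count of 1s odd
    return "1" if ones % 2 == 1 else "0"      # make the total count of 1s even

def check_odd_parity(stream):
    """True if a 9-bit stream (data + parity) contains an odd number of 1s."""
    return stream.count("1") % 2 == 1

# The example from the text: 11010010 has four 1s, so odd parity adds a 1.
print(parity_bit("11010010", "odd"))   # '1'
print(check_odd_parity("110100101"))   # True - stream passes

# A single-bit error is caught...
print(check_odd_parity("100100101"))   # False - error detected
# ...but the two-bit error from the text slips through:
print(check_odd_parity("101100101"))   # True - error undetected!
```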
DIMM is 64 bits wide, but parity-checking DIMM has 8 extra bits (1 parity bit for every 8 data
bits). Therefore, a DIMM with parity is 64 + 8 = 72 bits wide. While parity is not often used in
memory modules, there is an easy way to determine if a memory module is using parity
because it will have an odd number of chips. A non-parity memory module will have an even
number of chips. This is true, even if the module only has two or three chips in total.
If your system supports parity, you must use parity memory modules. You cannot use
memory with parity if your system does not support it. The motherboard manual will define
the memory requirements.
The majority of today’s computer systems do not support memory that uses parity. However,
other computing devices use parity. One example of parity use is in some special drive
arrays, called RAID 5, mostly found in servers. Therefore, it is useful to understand the
basics of parity.
A+ Exam Tip: Memory parity versus non-parity is an exam objective, so be sure that you
understand the concepts involved.
ECC
Error-correcting code (ECC) is a more sophisticated method of error checking than parity,
although it also adds an extra bit per byte to a stick of RAM. Software in the system memory
controller uses the extra bits to both detect and correct errors. There are several algorithms
used in ECC.
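The text does not specify which algorithm a given memory controller uses. As an illustration only, the following sketch uses a minimal Hamming(7,4) code, which, like memory ECC, spends extra check bits to both locate and correct a single flipped bit:

```python
def hamming74_encode(d):
    """Encode 4 data bits (list of 0/1) into a 7-bit Hamming codeword.
    Layout: [p1, p2, d1, p4, d2, d3, d4] at positions 1..7."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_correct(code):
    """Return (corrected codeword, 1-based error position or 0 if clean)."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s4   # syndrome gives the bad bit's position
    if pos:
        c[pos - 1] ^= 1          # flip the bad bit back
    return c, pos

word = hamming74_encode([1, 0, 1, 1])
damaged = list(word)
damaged[4] ^= 1                  # flip one bit "in transit"
fixed, where = hamming74_correct(damaged)
print(where)          # 5 - the 1-based position of the flipped bit
print(fixed == word)  # True - the error was corrected
```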
Memory CMOS Settings
You do not have to configure RAM capacities. Simply install RAM in the computer and the
BIOS automatically counts it at startup. However, you can use the CMOS settings to enable
or disable the memory’s ability to use parity error checking (although you can use this setting
only if the RAM supports parity).
Warning: If you enable the CMOS parity option but are not using memory that supports
parity, the computer will not boot properly.
RAM Compatibility Issues
The speed of EDO and FPM memory modules is measured in nanoseconds (ns). The
memory module with the lowest ns rating is the fastest. EDO and FPM also have
compatibility issues with the system bus.
Faster DRAM can be installed on a slower system bus and it will not affect performance. The
system will operate at the bus speed even if faster memory is installed. However, a slower
DRAM module cannot be installed in a system that requires faster DRAM, and modules with
different clock ratings should not be mixed.
Legacy machines might require parity RAM. Parity RAM performs error-checking
calculations for every eighth bit of data stored. Today, RAM is non-parity and does not
perform parity calculations on data. Never mix parity and non-parity SIMMs. For older
systems, the setup utility has an option for enabling or disabling RAM parity checking. Also,
error correction code (ECC) and non-ECC RAM cannot be mixed. ECC has the ability to
correct data errors and is typically found in file servers. The following scenario helps to
illustrate an issue with RAM.
Memory Failure
Modern computers run software applications that are very memory intensive. These
programs continually put stress on the memory modules, potentially causing them to fail.
There are several common symptoms for failed memory:
HIMEM.SYS has problems loading.
Computer appears inoperative and does not boot.
Windows program is unstable or programs are freezing.
POST errors exist.
RAM failures are either sudden or intermittent. Overused or defective memory can cause the
system to fail at any time. System performance is a good indication of the state of the
memory. If the system is running smoothly and applications rarely stall, the RAM workload is
well within the RAM specifications. If the computer is multitasking and frequently freezes, the
RAM is probably insufficient for the workload.
Physical damage or failure of memory is fatal, meaning that when there is such a problem,
the computer will not boot at all. However, you should be aware of some nonfatal error
indicators. POST error codes in the 2xx range are typical of memory problems.
If you turn on the computer and it does not even complete the POST or it does nothing at all,
and you have eliminated power problems, there might be a problem with the main memory.
The solution to a memory problem is to remove the offending component and replace it with
a new one. If the error persists, the memory might be in a damaged slot or socket on the
motherboard. In this case, replace the motherboard, or the entire PC.
The computer may not report some RAM errors at all. That is, if an entire memory module
does not work, the computer might just ignore it and continue to function normally without it.
At startup, watch the RAM count on the screen (if BIOS configuration allows this) to ensure
that the total amount matches the capacity installed in the machine. If this amount comes up
significantly short, you probably have to replace a memory module.
LABS
The installation or removal of memory modules is similar for SIMMs, DIMMs, and RIMMs.
The following sections describe the specifics of each type of socket. Before you begin, take
steps to protect against static electricity damage to the memory modules and motherboard.
1. Align the notches on the bottom edge of the module with the keys in the socket, and
insert the module straight down into the socket.
2. Gently press down on the DIMM or RIMM. The retention clips on the side should
rotate into the locked position. You might need to guide them into place with your
fingers.
3. To remove a DIMM or RIMM, press the retention clips outward, as shown below,
which lifts the module slightly, then lift the module straight up.
Scenario
One of the users in Accounting recently had new software installed on her PC that is
required for a special project she is working on. The application consumes a great deal of
memory to run, and she complains that when she tries to work with the software, computer
performance slows dramatically. Your supervisor has determined that the user’s PC has
insufficient RAM to run this piece of software along with the other applications she must
access as part of her job. You have been assigned to upgrade the amount of memory in her
machine. You will need a computer with access to the Internet to research the type of RAM
stick that is correct for the particular PC you’ll be working on. You will also need to have a
computer in which you can install a RAM stick and the appropriate RAM itself. Make sure
you have a screwdriver to remove the side panel of the computer so you can access the
interior. You’ll need to take the appropriate steps to prevent ESD damage to the sensitive
electrical components inside the PC as well as to the RAM stick you are about to install.
Using the following steps, install more RAM in the computer system.
Required Equipment
Screwdriver for use on computers without the latch system.
Determining the Correct Type of Memory for a Particular PC
5. Allow the computer to continue to boot and the operating system to load.
You have completed this task when you have verified that the amount of RAM has increased
to the correct amount. This amount will vary depending on how much RAM the computer
originally had and how much you added.
Revision Questions
Complete the following questions by ticking the correct answer(s).
Chapter 5
BUSES
Objectives
Introduction
Bus Architecture
Functions of Buses
Expansion Slots
I/O Buses
Local Buses
Other Buses
Labs
Introduction
The heart of any motherboard is the various buses that carry signals between the
components. The term bus refers to pathways that power, data, or control signals use to
travel from one component to another in the computer. This pathway is used for
communication and can be established between two or more computer elements. There are
many types of buses, including the processor bus, used by data traveling into and out of the
processor. The address and data buses are both part of the processor bus. Another type of
bus is the memory bus, which is located on the motherboard and used by the processor to
access memory. In addition, each PC has an expansion bus of one or more types.
The PC has a hierarchy of different buses. Most modern PCs have at least three buses;
some have four or more. They are hierarchical because each slower bus is connected to the
faster one above it. Each device in the system is connected to one of the buses, and some
devices (primarily the chipset) act as bridges between the various buses.
PC Buses Page 92
Chapter 5
Processor bus. Also called the front-side bus (FSB), this is the fastest bus in the
system, used primarily by the processor to pass information to and from cache or main
memory and the North Bridge of the chipset. The processor bus in a modern system runs at
66MHz, 100MHz, 133MHz, 200MHz, 266MHz, 400MHz, 533MHz, or 800MHz and is
normally 64 bits (8 bytes) wide.
AGP bus. This is a high-speed 32-bit bus specifically for a video card. It runs at
66MHz (AGP 1x), 133MHz (AGP 2x), 266MHz (AGP 4x), or 533MHz (AGP 8x),
which allows for a bandwidth of up to 2,133MBps. It is connected to the North Bridge
or Memory Controller Hub of the chipset and is manifested as a single AGP slot in
systems that support it.
PCI bus. This is usually a 33MHz 32-bit bus found in virtually all newer 486 systems
and Pentium and higher processor systems. Some newer systems include an
optional 66MHz 64-bit version—mostly workstations or server-class systems. This
bus is generated by either the chipset North Bridge in North/South Bridge chipsets or
the I/O Controller Hub in chipsets using hub architecture. This bus is manifested in
the system as a collection of 32-bit slots, normally white in color and numbering from
four to six on most motherboards. High-speed peripherals, such as SCSI adapters,
network cards, video cards, and more, can be plugged into PCI bus slots.
ISA bus. This is an 8MHz 16-bit bus that has disappeared from recent systems after
first appearing in the original PC in 8-bit, 5MHz form and in the 1984 IBM AT in full
16-bit 8MHz form. It is a very slow-speed bus, but it was ideal for certain slow-speed
or older peripherals. It has been used in the past for plug-in modems, sound cards,
and various other low-speed peripherals. The ISA bus is generated by the South
Bridge part of the motherboard chipset, which acts as the ISA bus controller and the
interface between the ISA bus and the faster PCI bus above it. The Super I/O chip
usually was connected to the ISA bus on systems that included ISA slots.
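The peak bandwidth figures quoted for these buses all follow from the same formula: clock rate times transfers per clock times bus width in bytes. A small sketch (the function name is my own; clock values are approximate):

```python
def bus_bandwidth_mbps(clock_mhz, width_bits, transfers_per_clock=1):
    """Peak bus bandwidth in MB/s: clock * transfers per clock * bytes per transfer."""
    return clock_mhz * transfers_per_clock * (width_bits // 8)

# PCI: 33 MHz, 32-bit, one transfer per clock -> ~133 MB/s
print(bus_bandwidth_mbps(33.33, 32))
# AGP 8x: 66 MHz base clock, 8 transfers per clock, 32-bit -> ~2,133 MB/s
print(bus_bandwidth_mbps(66.66, 32, 8))
# Hub Interface: quad-pumped 66 MHz, 8-bit -> ~266 MB/s
print(bus_bandwidth_mbps(66.66, 8, 4))
```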
Some newer motherboards feature a special connector called an Audio Modem Riser (AMR)
or a Communications and Networking Riser (CNR). These are dedicated connectors for
cards that are specific to the motherboard design to offer communications and networking
options. They are not designed to be general-purpose bus interfaces, and few cards for
these connectors are offered on the open market. Usually, they're offered only as an option
with a given motherboard.
They are designed such that a motherboard manufacturer can easily offer its boards in
versions with and without communications options, without having to reserve space on the
board for optional chips.
Normal network and modem options offered publicly, for the most part, will still be PCI based
because the AMR/CNR connection is somewhat motherboard specific. Several hidden
buses exist on modern motherboards—buses that don't manifest themselves in visible slots
or connectors. These are buses designed to interface chipset components, such as the Hub
Interface and the LPC bus.
The Hub Interface is a quad-clocked (4x) 66MHz 8-bit bus that carries data between the
MCH and ICH in hub architecture chipsets made by Intel. It operates at a bandwidth of
266MBps and was designed as a chipset component connection that is faster than PCI and
yet uses fewer signals for a lower-cost design.
Some recent workstation/server chipsets from Intel use faster versions of the hub interface.
The most recent chipsets from major third-party vendors also bypass the PCI bus with direct
high-speed connections between chipset components. In a similar fashion, the LPC bus is a
4-bit bus that has a maximum bandwidth of 6.67MBps; it was designed as an economical
onboard replacement for the ISA bus. In systems that use LPC, it typically is used to connect
the Super I/O chip or motherboard ROM BIOS components to the main chipset.
LPC is nearly as fast as ISA and yet uses far fewer pins and enables ISA to be eliminated
from the board entirely. The system chipset is the conductor that controls the orchestra of
system components, enabling each to have its turn on its respective buses.
Many of the buses use multiple data cycles (transfers) per clock cycle to achieve greater
performance. Therefore, the data transfer rate is higher than it would seem for a given clock
rate, which allows for an easy way to take an existing bus and make it go faster in a
backward-compatible way.
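The relationship described above—clock rate times bytes per transfer times transfers per clock—can be sketched as a quick calculation. The helper function below is purely illustrative (not a real API), using the Hub Interface and PCI figures quoted in this chapter:

```python
def bus_bandwidth_mbps(clock_mhz, width_bits, transfers_per_clock=1):
    """Theoretical bandwidth in MBps: clock x bytes per transfer x transfers per clock."""
    return clock_mhz * (width_bits / 8) * transfers_per_clock

# Hub Interface: 66MHz, 8-bit, quad-clocked (4 transfers per clock) -> ~266 MBps
hub = bus_bandwidth_mbps(66.66, 8, 4)

# PCI for comparison: 33MHz, 32-bit, 1 transfer per clock -> ~133 MBps
pci = bus_bandwidth_mbps(33.33, 32, 1)
```

This shows why multiple transfers per clock raise the data rate without raising the clock itself, which is how buses stay backward compatible while getting faster.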
Processor Bus/Front-Side Bus
The processor bus (also called the front-side bus or FSB) is the communication pathway
between the CPU and motherboard chipset, more specifically the North Bridge or Memory
Controller Hub.
This bus runs at the full motherboard speed—typically between 66MHz and 800MHz in
modern systems, depending on the particular board and chipset design. This same bus also
transfers data between the CPU and an external (L2) memory cache on Socket-7 (Pentium
class) systems.
Memory Bus
The memory bus is used to transfer information between the CPU and main memory—the
RAM in your system. This bus is connected to the motherboard chipset North Bridge or
Memory Controller Hub chip. Depending on the type of memory your chipset (and therefore
motherboard) is designed to handle, the North Bridge runs the memory bus at various
speeds.
The best solution is if the memory bus runs at the same speed as the processor bus.
Systems that use PC133 SDRAM have a memory bandwidth of 1,066MBps, which is the
same as the 133MHz CPU bus. In another example, Athlon systems running a 266MHz
processor bus also run PC2100 DDR-SDRAM, which has a bandwidth of 2,133MBps—
exactly the same as the processor bus in those systems.
In addition, systems running a Pentium 4 with its 400MHz processor bus also use dual-
channel RDRAM memory, which runs 1,600MBps for each channel, or a combined
bandwidth (both memory channels run simultaneously) of 3,200MBps, which is exactly the
same as the Pentium 4 CPU bus. Pentium 4 systems with the 533MHz bus run dual-channel
DDR PC2100 or PC2700 modules, which match or exceed the throughput of the 4,266MBps
processor bus.
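These matched-bandwidth examples can be checked with a little arithmetic. The sketch below assumes a 64-bit (8-byte) wide bus, as in the Pentium- and Athlon-class systems described; the function name is illustrative:

```python
BYTES_PER_TRANSFER = 8  # Pentium/Athlon-class buses are 64 bits (8 bytes) wide

def bandwidth_mbps(effective_clock_mhz):
    # theoretical bandwidth: effective clock x bytes moved per transfer
    return effective_clock_mhz * BYTES_PER_TRANSFER

pc133_memory = bandwidth_mbps(133.33)  # PC133 SDRAM -> ~1,066 MBps
cpu_bus_133  = bandwidth_mbps(133.33)  # 133MHz processor bus -> same figure
pc2100_ddr   = bandwidth_mbps(266.66)  # DDR PC2100 -> ~2,133 MBps, matching a 266MHz Athlon bus
```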
Running memory at the same speed as the processor bus negates the need for having
cache memory on the motherboard. That is why when the L2 cache moved into the
processor, nobody added an L3 cache to the motherboard. Some very high-end processors,
such as the Itanium and Itanium 2, have integrated 2MB–4MB of full-core speed L3 cache
into the CPU. Eventually, this should make it down to more mainstream desktop systems.
Bus Architecture
In reality, each bus generally consists of 50 to 100 distinct physical lines, divided into
three subassemblies:
The address bus (sometimes called the memory bus) transports memory addresses
which the processor wants to access in order to read or write data. It is a
unidirectional bus.
The data bus transfers data signals coming from or going to the processor. It is a
bidirectional bus. The width of the data bus is determined by the number of wires present
in it. Each wire transmits a single bit at a time, so the more wires there are, the more data
moves per cycle.
The control bus (or command bus) transports orders and synchronization signals
coming from the control unit and travelling to all other hardware components. It is a
bidirectional bus, as it also transmits response signals from the hardware.
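As a minimal sketch of the data-bus point above: each wire carries one bit per transfer, so the wire count sets how many bytes move per cycle (the function names are hypothetical, chosen for illustration):

```python
def bytes_per_transfer(num_wires):
    # each data-bus wire carries one bit per transfer
    return num_wires // 8

def data_rate_mbps(num_wires, clock_mhz):
    # data rate assuming one transfer per clock cycle (illustrative)
    return bytes_per_transfer(num_wires) * clock_mhz

# A 16-wire data bus moves twice as much per cycle as an 8-wire bus at the same clock
wide   = data_rate_mbps(16, 8.33)
narrow = data_rate_mbps(8, 8.33)
```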
Functions of Buses
Expansion Slots
The I/O bus or expansion slots enable your CPU to communicate with peripheral devices.
The bus and its associated expansion slots are needed because basic systems can't
possibly satisfy all the needs of all the people who buy them. The I/O bus enables you to add
devices to your computer to expand its capabilities.
The most basic computer components, such as sound cards and video cards, can be
plugged into expansion slots; you also can plug in more specialized devices, such as
network interface cards, SCSI host adapters, and others.
I/O Buses
Since the introduction of the first PC, many I/O buses have been introduced. The reason is
that faster I/O speeds are necessary for better system performance. This need for higher
performance involves three main areas:
Faster CPUs
Increasing software demands
Greater multimedia requirements
One of the primary reasons new I/O bus structures have been slow in coming is compatibility:
that old catch-22 that anchors much of the PC industry to the past. One of the hallmarks of
the PC's success is its standardization. This standardization spawned thousands of third-
party I/O cards, each originally built for the early bus specifications of the PC. If a new high-
performance bus system was introduced, it often had to be compatible with the older bus
systems so the older I/O cards would not be obsolete. Therefore, bus technologies seem to
evolve rather than make quantum leaps forward.
You can identify different types of I/O buses by their architectures. The main types of I/O
buses are detailed earlier in this chapter. The main differences among buses consist
primarily of the amounts of data they can transfer at one time and the speeds at which they
can do it. The following sections describe the various types of PC buses.
ISA Bus
Industry Standard Architecture (ISA) is the bus architecture that was introduced as an 8-bit
bus with the original IBM PC in 1981; it was later expanded to 16 bits with the IBM PC/AT in
1984. ISA is the basis of the modern personal computer and was the primary architecture
used in the vast majority of PC systems until the late 1990s. It might seem amazing that
such a presumably antiquated architecture was used for so long, but it provided reliability,
affordability, and compatibility, plus this old bus is still faster than many of the peripherals we
connect to it!
The ISA bus is now obsolete and has disappeared from recent desktop systems, and few
companies make or sell ISA cards anymore. The ISA bus continues to be popular with industrial
computer (PICMG) designs, but it is expected to eventually fade away from these as well.
[Figure: 8-bit and 16-bit versions of the ISA bus]
Two versions of the ISA bus exist, based on the number of data bits that can be transferred
on the bus at a time. The older version is an 8-bit bus; the newer version is a 16-bit bus. The
original 8-bit version ran at 4.77MHz in the PC and XT, and the 16-bit version used in the AT
ran at 6MHz and then 8MHz. Later, the industry as a whole agreed on an 8.33MHz
maximum standard speed for 8/16-bit versions of the ISA bus for backward-compatibility.
Some systems have the capability to run the ISA bus faster than this, but some adapter
cards will not function properly at higher speeds. ISA data transfers require anywhere from
two to eight cycles. Therefore, the theoretical maximum data rate of the ISA bus is about
8MBps, as the following formula illustrates:
8.33MHz x 2 bytes (16 bits) ÷ 2 cycles per transfer = 8.33MBps
The bandwidth of the 8-bit bus would be half this figure (4.17MBps). Remember, however,
that these figures are theoretical maximums. Because of I/O bus protocols, the effective
bandwidth is much lower, typically by almost half. Even so, at about 8MBps, the ISA bus is
still faster than many of the peripherals connected to it, such as serial ports, parallel ports,
floppy controllers, keyboard controllers, and so on.
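The ISA figures above can be reproduced with the chapter's formula; the function below is an illustrative sketch, not a real API:

```python
def isa_mbps(width_bits, cycles_per_transfer=2, clock_mhz=8.33):
    # theoretical maximum: clock x bytes per transfer / cycles per transfer
    return clock_mhz * (width_bits / 8) / cycles_per_transfer

sixteen_bit = isa_mbps(16)  # ~8.33 MBps theoretical maximum
eight_bit   = isa_mbps(8)   # ~4.17 MBps, half the 16-bit figure
```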
32-Bit Buses
After 32-bit CPUs became available, it was some time before 32-bit bus standards became
available. Before MCA and EISA specs were released, some vendors began creating their
own proprietary 32-bit buses, which were extensions of the ISA bus. Fortunately, these
proprietary buses were few and far between.
The expanded portions of the bus typically are used for proprietary memory expansion or
video cards. Because the systems are proprietary (meaning that they are nonstandard),
pinouts and specifications are not available.
MCA Bus
With its Micro Channel Architecture (MCA) bus, IBM wanted not only to replace the old ISA
standard, but also to require vendors to license certain parts of the technology. Many vendors
owed for licenses on the ISA bus technology that IBM had also created, but because IBM had
not been aggressive in its licensing of ISA, many got away without any license. Problems with
licensing and control led to the development of the
competing EISA bus (see the next section on the EISA bus) and hindered acceptance of the
MCA bus.
MCA systems produced a new level of ease of use; they were plug-and-play before the
official Plug and Play specification even existed. An MCA system had no jumpers or
switches, either on the motherboard or on any expansion adapter. Instead you used a
special Reference disk, which went with the particular system, and Option disks, which went
with each of the cards installed in the system. After a card was installed, you loaded the
Option disk files onto the Reference disk; after that, you didn't need the Option disks
anymore. The Reference disk contained the special BIOS and system setup program
necessary for an MCA system, and the system couldn't be configured without it.
EISA Bus
The EISA bus added 90 new connections (55 new signals plus grounds) without increasing
the physical connector size of the 16-bit ISA bus. At first glance, the 32-bit EISA slot looks a
lot like the 16-bit ISA slot. The EISA adapter, however, has two rows of stacked contacts.
The first row is the same type used in 16-bit ISA cards; the other, thinner row extends from
the 16-bit connectors. Therefore, ISA cards can still be used in EISA bus slots. Although this
compatibility was not enough to ensure the popularity of EISA buses, it is a feature that was
carried over into the VL-Bus standard that followed.
The EISA bus can handle up to 32 bits of data at an 8.33MHz cycle rate. Most data transfers
require a minimum of two cycles, although faster cycle rates are possible if an adapter card
provides tight timing specifications. The maximum bandwidth on the bus is 33MBps, as the
following formula shows:
8.33MHz x 4 bytes (32 bits) = 33MBps
Local Buses
The I/O buses discussed so far (ISA, MCA, and EISA) have one thing in common: relatively
slow speed. The local bus concept addresses the speed issue. The three main local buses
found in PC systems are
VL-Bus (VESA local bus)
PCI
AGP
The speed limitation of ISA, MCA, and EISA is a carryover from the days of the original PC
when the I/O bus operated at the same speed as the processor bus. As the speed of the
processor bus increased, the I/O bus realized only nominal speed improvements, primarily
from an increase in the bandwidth of the bus.
The speed problem became acute when graphical user interfaces (such as Windows)
became prevalent. These systems require the processing of so much video data that the I/O
bus became a literal bottleneck for the entire computer system. In other words, it did little
good to have a processor that was capable of 66MHz to 450MHz or faster if you could put data
through the I/O bus at a rate of only 8MHz.
An obvious solution to this problem is to move some of the slotted I/O to an area where it
could access the faster speeds of the processor bus much the same way as the external
cache. The figure below shows this arrangement.
The VL-Bus can move data 32 bits at a time, enabling data to flow between the CPU and a
compatible video subsystem or hard drive at the full 32-bit data width of the 486 chip. The
maximum rated throughput of the VL-Bus is 133MBps. In other words, local bus went a long
way toward removing the major bottlenecks that existed in earlier bus configurations.
Unfortunately, the VL-Bus did not seem to be a long-lived concept. The design was simple
indeed: just take the pins from the 486 processor and run them out to a card connector
socket. So, the VL-Bus is essentially the raw 486 processor bus. This allowed a very
inexpensive design because no additional chipsets or interface chips were required. A
motherboard designer could add VL-Bus slots to its 486 motherboards very easily and at a
very low cost. This is why these slots appeared on virtually all 486 system designs overnight.
Problems arose with timing glitches caused by the capacitance introduced into the circuit by
different cards. Because the VL-Bus ran at the same speed as the processor bus, different
processor speeds meant different bus speeds, and full compatibility was difficult to achieve.
Although the VL-Bus could be adapted to other processors including the 386 or even the
Pentium, it was designed for the 486 and worked best as a 486 solution only. Despite the
low cost, after a new bus called PCI appeared, VL-Bus fell into disfavor very quickly. It never
made a successful transition to Pentium-class systems.
Physically, the VL-Bus slot was an extension of the slots used for whatever type of base
system you have. If you have an ISA system, the VL-Bus is positioned as an extension of
your existing 16-bit ISA slots. The VESA extension has 112 contacts and uses the same
physical connector as the MCA bus.
PCI Bus
PCI redesigned the traditional PC bus by inserting another bus between the CPU and the
native I/O bus by means of bridges. Rather than tap directly into the processor bus, with its
delicate electrical timing (as was done in the VL-Bus), a new set of controller chips was
developed to extend the bus as shown below.
The PCI bus adds another layer to the traditional bus configuration. PCI bypasses the
standard I/O bus; it uses the system bus to increase the bus clock speed and take full
advantage of the CPU's data path. Systems that integrate the PCI bus became available in
mid-1993 and have since become a mainstay in the PC. Information typically is transferred
across the PCI bus at 33MHz and 32 bits at a time. The bandwidth is 133MBps, as the
following formula shows:
33.33MHz x 4 bytes (32 bits) = 133MBps
Although 32-bit 33MHz PCI is the standard found in most PCs, there are now several
variations on the basic design. Many recent PCs also feature PCI-Express x1 and PCI-Express x16
slots.
PCI-Express
PCI-Express is another example of how the PC is moving from parallel to serial interfaces.
Earlier generation bus architectures in the PC have been of a parallel design, in which
multiple bits are sent simultaneously over several pins in parallel. The more bits sent at a
time, the faster the bus throughput is. The timing of all the parallel signals must be the same,
which becomes more and more difficult to do over faster and longer connections. Even
though 32 bits can be transmitted simultaneously over a bus such as PCI or AGP,
propagation delays and other problems cause them to arrive slightly skewed at the other
end, resulting in a time difference between when the first and last of all the bits arrive.
A serial bus design is much simpler, sending 1 bit at a time over a single wire, at much
higher rates of speed than a parallel bus would allow. By sending the bits serially, the timing
of individual bits or the length of the bus becomes much less of a factor. By combining
multiple serial data paths, even faster throughputs can be realized that dramatically exceed
the capabilities of traditional parallel buses.
PCI-Express is a very fast serial bus design that is backward-compatible with current PCI
parallel bus software drivers and controls. In PCI-Express, data is sent full duplex
(simultaneously operating one-way paths) over two pairs of differentially signaled wires
called a lane. Each lane allows for about 250MBps throughput in each direction initially, and
the design allows for scaling from 1 to 2, 4, 8, 16, or 32 lanes. For example, a high-
bandwidth configuration with 8 lanes allowing 8 bits to be sent in each direction
simultaneously would allow up to 2000MBps bandwidth (each way) and use a total of only
40 pins (32 for the differential data pairs and 8 for control). Future increases in signaling
speed could increase that to 8000MBps each way over the same 40 pins. This compares to
PCI, which has only 133MBps bandwidth (one way at a time) and requires more than 100
pins to carry the signals.
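The lane scaling and pin counts described above can be sketched numerically (using the first-generation figures from this chapter; the helper names are made up for illustration):

```python
MBPS_PER_LANE = 250  # first-generation PCI-Express, each direction

def bandwidth_each_way(lanes):
    # throughput scales linearly with the number of lanes
    return lanes * MBPS_PER_LANE

def data_pins(lanes):
    # one differential transmit pair + one receive pair = 4 wires per lane
    return lanes * 4

x8_bandwidth = bandwidth_each_way(8)  # 2000 MBps each direction
x8_pins = data_pins(8)                # 32 data pins (plus 8 control pins = 40 total)
```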
AGP
The Accelerated Graphics Port (AGP) is a dedicated 32-bit, 66MHz local bus for video, giving
a base (1x) transfer rate of about 266MBps. The original AGP specification also defines a 2x
mode, in which two transfers are performed every cycle, resulting in 533MBps. Using an
analogy in which every cycle is equivalent to the
back-and-forth swing of a pendulum, the 1x mode is thought of as transferring information at
the start of each swing. In 2x mode, an additional transfer would occur every time the
pendulum completed half a swing, thereby doubling performance while technically
maintaining the same clock rate, or in this case, the same number of swings per second.
Although the earliest AGP cards supported only the AGP 1x mode, most vendors quickly
shifted to the AGP 2x mode.
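AGP's mode multipliers all follow the same pattern: each mode multiplies the number of transfers performed per 66MHz clock. A quick sketch (the 266.67MBps base is the commonly quoted 1x figure):

```python
AGP_1X_MBPS = 266.67  # 66MHz x 4 bytes per transfer

# each AGP mode performs that many transfers per clock cycle
agp_modes = {mode: AGP_1X_MBPS * mode for mode in (1, 2, 4, 8)}
# 2x -> ~533 MBps, 4x -> ~1066 MBps, 8x -> ~2133 MBps
```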
Because AGP is independent of PCI, using an AGP video card frees up the PCI bus for
more traditional input and output, such as for IDE/ATA or SCSI controllers, USB controllers,
sound cards, and so on.
Besides faster video performance, one of the main reasons Intel designed AGP was to allow
the video card to have a high-speed connection directly to the system RAM, which would
enable a reasonably fast and powerful video solution to be integrated at a lower cost. AGP
allows a video card to have direct access to the system RAM, either enabling lower-cost
video solutions to be directly built in to a motherboard without having to include additional
video RAM or enabling an AGP card to share the main system memory. However, few AGP
cards in recent years share main memory. Instead, they have their own high-speed memory
(as much as 256MB in some recent models). Using dedicated memory directly on the video
card is especially important when running high-performance 3D video applications. AGP
enables the speed of the video card to pace the requirements for high-speed 3D graphics
rendering as well as full-motion video on the PC.
Although AGP 8x (2133MBps) is 16 times faster than 32-bit 33MHz PCI (133MBps), AGP 8x
is only about half as fast as PCI-Express x16 (4000MBps). Starting in mid-2004,
motherboard and system vendors began to replace AGP 8x with PCI-Express x16 expansion
slots in high-performance systems using Pentium 4 and Athlon 64 processors. As of early
2006, most motherboards in all price ranges feature PCI-Express x16 slots in place of AGP.
This trend will eventually spell the end of AGP.
Others
SCSI
Short for Small Computer System Interface, SCSI is a parallel interface standard used by
Apple Macintosh computers, PCs, and Unix systems for attaching peripheral devices to a
computer.
USB 1.x
First released in 1996, the original USB 1.0 standard offered data rates of 1.5 Mbps. The
USB 1.1 standard followed with two data rates: 12 Mbps for devices such as disk drives that
need high-speed throughput and 1.5 Mbps for devices such as joysticks that need much less
bandwidth.
USB 2.0
In 2002 a newer specification USB 2.0, also called Hi-Speed USB 2.0, was introduced. It
increased the data transfer rate for PC to USB device to 480 Mbps, which is 40 times faster
than the USB 1.1 specification. With the increased bandwidth, high throughput peripherals
such as digital cameras, CD burners and video equipment could now be connected with
USB.
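The 40-times difference is easy to see in transfer times. A rough sketch, ignoring protocol overhead (the function name is illustrative):

```python
def seconds_to_transfer(megabytes, rate_mbps):
    # convert megabytes to megabits, then divide by the line rate
    return megabytes * 8 / rate_mbps

usb11 = seconds_to_transfer(60, 12)   # 60MB over USB 1.1 (12 Mbps)  -> 40 seconds
usb20 = seconds_to_transfer(60, 480)  # 60MB over Hi-Speed USB 2.0   -> 1 second
```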
IEEE 1394
The IEEE 1394 is a very fast external serial bus interface standard that supports data
transfer rates of up to 400Mbps (in 1394a) and 800Mbps (in 1394b). This makes it ideal for
devices that need to transfer high levels of data in real time, such as video devices. It was
developed by Apple under the name FireWire.
A single 1394 port can be used to connect up to 63 external devices.
It supports Plug and Play,
supports hot plugging, and
provides power to peripheral devices.
Scenario
Your company has just opened a small branch office nearby that requires several computers
to be networked on a LAN and to have Internet access. One of the computers is an older
unit that had a failed NIC. That card has been removed but a replacement was never
installed. You have located an appropriate NIC that you can install in a PCI slot on the PC’s
motherboard. You must travel to the branch office to install the card. You will need to have a
PCI card for this task as well as a screwdriver to remove the screws anchoring the side
access panel to the computer’s metal frame. Finally, you’ll need to take the appropriate
steps to prevent ESD damage to the sensitive electrical components inside the PC and the
new NIC.
Setup
For the general task of installing a PCI card, all you’ll need is a single computer with at least
one empty PCI slot, a PCI card, and a screwdriver that will fit the screws holding the side
access panel on the PC. PCI slot covers are also usually attached to the computer by
screws. You may need an additional screwdriver if the screw types for the access panel and
the PCI slots are different. Although the task scenario specifies a NIC, you can use any PCI
card to perform the actual task on your computer.
For the purpose of installing a NIC and testing network connectivity, you will need two
computers networked together through a switch or a router. Most home computer users with
two desktop computers have them networked using Ethernet cables connected to a switch.
The switch is connected to a DSL or cable modem, which acts as a DHCP server, assigning
the computers’ IP addresses. For the networking part of the task, this is the type of setup
that is required in order to test the success of installing the new NIC.
Procedure
In this task, you will learn how to install a PCI card into the PCI slot on a computer’s
motherboard. You will also learn how to determine if the card is functioning correctly once it’s
installed.
Equipment Used
You may need a flat-head screwdriver and/or a Phillips screwdriver, depending on the types
of screws holding the access panel and PCI slot cover to the computer.
Revision Questions
Answer the following questions by selecting the correct option(s).
1. Which of the following is focused on the server market, and therefore, unlikely to
be seen in desktop computers?
A. PCI Express
B. AGP
C. AMR
D. PCI-X
2. Which statement is correct regarding the PCI Express (PCIe) bus?
A. PCIe is backward compatible with PCI and PCI-X.
B. PCI Express is expected to quickly replace conventional PCI.
C. PCI Express uses a serial bus.
D. PCI Express buses were developed specifically for video cards.
3. What is used at each end of the SCSI chain to reduce the amount of electrical
“noise,” or interference on a SCSI cable?
A. Cable clip
B. Cable end cap
C. Connector apex
D. Terminating resistor
4. Which statement is correct regarding a PCI riser card?
A. Riser cards are available for PCIe and PCI-X slots; however, they are not available
for conventional PCI slots.
B. Riser cards are typically used for server computers to accommodate added storage
devices.
C. When you install an expansion card in a riser card slot, the card is positioned higher
than other cards on the motherboard.
D. A PCI riser card installs in an expansion slot and provides another slot at a right
angle.
5. USB 2.0 is an example of a type of bus that does not run in sync with the system
clock. What type of bus is this?
A. Local bus
B. Expansion bus
C. PCI Express
D. AGP bus
6. Which statement regarding PCI buses is correct?
A. The first PCI bus had a 32-bit data path, supplied 5 V of power to an expansion card,
and operated at 33MHz.
B. A universal PCI card is limited to using a 3.3 V slot.
C. PCI-X is not backward compatible with conventional PCI cards and slots.
D. PCI-X requires a 64-bit card.
7. Which type of slot contains a single lane for data, which is actually four wires?
A. PCI Express x4
B. PCI Express x1
C. PCI Express x16
D. PCI Express x8
8. What type of slot provides the fastest speed for a video card?
A. AGP
B. PCIe x1
C. PCIe x16
D. PCI-X
Introduction
This section introduces a number of basic peripheral installation concepts, such as cable
and connector types, as well as communication methods. Computer ports are connection
points or interfaces with other peripheral devices. A port is a connector on a motherboard
or on a separate adapter that allows a device to connect to a computer. Sometimes a
motherboard has ports built directly into it. Motherboards that have ports built into them are
called integrated motherboards.
There is a great variety of connector and port types among peripheral devices, and
different devices use different cable/connector combinations. For example, the straight-pair
cable for a printer has a different connector than the straight-pair cable for a monitor. It is
important to make the distinction between a connector and a port. Typically, the term
connector refers to the plug at the end of a cable, and port refers to its place of attachment
on a device. Even though we refer to connectors and ports, bear in mind that the port is
also a connector.
The following are the ports you are bound to come across as you work with different
computers.
Many port connections are either male or female. Male ports have metal pins that protrude
from the connector. A male port requires a cable with a female connector. Female ports
have holes in the connector into which the male cable pins are inserted.
A mini-DIN connector is round with small holes and is normally keyed. When a connector
is keyed, the cable can only be inserted one way. Keyboard and mouse connectors,
commonly called PS/2 ports, are examples of mini-DIN connectors. Today, a keyboard and
mouse can also be connected to USB ports. The figure below shows the back of a
computer with an integrated motherboard. You can see a DIN and two D-shell connectors
on the motherboard.
There are two main types of computer ports: physical and virtual. Physical ports are used
for connecting a computer through a cable and a socket to a peripheral device.
Virtual ports are data gates that allow software applications (network services, for example) to
use hardware resources without interfering with one another. These computer ports (network
ports) are defined by IANA (Internet Assigned Numbers Authority) and are used by TCP
(Transmission Control Protocol), UDP (User Datagram Protocol), DCCP (Datagram Congestion
Control Protocol), and SCTP (Stream Control Transmission Protocol).
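For example, IANA reserves ports 0–1023 as the "well-known" range; a few familiar assignments can be sketched in code (the list below is an illustrative subset, not exhaustive):

```python
# A few IANA well-known port assignments (illustrative subset)
WELL_KNOWN_PORTS = {
    "http": 80,
    "https": 443,
    "smtp": 25,
    "dns": 53,
}

def is_well_known(port):
    # IANA reserves ports 0-1023 for well-known services
    return 0 <= port <= 1023

all_well_known = all(is_well_known(p) for p in WELL_KNOWN_PORTS.values())
```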
While the analog VGA-derived standards might keep the computing industry satisfied for
years to come, the market segment driving the development of non-VGA specifications has
become increasingly prevalent. These high-resolution, high-performance users approach
video from the broadcast angle. They are interested in the increased quality of digital
transmission. For them, the industry responded with technologies like DVI and HDMI.
The computing market benefits from these technologies as well. DVI interfaces on graphics
adapters and laptops became commonplace. Increasingly, HDMI interfaces take adapters to
the next generation.
Other consumers desire specialized methods to connect analog display devices by splitting
out colors from the component to improve quality or simply to provide video output to
displays not meant for computers. For this group, a few older standards remain viable:
component video, S-video, and composite video. The following sections present the details
of the different video display specifications.
DB-15
The DB-15 connector used on video adapters for connecting to traditional CRT monitors and
some LCD displays has three rows of five pins. The connector on the monitor cable is male,
while the connector on the video adapter (the computer end) is female. A male port or
connector has pins, and a female port or connector is a receiver with sockets for the pins of
the male connector. The VGA port was designed for analog output. A mini-VGA port is also
available on some mobile devices.
DVI
A newer port is a DVI port (Digital Visual Interface), and it has three rows of square holes.
DVI ports are used to connect flat panel digital displays. Some flat panel monitors can also
use the older VGA port. Some video adapters also allow you to connect a video device (such
as a television) that has an S-Video port. The figure below shows a video adapter with all
three ports. The top port is for S-Video, the center port is the DVI connector, and the bottom
port is a VGA port.
The DVI-D and DVI-I connectors come in two varieties: single link and dual link. The dual-
link options have more conductors—taking into account the six center conductors—than
their single-link counterparts, and they accommodate higher speed and signal quality. The
additional link can be used to increase resolution from 1920×1080 to 2048×1536 for devices
with a 16:9 aspect ratio or from WUXGA to WQXGA for devices with a 16:10 aspect ratio. Of
course, both components, as well as the cable, must support the dual-link feature. Consult
Chapter 4, “Display Devices,” for more information on display standards.
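One way to see why dual link matters: single-link DVI is limited to a 165MHz pixel clock, and dual link roughly doubles the available bandwidth. A rough check (ignoring blanking intervals, which add real-world overhead; the helper is illustrative):

```python
SINGLE_LINK_LIMIT_MHZ = 165  # single-link DVI pixel-clock ceiling

def approx_pixel_clock_mhz(width, height, refresh_hz=60):
    # raw pixels per second, ignoring blanking intervals
    return width * height * refresh_hz / 1_000_000

fits_single_1080 = approx_pixel_clock_mhz(1920, 1080) <= SINGLE_LINK_LIMIT_MHZ  # ~124MHz: fits
fits_single_2048 = approx_pixel_clock_mhz(2048, 1536) <= SINGLE_LINK_LIMIT_MHZ  # ~189MHz: needs dual link
```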
DVI-A and DVI-I analog quality is superior to that of VGA, but it is still analog, meaning it is
more susceptible to noise. However, the DVI analog signal will travel farther than the VGA
signal before degrading beyond usability. Nevertheless, the DVI-A and VGA interfaces are
pin-compatible, meaning that a simple passive adapter, as shown in the illustration below, is
all that is necessary to convert between the two. As you can see, the analog portion of the
connector, if it exists, comprises the four separate color and sync pins and the horizontal
blade that they surround, which happens to be the analog ground lead that acts as a ground
and physical support mechanism even for DVI-D connectors.
HDMI is compatible with DVI-D and DVI-I interfaces through proper adapters, but HDMI’s
audio and remote-control pass-through features are lost. Additionally, 3D video sources work
only with HDMI. The figure below shows a DVI-to-HDMI adapter between DVI-D and the
Type A 19-pin HDMI interface. The first image is the DVI-D interface, and the second is the
HDMI interface on the other side of the adapter.
The figure below shows an XFX video card that has an S-Video connector on the far left, an
RCA jack, an HDMI connector, and a dual-link DVI-I connector.
RCA
RCA (derived from Radio Corporation of America) connector cables are a bundle of two to
three cables, including a composite video cable (colored yellow) and stereo audio cables
(red for the right channel and white or black for the left audio channel).
[Figure: male and female RCA ports]
RCA cables are usually used for connecting a DVD player, stereo speakers, a digital
camera, and other audio/video equipment to your TV. You can plug an RCA cable into the
computer via a video capture card, which lets you transfer video from an old analog
camcorder onto your computer's hard drive.
Component Video
When analog technologies outside the VGA realm are used for broadcast video, you are
generally able to get better-quality video by splitting the red, green, and blue components in
the signal into different streams right at the source. The technology known as component
video performs a signal-splitting function similar to RGB separation. However, unlike RGB
separation, which requires full-bandwidth red, green, and blue signals as well as a fourth
pathway for synchronization, the most popular implementation of component video uses one
uncompressed signal and two compressed signals, reducing the overall bandwidth needed.
These signals are delivered over coax either as red, green, and blue color-coded RCA plugs
or similarly coded BNC connectors, the latter being seen mostly in broadcast-quality
applications.
The uncompressed signal is called luma (Y), which is essentially the colorless version of the
original signal that represents the “brightness” of the source feed as a grayscale image. The
component-video source also creates two compressed color-difference signals known as Pb
and Pr. These two chrominance (chroma, for short) signals are also known as B – Y and R –
Y, respectively, because they are created by subtracting out the luma from the blue and red
source signals. It might make sense, then, that the analog technology presented here is
most often referred to and labeled as YPbPr. A digital version of this technology, usually
found on high-end devices, replaces analog’s Pb and Pr with Cb and Cr, respectively, and is
most often labeled YCbCr. The figure below shows the three RCA connectors of a
component video cable.
Therefore, you can conclude that by providing one full signal (Y) and two compressed
signals (Pb and Pr) that are related to the full signal (Pb = B – Y and Pr = R – Y), you can
transmit roughly the same information as three full signals (R, G, and B) but with less
bandwidth. Incidentally, component video is capable of transmitting HD video at full 1080p
(1920×1080, progressive-scan) resolution. However, the output device is at the mercy of the
video source, which often is not manufactured to push 1080p over component outputs.
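As a rough illustration of the split described above, the sketch below derives Y, Pb, and Pr from an RGB triple and recovers RGB again. The luma weights and scaling factors are taken from the ITU-R BT.601 convention, which is an assumption here; the text itself only defines Y, Pb = B - Y, and Pr = R - Y.

```python
# Sketch of the luma/chroma split used by component video (YPbPr).
# Luma weights and difference-signal scaling follow ITU-R BT.601
# (an assumption; the manual only defines Y, Pb = B - Y, Pr = R - Y).

def rgb_to_ypbpr(r, g, b):
    """Convert normalized RGB (0.0-1.0) to one full signal (Y) plus two
    color-difference signals (Pb, Pr)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # full-bandwidth luma (grayscale)
    pb = (b - y) / 1.772                    # scaled blue color difference (B - Y)
    pr = (r - y) / 1.402                    # scaled red color difference (R - Y)
    return y, pb, pr

def ypbpr_to_rgb(y, pb, pr):
    """Recover the three full R, G, B signals from Y, Pb, and Pr."""
    b = pb * 1.772 + y
    r = pr * 1.402 + y
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return r, g, b
```

Note that the round trip is lossless in this sketch, which is the point the text makes: one full signal plus two difference signals carry the same information as three full signals.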
S-video
S-video is a component video technology that, in its basic form, combines the two chroma
signals into one, resulting in video quality not quite as high as that of YPbPr. This is because
the R, G, and B signals are harder to approximate after the Pb and Pr signals have been
combined. One example of an S-video connector, shown in the figure below, is a 7-pin mini-
DIN; mini-DIN connectors of various pin counts are the most common S-video connector
type. The most basic connector is a 4-pin mini-DIN that has, quite simply, one luma (Y) and
one chroma (C) output lead and a ground for each. A 4-pin male connector is compatible
with a 7-pin female connector, both in fit and pin functionality; the converse is not true,
however. These are the only two standard S-video connectors.
The 6-pin and 7-pin versions add composite video leads, which are discussed next. Some 7-
pin ports use the extra pins to provide full Y, Pb, and Pr leads with four ground leads, making
those implementations of S-video equivalent to component video. ATI has used 8-, 9-, and
10-pin versions of the connector that include such added features as an S-video input path
in addition to output (from the perspective of the video source), bidirectional pin functionality,
and audio input/output.
Composite Video
When the preceding component video technologies are not feasible, the last related
standard, composite video, combines all luma and chroma leads into one. Composite video
is truly the bottom of the analog-video barrel. However, the National Television System
Committee (NTSC) broadcast signal is itself composite video, so the format remains in wide use.
A single yellow RCA jack, the composite video jack is rather common on computers and
home and industrial video components. While still fairly decent in video quality, composite
video is more susceptible to undesirable video phenomena and artifacts, such as aliasing,
cross coloration, and dot crawl. If you have a three-connector cable on your home video
equipment, such as a DVD player connected to a TV, odds are the tips will be yellow, red,
and white. The red and white leads are for left and right stereo audio; the yellow lead is your
composite video.
DisplayPort
Another port that can send and receive audio and video signals is DisplayPort, developed
by VESA (the Video Electronics Standards Association). The port is designed primarily to
output to display devices; a passive converter can be used to convert it to a single-link DVI
or HDMI port, while an active converter is required to convert to dual-link DVI. A mini
DisplayPort is also available on mobile devices.
DisplayPort is a royalty-free digital display interface that uses less power than other digital
interfaces and VGA. DisplayPort cables can extend 3 meters unless an active cable powers
the run, in which case the cable can extend to 33 meters.
The DisplayPort connector latches itself to the receptacle with two tiny hooks in the same
way that micro-B USB connectors do. The figure below shows an illustration of the
DisplayPort 20-pin interface. Note the keying of the connector in the bottom left of the
diagram.
Thunderbolt port
The full-size DisplayPort is being usurped by a smaller compatible version called
Thunderbolt, created in collaboration between Intel and Apple. Thunderbolt combines PCI
Express with the DisplayPort technology. The Thunderbolt cable is a powered active cable
extending as far as 3m and was designed to be less expensive than an active version of the
full-size DisplayPort cable of the same length.
Despite its diminutive size, the Thunderbolt port has 20 pins around its connector bar, like its
larger DisplayPort cousin. Of course, the functions of all the pins do not directly correspond
between the two interface types because Thunderbolt adds PCIe functionality. The port used
on Apple computers is the same connector as the mini DisplayPort. The figure below shows
a Thunderbolt port and cable.
Coaxial
Two main forms of coaxial cable are used to deliver video from a source to a monitor or
television. One of them is terminated by RCA or BNC plugs and tends to serve a single
frequency, while the other is terminated by F connectors, those seen in cable television
(CATV) settings, and tends to require tuning/demodulation equipment to choose the
frequency to display. The terms that refer to whether a single frequency or multiple
frequencies are carried over a cable are baseband and broadband, respectively. The figure
below shows an example of the F connector most commonly used in home and business
CATV installations. This is a 75-ohm form of coax known as RG-6.
Ethernet
With the capability of today’s data networks, both compressed and even uncompressed
audio and video can be digitized and sent over an IP network in packet form. The physical
and data-link connectivity is often implemented through devices that connect through a
standard Ethernet network. Care must be taken that this new application for the network
does not obstruct the normal flow of data or even the possibly recently added voice over IP
(VoIP) traffic. As with VoIP applications, quality of service (QoS) must be implemented and
supported throughout the data network or audio/video (A/V) quality will surely suffer.
Computer ports are interfaces that allow other devices to be connected to a computer. Their
appearance varies widely, depending on their function.
D-subminiature Connectors
This is a family of plugs and sockets widely used in communications and on earlier PCs. For
example, the analog VGA monitor interface uses a D-sub 15-pin plug and socket. Also called
"DB connectors" and "D-subs," they come in 9, 15, 25, 37 and 50 pin varieties. The D-sub
designation defines the physical structure of the connector, not the purpose of each line.
DB-25: The female DB-25 was widely used in the past for the printer port on a PC. The male
DB-25 was also the second serial port (COM2) on the PC when serial ports were popular. It
is still widely used for RS-232 communications devices.
DB-9 (DE-9): The male DB-9 connector (officially DE-9) was typically used for the first serial
port on earlier PCs (COM1) as well as other communications devices.
DB-15 (DA-15 and DE-15): Two DB-15 connectors are widely used. The larger, two-row
female DA-15 is the game port on a PC, and the smaller, three-row, female high-density DE-
15 is the VGA port.
SATA
Internal SATA storage devices have a 7-pin data connection and a 15-pin power connection.
Those connections sit next to one another on the SATA device.
SATA (Serial ATA) cables are used to connect high-speed SATA hard drives and optical
drives to the motherboard. SATA cables have only seven conductors and are therefore
much thinner than ribbon-type IDE cables, which improves airflow and makes them easier to
route inside the case. There are also eSATA cables that can be used to connect external
SATA drives to a computer.
SATA cables can be up to one meter in length and are more rugged than IDE cables, which
provides more flexibility in choosing where to mount hard drives. They are also capable of
very high data transfer rates, as high as 300 MB/sec.
SATA devices also use a special power cable. Some SATA drives introduced during the
initial transitional period also accepted a standard ATX power connector, but all new power
supplies now include SATA connectors.
SATA Cable
eSATA Ports
SATA (serial AT attachment) is used for connecting storage devices such as hard drives or
optical drives. A 7-pin non-powered eSATA (external SATA) port is used to connect external
storage devices to computers at a maximum of approximately 6.6 feet or 2 meters. An
eSATA port is commonly found on laptops to provide additional storage. If the internal hard
drive has crashed, an external drive connected to an eSATA or USB port could be used to
boot and troubleshoot the system.
A variation of the eSATA port is the eSATAp port, which is also known as eSATA/USB or
power over eSATA. This variation can accept eSATA or USB cables and provides power
when necessary. Below are a standard eSATA port and an eSATAp (eSATA/USB
combination) port.
PATA
Parallel ATA (PATA) uses either 40-pin or 80-pin connectors located on a ribbon cable
inside the case. The 80-pin version appeared with the advent of Ultra DMA. There are three
connectors on the cable: one that connects to the motherboard and two for devices. PATA is a drive
interface used by disk drives. The figure below shows a pair of PATA connectors on the
motherboard.
IDE/EIDE cables are used to connect older-style PATA hard drives and other PATA devices
to the computer's motherboard.
Traditionally, IDE cables were flat, gray, ribbon-type cables. Older (ATA-33) IDE cables
had 40 conductors and 40 pins. Newer ATA-133 EIDE cables have 80 conductors but still
have 40 pins. The colored stripe along one edge of the cable aligns with pin number one
on the device and motherboard connectors.
80-conductor EIDE cables have color-coded connectors:
The blue connector gets attached to the motherboard.
The black connector attaches to the master drive or device.
The gray connector attaches to the slave drive or device.
The drive positions on older, 40-conductor IDE cables can be determined by their relative
positions along the cable:
The off-center middle connector gets attached to the slave device.
The connector closest to the middle connector gets attached to the master device.
The connector farthest from the middle connector gets attached to the motherboard.
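The cabling rules above can be summarized as simple lookup tables; this is only an illustrative quick-reference sketch, not text from any ATA standard.

```python
# Quick-reference tables for the IDE/EIDE cable layouts described above.

# 80-conductor EIDE cables identify each connector by color:
EIDE_80 = {
    "blue":  "motherboard",
    "black": "master drive",
    "gray":  "slave drive",
}

# Older 40-conductor IDE cables identify each connector by position:
IDE_40 = {
    "connector farthest from the middle connector": "motherboard",
    "connector closest to the middle connector":    "master drive",
    "off-center middle connector":                  "slave drive",
}
```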
Floppy drive cables look a lot like IDE cables except that they are a little narrower, have only
34 conductors, and have a twist at the end of the cable that attaches to the drives. They may
have from two to five connectors: one to attach to the motherboard, and as many as four
drive connectors.
PS: With the aid of the trainer, the students should discuss the differences between the
PATA and SATA connection technologies.
USB Ports
USB stands for Universal Serial Bus. A USB port allows up to 127 connected devices to
transmit at speeds up to 5Gbps (5 billion bits per second). Devices that can connect to a
USB port include printers, scanners, mice, keyboards, joysticks, optical drives, tape drives,
game pads, cameras, modems, speakers, telephones, video phones, data gloves, and
digitizers. Additional ports can sometimes be found on the front of a PC case or on the side
of a mobile device.
USB ports and devices come in three versions—1.0/1.1, 2.0 (Hi-Speed), and 3.0
(SuperSpeed). USB 1.0 operates at speeds of 1.5Mbps and 12Mbps; version 2.0 operates
at speeds up to 480Mbps. Version 3.0 transmits data up to 5Gbps. The 3.0 USB port, which
still accepts older devices, is colored blue.
USB 3.0 is backward compatible with the older versions, which means that the cables from
any 1.0/2.0 device work with a 3.0 port. To achieve USB 3.0 speeds, however, a 3.0 device,
3.0 port, and 3.0 cable must be used. The version 1 and 2 cables used 4 wires. Version 3
cables use 9 wires. The figure below shows the different versions and speed symbols. Note
that the port is not required to be labeled, and sometimes looking at the technical
specifications for the computer or motherboard is the only way to determine port speed.
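To put the version numbers above in perspective, the sketch below compares theoretical transfer times at each signaling rate. These are raw bit rates only; real-world throughput is lower because of protocol overhead.

```python
# Theoretical transfer times at the raw USB signaling rates listed above.
# Actual throughput is lower due to protocol overhead; this only
# illustrates the relative speeds of the USB versions.

USB_RATES_BPS = {
    "USB 1.0 (low speed)":  1.5e6,   # 1.5 Mbps
    "USB 1.1 (full speed)": 12e6,    # 12 Mbps
    "USB 2.0 (Hi-Speed)":   480e6,   # 480 Mbps
    "USB 3.0 (SuperSpeed)": 5e9,     # 5 Gbps
}

def transfer_seconds(file_bytes, rate_bps):
    """Seconds to move file_bytes at a raw signaling rate in bits/second."""
    return (file_bytes * 8) / rate_bps

one_gb = 1_000_000_000  # a 1 GB file as the example payload
```

For example, moving a 1 GB file takes 1.6 seconds at the USB 3.0 rate but well over 10 seconds at the USB 2.0 rate, which is why the port color and version matter.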
USB ports are known as upstream ports and downstream ports. An upstream port is used
to connect to a computer or another hub. A USB device connects to a downstream port.
Downstream ports are commonly known as Type A and Type B. A standard USB cable has
a Type A male connector on one end and a Type B male connector on the other end. The
port on the computer is a Type A port. The Type A connector inserts into the Type A port.
The Type B connector attaches to the Type B port on the USB device.
mini-USB Ports
A smaller USB port used on small devices such as USB hubs, PDAs, digital cameras, and
phones is known as a mini-USB port. There are several types of smaller USB ports: mini-A,
mini-AB, micro-B, and micro-AB. The mini-/micro-AB ports accept either a mini-/micro-A or a
mini-/micro-B cable end. Below is the figure of the standard Type A USB port found on a PC
IEEE 1394 (FireWire) Ports
The IEEE 1394 standard is a serial technology developed by Apple Computer. Sometimes it
is known as FireWire (Apple), i.Link (Sony), or Lynx (Texas Instruments). IEEE 1394 ports
have been more predominant on Apple computers but are also seen on some PCs.
Windows and Apple operating systems support the IEEE 1394 standard. Many digital
products have an integrated IEEE 1394 port for connecting to a computer. IEEE 1394
devices include camcorders, cameras, printers, storage devices, video conferencing
cameras, optical players and drives, tape drives, film readers, speakers, and scanners.
IEEE 1394 has two data transfer modes—asynchronous and isochronous. The
asynchronous mode focuses on ensuring that data is delivered reliably. Isochronous
transfers allow guaranteed bandwidth (which is needed for audio/video transfers) but do
not provide for error correction or retransmission.
Speeds supported are 100, 200, 400, 800, 1200, 1600, and 3200Mbps. IEEE 1394 devices
commonly include the speed as part of their description or name; for example, a FireWire
800 device transfers at speeds up to 800Mbps. With FireWire, as many as 63 devices (using
cable lengths up to 14 feet) can be connected (daisy chained). The IEEE 1394 standard
supports hot swapping, plug-and-play, and powering of low-power devices.
An IEEE 1394 cable has 4, 6, or 9 pins. A 4-pin cable/connector does not provide power, so
the device must have its own power source. The 6- and 9-pin connectors do provide power.
A 6-pin connector is used on desktop computers and can provide power to the attached
IEEE 1394 device. A 9-pin connector is used to connect to 800Mbps devices that are also
known as IEEE 1394b devices. The figure below shows an IEEE 1394 port found on PCs, a
mini port found on mobile devices, and a 9-pin port found on 800Mbps IEEE 1394 devices.
FireWire ports
An IEEE 1394 device can connect to a port built into the motherboard, an IEEE 1394 port on
an adapter, another IEEE 1394 device, or a hub. A motherboard might have pins to connect
additional IEEE 1394 ports. IEEE 1394 does not require a PC to operate; two IEEE 1394
devices can communicate via a cable. The IEEE 1394 bus is actually a peer-to-peer
standard, meaning that a computer is not needed. Two IEEE 1394–compliant devices can
be connected (for example, a hard drive and a digital camera), and data transfer can occur
across the bus.
IEEE 1394c devices transmit at 800Mbps, but instead of using a 9-pin connector, they have
an RJ-45 connector, like an Ethernet port. The IEEE 1394d standard uses a fiber
connection. The table below provides a summary of the different IEEE 1394 standards.
Below are various USB and IEEE 1394 connectors. The two leftmost connectors are mini-B
and standard A USB connectors. The three connectors on the right are 6-, 4-, and 9-pin
IEEE 1394 cables.
Labs:
SCSI Ports
This is a special high-speed port that allows you to attach SCSI peripherals such as disk
drives and printers. Currently, SCSI interfaces, either on the motherboard or as add-on
cards, are found primarily in servers and are used for mass storage (hard disk and tape
backup), although you might encounter workstations or older PCs that use SCSI interfaces
for devices such as
• High-performance and high-capacity hard drives
• Image scanners
• Removable-media drives such as Iomega Zip and Jaz
• High-performance laser printers
• Optical drives
• Tape backups
SAS (Serial Attached SCSI) is a newer type of SCSI that transmits at much faster speeds
than parallel SCSI. Depending on the type of SCSI, you can daisy chain up to either 7 or 15
devices together. Some computers have a slot that supports a SCSI card.
When a SCSI host adapter card with internal and external connectors is used, the SCSI
daisy chain can extend through the card. The devices on each end of the chain are
terminated, and each device (including the host adapter) has a unique device ID number.
SCSI Standards
SCSI actually is the family name for a wide range of standards, which differ from each other
in the speed of devices, number of devices, and other technical details. The major SCSI
standards are listed below.
The standard markings used to identify SE, LVD, and LVD-SE SCSI devices are shown
below. LVD-SE devices can be used with either SE or LVD devices on the same daisy chain.
SCSI Cables
Just as no single SCSI standard exists, no single SCSI cabling standard exists. In addition to
the 50-pin versus 68-pin difference between standard and wide devices, differences also
appear in the Narrow SCSI external cables. Below is a comparison of internal SCSI cables
for wide and narrow applications, and the various types of external SCSI cables and ports.
Wide (68-pin) and narrow (50-pin, 25-pin) SCSI cable connectors (left) and
the corresponding SCSI port connectors (right)
PS/2 Ports
PS/2 ports were the previous standard before USB for keyboards and mice. They were no
faster than serial ports but were primarily created to help users by assigning a different
shape to the port used for the keyboard (and mouse). This new plug was created by IBM
when they launched their PS/2 line of computers in 1987, and while their concept of
proprietary hardware failed to survive, many of their design ideas (including the color-coded
PS/2 ports for mice and keyboards) eventually became the industry standard. The technical
term for the PS/2 keyboard interface is a Mini-DIN 6 plug (which replaced the previous
standard which was the DIN 5 plug, known as the AT connector). The Mini-DIN 6 plug for
the mouse replaced the 9 pin Serial port connector (see below).
Serial port and serial plug
Most computers manufactured prior to 2006 came with 2 PS/2 ports, a purple one for the
keyboard and a green one for the mouse. Computers built more recently typically no longer
have PS/2 ports on them, although some manufacturers are still supporting this standard, as
there is no loss of performance in using a PS/2 interface for keyboards and mice. If you need
more than one PS/2 port (i.e. for 2 PS/2 keyboards), you have to purchase a PS/2 splitter or
a KVM switch.
Parallel Ports
Parallel ports were used to connect older printers to computers. Some motherboards have a
small picture of a printer etched over the connector. Historically, the parallel port has been
among the most versatile of I/O ports in the system because it was also used by a variety of
devices, including tape backups, external CD-ROM and optical drives, scanners, and other
external peripherals.
Serial Ports
A serial port (also known as a COM port, an RS-232 port, or an asynchronous [async] port)
is a type of interface that connects a device by transmitting data one bit at a time. It can be a
9-pin male D-shell connector or a 25-pin male D-shell connector (on old computers). Serial
ports are used for external analog modems, printers, and some networking equipment.
Serial ports, like parallel ports, are obsolete, having been replaced by USB ports.
Figure 1 shows two types of serial port markings. Figure 2 shows a USB-to-serial port
converter you can use if a serial port is needed and only USB ports are available. You can
purchase converters to convert almost any other type of port to a USB port.
To adjust the configuration of a serial port built into the system’s motherboard, follow these
steps:
1. Start the BIOS setup program.
2. Go to the peripherals configuration screen.
3. Select the serial port you want to adjust.
4. To change the port’s configuration, choose the IRQ and I/O port address you want to
use or select Disabled to prevent the system from detecting and using the serial port.
If you don’t use serial ports, select Disabled.
A typical BIOS I/O device configuration screen with the first serial port enabled, the
second port disabled, and IR (infrared) support disabled.
5. Save changes and exit; the system reboots.
Because serial ports send data down one wire, bit by bit, they avoid the data-skew problem
associated with parallel ports, so serial cables can be longer than parallel cables, up to
around 50 ft at reasonable speeds. Serial ports are fairly electrically
robust, but if they are used with very long cables there is a risk that electrical spikes can find
their way through the cable, into the port and then blow the input/output driver chips. If this
happens it may be possible to replace the chips if they are socketed, otherwise a new I/O
card or motherboard may be required.
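The bit-by-bit framing described above can be quantified. Assuming the common 8N1 configuration (8 data bits, no parity, 1 stop bit, plus a start bit, an illustrative default rather than anything this manual specifies), each byte costs 10 bits on the wire:

```python
# Asynchronous serial framing overhead, as a sketch. With the common
# "8N1" setting (8 data bits, no parity, 1 stop bit), each byte costs
# 10 bits on the wire: 1 start bit + 8 data bits + 1 stop bit.

def effective_bytes_per_second(baud, data_bits=8, parity_bits=0, stop_bits=1):
    """Bytes/second actually delivered at a given baud rate (bits/second)."""
    bits_per_byte = 1 + data_bits + parity_bits + stop_bits  # +1 for the start bit
    return baud / bits_per_byte
```

So a classic 9600-baud link delivers only 960 bytes per second of actual data, which is why serial ports were reserved for low-bandwidth devices such as mice and modems.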
As well as data transmission lines, serial ports have control signal lines that can manage
(stop-start) data flow and signal certain events to the host device (PC). The serial port on a
PC is based on a data communication standard known as EIA RS-232-C, usually
abbreviated as just RS-232. This standard defines an asynchronous communication port.
Most PCs use a male DB9 9-pin connector for at least one of their ports – these 9-pin
connectors support the major data and handshake (data flow control) lines needed for mice
and modems but do not include the ‘advanced’ signals required for some more specialist
devices.
The EIA RS232-C standard attempts to make serial port wiring simpler to understand by
splitting equipment with serial ports into two categories:
DTE (Data Terminal Equipment)
DCE (Data Circuit-Terminating Equipment)
DTE (Data Terminal Equipment)
A PC is classed as DTE equipment, as are most data terminals and other devices, which
interact with users. DTE is an end instrument that converts user information into signals
or reconverts received signals. These can also be called tail circuits. A DTE device
communicates with a DCE device, such as a modem, which terminates the circuit.
Audio Ports
A sound card converts digital computer signals to sound and sound to digital computer
signals. A sound card is sometimes called an audio card and can be integrated into the
motherboard or be on an adapter that contains several ports. The most common ports
include a microphone input, a line-in port for an MP3 player or other audio device, one or
more speaker ports, a headphone port, and S/PDIF (Sony/Philips Digital Interface) in/out
ports, which are
used to connect to various devices, such as digital audio tape players/recorders, DVD
players/recorders, and external disc players/recorders. There are two main types of S/PDIF
connectors: an RCA jack used to connect a coaxial cable and a fiber-optic port for a
TOSLINK cable connection.
Sound cards are popular because people want better sound quality than what is available
integrated into a motherboard.
To avoid confusion, most recent systems and sound cards use the PC99 color coding listed
as follows:
• Pink—Microphone in
• Light blue—Line in
• Lime green—Stereo/headphone out
• Brown—Left-to-right speaker
• Orange—Subwoofer
Audio Cables
There are two main types of audio cable we will look at: Single core / shielded (unbalanced)
and One pair / shielded (balanced).
RJ Ports
Registered jack (RJ) connectors are most often used in telecommunications. The two most
common examples of RJ ports are RJ-11 and RJ-45. RJ-11 connectors are used most often
on flat satin cables in telephone hookups; your home phone jack is probably an RJ-11 jack.
The ports in older external and internal analog modems are RJ-11.
RJ-45 connectors, on the other hand, are larger and most commonly found on Ethernet
networks that use twisted-pair cabling. Your Ethernet NIC likely has an RJ-45 jack on it.
Although RJ-45 is a widely accepted description for the larger connectors, it is not correct.
Generically speaking, Ethernet interfaces are 8-pin modular connectors, or 8P8C
connectors, meaning there are eight pin positions, and all eight of them are connected, or
used. RJ-45 specifies the physical appearance of the connector and also how the contacts
are wired from one end to the other. Surprisingly, the RJ-45 specification does not match the
TIA T568A and T568B wiring standards used in data communications.
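The T568A/T568B distinction mentioned above is easier to see with the actual pin assignments. The colors below follow the TIA T568B standard (the pinouts themselves are not given in this manual, so treat this as a study aid); T568A differs only in swapping the orange and green pairs.

```python
# TIA T568B pin-to-wire-color assignments for an 8P8C ("RJ-45") plug.
# T568A is identical except that the orange and green pairs swap
# positions; a straight-through Ethernet cable uses the same standard
# on both ends.

T568B = {
    1: "white/orange", 2: "orange",
    3: "white/green",  4: "blue",
    5: "white/blue",   6: "green",
    7: "white/brown",  8: "brown",
}

def t568a_from_b(t568b):
    """Derive the T568A layout by swapping the orange and green pairs."""
    swap = {"orange": "green", "green": "orange"}

    def recolor(color):
        base = color.split("/")[-1]          # solid color of the wire
        return color.replace(base, swap[base]) if base in swap else color

    return {pin: recolor(color) for pin, color in t568b.items()}
```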
The figure below shows an RJ-11 jack on the left and an RJ-45 jack on the right. Notice the
size difference. As you can see, RJ connectors are typically square with multiple gold
contacts on the flat side. A small locking tab on the other side prevents the connector and
cable from falling or being pulled out of the jack casually.
Network Ports
Network ports are used to connect a computer to other computers, including a network
server. The most common type of network port is an Ethernet port. A network cable inserts
into the Ethernet port to connect the computing device to the wired network. A network port
or an adapter that has a network port is commonly called a NIC (network interface
card/controller).
Ethernet adapters commonly contain an RJ-45 port that looks like an RJ-11 phone jack, but
the RJ-45 connector has 8 conductors instead of 4. UTP (unshielded twisted pair) cable
connects to the RJ-45 port so the computing device can be connected to a wired network.
An RJ-45 Ethernet port can also be found on external storage devices. A storage device
could be cabled to the wired network in the same fashion as the PC. Below is an Ethernet
NIC with an RJ-45 port.
Modem Ports
A modem connects a computer to a phone line. A modem can be internal or external. An
internal modem is an adapter that has one or two RJ-11 connectors. An external modem is a
separate device that sits outside the computer and connects to a 9-pin serial port or a USB
port. The external modem can also have one or two RJ-11 connectors. The RJ-11
connectors look like typical phone jacks. With two RJ-11 connectors, one can be used for a
telephone and the other has a cable that connects to the wall jack. The RJ-11 connector
labeled Line is for the connection to the wall jack. The RJ-11 connector labeled Phone is for
the connection to the phone. An internal modem with only one RJ-11 connector connects to
the wall jack.
Bluetooth
Bluetooth technology uses radio waves to transmit data between devices. Bluetooth devices
have to be within 33 feet of each other. Many computers, peripherals, smartphones, PDAs,
cars, and other consumer electronics are Bluetooth enabled, meaning that they contain a
small chip that allows them to communicate with other Bluetooth-enabled computers and
devices.
If you have a computer that is not Bluetooth-enabled, you can purchase a Bluetooth wireless
port adaptor that will convert an existing USB port into a Bluetooth port. Also available are
Bluetooth PC Cards and ExpressCard modules for traditional notebook computers and
Tablet PCs, and Bluetooth cards for smartphones and PDAs.
IrDA Port
Some devices can transmit data via infrared light waves. For these wireless devices to
transmit signals to a computer, both the computer and the device must have an IrDA port.
These ports conform to the standards by the IrDA (Infrared Data Association).
To ensure nothing obstructs the path of the infrared wave, you must align the IrDA port on
the device with the IrDA port on the computer, similar to the way you operate a television
remote control. Devices that use IrDA ports include smartphones, PDAs, keyboards, mice,
and printers. Several of these devices use a high-speed IrDA port, sometimes called a fast
infrared port.
Exam Essentials
Identify the characteristics of connector types found on most PCs. These include but
are not limited to USB, SATA, eSATA, FireWire, serial, parallel, VGA, HDMI, DVI, audio,
RJ-45, and RJ-11.
Describe the difference in the operation of VGA and HDMI transmission. VGA
connections and cables are analog in nature whereas HDMI is an interface for transmitting
digital data.
List the speed, distance, and frequency of wireless connections. For device
connections, Bluetooth offers up to 2.1 Mbps at about 35 feet, and infrared transmits at 4
Mbps at 5 meters. For networking, 802.11b operates at 11 Mbps, 802.11g at 54 Mbps, and
802.11n at up to 600 Mbps. The 802.11a maximum indoor range is 100 meters, or 300
feet, and maximum outdoor range is 350 meters, or 1,200 feet. The 802.11b maximum
indoor range is 150 meters, or 492 feet, and the maximum outdoor range is 500 meters, or
1,640 feet. The 802.11g maximum indoor range is 150 meters, or 492 feet, and the
maximum outdoor range is 500 meters, or 1,640 feet.
5. When considering VGA, HDMI, RGB/component, DVI, and DisplayPort, which video
port can output both digital audio and video signals and is the most
technologically advanced?
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
STORAGE MEDIA
Objectives
Introduction
Drive Interface Types
Magnetic Storage Media
Hard Disk Drives
RAID
Tape Drives
Floppy Disk Drives
Optical Disk Drives
High Capacity Optical Disks
Solid State Drives
Labs
Introduction
The device that actually holds the data is known as the storage medium. The device that
saves data onto the storage medium, or reads data from it, is known as the storage device.
The function of all storage devices is to hold, or store, information, even when the
computer’s power is off. Unlike information in RAM, files kept on a storage device remain
there unless the user or the computer’s operating system removes or alters them.
Many types of storage devices are available, including floppy drives, hard drives, optical (CD
and DVD) drives, tape drives, USB and IEEE 1394 external hard drives, and solid-state
devices. Note that this list includes both removable and fixed (non-removable) devices.
Sometimes the storage medium is a fixed (permanent) part of the storage device, e.g. the
magnetic coated discs built into a hard drive. Sometimes the storage medium is removable
from the device, e.g. a CD-ROM can be taken out of a CD drive.
Many types of mass storage devices are available, including those that store data on
magnetic media, devices that use optical technologies, and devices based on solid-state
technology.
Drive interfaces can be divided into two categories: external and internal. The CompTIA A+
Certification exam classifies network interfaces in the external drive interfaces category.
External drive interfaces, such as eSATA and FireWire (IEEE 1394), are used primarily by
hard disk drives. USB 2.0 and USB 3.0 interfaces can be used by hard disk drives as well as
optical drives, flash memory card readers, and flash memory thumb drives.
SCSI can be used by hard disk drives, RAID arrays, tape backups, and other devices for
both internal and external devices.
The floppy interface, when present, is currently used only by floppy drives, although low-
capacity tape backup drives have also used it in the past. Many motherboards now omit
floppy interfaces.
Floppy
Interface
A typical late-model motherboard with one PATA interface, one floppy interface,
and six SATA interfaces.
A high-performance ATX motherboard’s I/O ports, complete with legacy PS/2 mouse
/keyboard combo port, six USB 2.0, two USB 3.0, one IEEE 1394, two Ethernet, two
eSATA, and audio ports.
To eject a USB or FireWire drive safely, open the Safely Remove Hardware dialog available
from the notification area in Windows and follow the prompts to select and eject the drive.
Windows optimizes flash memory cards and USB flash memory drives for quick removal. To
safely remove a flash memory card from a card reader, close any Explorer window
displaying the card’s contents or navigate to a different folder. Check the status light to make
sure the card is not being accessed, and then remove the card.
The three main types of magnetic storage are hard disk drives, floppy disk drives, and tape
drives. By far the most common is the hard disk drive; this is where the operating system is
normally stored. Users also keep frequently accessed data on the hard drive, such as Word
documents, music, pictures, and so on.
Most of today’s computers no longer include floppy drives, due to their small capacity.
However, in special cases users might still need them, perhaps to access older data
and programs. Technicians also use floppy drives to boot systems with special
startup and analysis disks. Tape drives are used for archival storage: the long-term backup
of data that is not accessed often. Because floppy drives and tape drives are far less
common, let’s begin with hard drives.
Magnetic storage media and devices store data in the form of tiny magnetised dots. These
dots are created, read and erased using magnetic fields created by very tiny
electromagnets.
In the case of magnetic tape the dots are arranged along the length of a long plastic strip
which has been coated with a magnetisable layer (audio and video tapes use a similar
technology).
Hard disk drives (HDDs) are the most common of magnetic media. They are nonvolatile,
which means that information stored on them is not lost when the computer is turned off.
They are not as fast as RAM but are faster than most other storage media available; this
makes them a good choice for storing permanent data that is accessed frequently.
This hard drive, like all internal hard drives, has a data connector and power connector. On
this particular drive, the data connector attaches to the motherboard (or expansion card) by
way of a 7-pin cable. The power connector attaches to the power supply by way of a 15-pin
power cable. Regardless of the type of hard drive, always make sure that the data and
power cables are firmly connected to it.
PATA
PATA (Parallel ATA) hard drives come in at a maximum data throughput of 133 MB/s. PATA
was the traditional hard drive and the standard for many years but has been all but phased
out by SATA hard drives on most new computers; however, it is still on the CompTIA A+
objectives. PATA hard drives are often referred to as Ultra ATA drives and sometimes as
IDE drives. They transfer data in parallel, for example 16 bits (2 bytes) at a time.
Internal PATA hard drives use the Integrated Drive Electronics (IDE) interface to transmit
data to and from the motherboard. Every IDE port on a motherboard can have up to two
drives connected to it. For a long time, motherboards were equipped with two IDE ports,
allowing a maximum of four IDE devices. However, newer motherboards often come with
only one, limiting you to two IDE devices.
The IDE ports on the motherboard and the hard drive manifest themselves as 40-pin
connectors to which you can connect either a 40 or 80-wire ribbon cable. Newer IDE cables
are all 80-wire; however, they look identical to the older 40-wire versions, except for the blue
connector on one end that you find on many 80-wire cables. The cable has three
connectors, one for the controller (often blue), one for the master drive (often black), and a
connector in the middle of the cable for the slave drive (often gray). We talk more about
master/slave configurations in a little bit. The IDE port on the hard drive is keyed for easy
orientation when connecting the ribbon cable.
PATA hard drives accept a 4-pin Molex power connector from the computer’s power supply.
The Molex connector is keyed so that it is easier to orient when connecting to the hard drive.
This power cable has four wires: Red (5 V), Black (ground), Black (ground), and Yellow (12
V).
There is a jumper block in between the power and data connectors. This enables you to
select the configuration of the hard drive. There are usually four options, which are often
labeled on the drive:
Single: In a single drive configuration, no jumper shunt is needed. If you want, you can
connect the jumper horizontally across two pins. Although this does not configure the
drive in any way, it keeps the jumper handy for future use.
Master: Each of the motherboard’s IDE connections supports two drives. In a two-
drive configuration on a single IDE cable, one must be set to drive 0 (master), and one
must be set to drive 1 (slave). To set a drive to master, connect the jumper vertically to
the correct pair of pins (for example, Western Digital drives use the center location, as
illustrated in the jumper configuration block figure above), and connect the black end
connector of the IDE ribbon cable to the hard drive. The master hard drive is normally
where the operating system would go.
Slave: To set a drive to slave, connect the jumper vertically to the correct pair of pins (for
example, Western Digital drives use the second position from the right), and connect the
gray, middle connector of the IDE cable to the hard drive. The slave hard drive is where
the bulk of the data would usually be stored.
Cable Select: This drive mode automatically configures the drive as master or slave
according to where you connect it to the IDE cable. This might be marked on the drive as
CS.
SATA
Coming in at a whopping maximum data rate of 600 MB/s is SATA (Serial ATA). These
drives are the most common hard drives in use today. A Serial ATA drive transmits serial
streams of data (one bit at a time) at high-speed over two pairs of conductors and can do so
in full duplex, meaning it can send and receive simultaneously. Because SATA and PATA do
not interfere with each other, you can run both simultaneously; however, by default one will
not mate to the other; they are not compatible without a converter board or adapter.
For power, the SATA drive utilizes a 15-pin power connector, as shown below. The hard
drive’s power connector has a vertical tab at the right side, making for easier orientation
when connecting the power cable. Power supplies send 3.3 V, 5 V, and 12 V to the SATA
drive via orange, red, and yellow cables respectively. Your power supply must be equipped
with this power cable to support SATA drives; otherwise, you need an IDE 4-pin to SATA 15-
pin power adapter. Because IDE doesn’t use the orange 3.3 V wire, this is omitted in IDE-
SATA power adapters.
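The manual itself contains no code, but the interface rates above lend themselves to a quick illustrative Python sketch (the function name is my own, and the model is idealized: it ignores seek time, protocol overhead, and the drive's own media speed, which in practice keep drives well below the interface maximum):

```python
def transfer_seconds(size_mb, rate_mb_per_s):
    """Idealized time to move size_mb of data at a sustained rate.

    Real drives never sustain the interface maximum; this ignores
    seek time, protocol overhead, and media speed.
    """
    return size_mb / rate_mb_per_s

# Moving a 1,200 MB file at the interface maximums quoted in the text:
pata_time = transfer_seconds(1200, 133)  # PATA (Ultra ATA/133)
sata_time = transfer_seconds(1200, 600)  # SATA at 600 MB/s
```

Even this back-of-the-envelope model shows why SATA's higher ceiling matters for large sequential transfers.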
SATA Standards
SCSI
Small Computer System Interface (SCSI) hard drives are often used in servers and power
workstations that need high data throughput. You can identify a SCSI drive by the different
(and usually louder) sound it makes compared to ATA drives. SCSI standards describe the
devices, controllers, cables, and protocols used to send data. Part of the beauty of SCSI is
that you can have up to 16 devices including the controller. They can be internal, external, or
both. For the longest time SCSI was a parallel technology, but of late serial versions such as
Serial Attached SCSI (SAS) have emerged.
RAID
RAID is a method for creating a faster or safer single logical hard disk drive from two or more
physical drives. RAID (redundant array of independent disks; originally redundant array of
inexpensive disks) is a way of storing the same data in different places (thus, redundantly)
on multiple hard disks. By placing data on multiple disks, I/O (input/output) operations can
overlap in a balanced way, improving performance. Although adding disks lowers the mean
time between failures (MTBF) of the array as a whole, storing data redundantly increases fault
tolerance.
A RAID appears to the operating system to be a single logical hard disk. RAID employs the
technique of disk striping, which involves partitioning each drive's storage space into units
ranging from a sector (512 bytes) up to several megabytes. The stripes of all the disks are
interleaved and addressed in order.
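To make the interleaving concrete, here is a hedged Python sketch (the names are hypothetical, not taken from any real RAID implementation) of how striping maps data blocks onto disks:

```python
def stripe_layout(num_blocks, num_disks):
    """Assign each data block a (disk, stripe) position.

    Blocks are interleaved across the disks in order, which is the
    essence of the striping technique described above.
    """
    return [(block % num_disks, block // num_disks)
            for block in range(num_blocks)]

# Four blocks across two disks alternate: disk 0, disk 1, disk 0, disk 1.
layout = stripe_layout(4, 2)
```

Because consecutive blocks land on different disks, the controller can read or write several of them at once.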
RAID 0
This is the lowest level of RAID and actually does not offer any form of redundancy,
which is why it is referred to as level 0. Essentially, RAID 0 takes two or more drives and puts
them together to fashion a larger capacity drive. This is achieved through a process called
striping. Data blocks are broken up into data chunks and then written in order across the
drives. This offers increased performance because the data can be written simultaneously to
the drives by the controller effectively multiplying the speed of the drives.
In order for RAID 0 to work effectively for boosting the performance of the system, you need
to try and have matched drives. Each drive should have exactly the same storage capacity
and performance traits. If they do not, then the capacity will be limited to a multiple of the
smallest of the drives and performance to the slowest of the drives as it must wait for all the
stripes to be written before moving to the next set.
The biggest problem with a RAID 0 setup is data security. Since you have multiple drives, the
chances of data corruption increase because you have more points of failure. If any drive
in a RAID 0 array fails, all of the data becomes inaccessible. Hence, don’t use striping for
data drives.
RAID 0 solutions are cheap, and RAID 0 uses all the disk capacity. If the RAID 0 controller
fails, you can recover the array relatively easily using RAID recovery software. However, you
should keep in mind that if a disk failure happens, data is lost irreversibly.
RAID 1
RAID 1 mirrors data: the same data is written to each drive in a pair, so if one drive fails, the
other still holds a complete copy. You may have as many pairs of mirrored drives as your
RAID controller allows. And in the unlikely event that said consumer-grade controller
supports duplex reading, RAID 1 can provide an increase in read speeds by fetching blocks
alternately from each drive.
RAID 5
RAID 5 uses approximately one-third of the available disk capacity for parity information
when the minimum of three disks is used, and requires at least three disks to implement.
Since data is read from multiple disks, read performance can improve under RAID 5, though
some users report that RAID 5 slows performance greatly when it is processing multiple
writes in a server situation. RAID 5 is suitable for use with program and data drives.
If RAID5 controller fails, you can still recover data from the array with RAID 5 recovery
software. Unlike RAID0, RAID5 is redundant and it can survive one member disk failure.
RAID 10
RAID 10 combines mirroring and striping: drives are grouped into mirrored pairs (as in RAID
1), and data is striped across those pairs (as in RAID 0). Read speed of an N-drive RAID 10
array is N times faster than that of a single drive. Each drive can read its block of data
independently, the same as in a RAID 0 of N disks. Writes are two times slower than reads,
because both copies have to be updated. As far as writes are concerned, a RAID 10 of N
disks is the same as a RAID 0 of N/2 disks.
Half the array capacity is used to maintain fault tolerance. In RAID10, the overhead
increases with the number of disks, contrary to RAID levels 5 and 6, where the overhead is
the same for any number of disks. This makes RAID10 the most expensive RAID type when
scaled to large capacity.
If there is a controller failure in a RAID10, any subset of the drives forming a complete
RAID0 can be recovered in the same way the RAID0 is recovered.
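The capacity trade-offs among the RAID levels above can be summarized in a short Python sketch (a simplified model with a hypothetical function name; it assumes identical disks, as the matched-drive advice earlier recommends):

```python
def usable_capacity_gb(level, disk_count, disk_size_gb):
    """Usable capacity of a RAID array built from identical disks.

    RAID 0 stripes with no redundancy, RAID 1 and RAID 10 mirror
    (half the space is overhead), and RAID 5 gives up one disk's
    worth of space to parity.
    """
    if level == 0:
        return disk_count * disk_size_gb
    if level in (1, 10):
        return disk_count * disk_size_gb // 2
    if level == 5:
        return (disk_count - 1) * disk_size_gb
    raise ValueError("unsupported RAID level")

# Three 250 GB disks in RAID 5 yield a single 500 GB logical drive,
# matching the array configured in the lab later in this chapter.
```

Note how RAID 10's overhead stays at half the drives no matter how many you add, while RAID 5's parity overhead shrinks as a fraction of the array.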
Tape Drives
Although tape drives are primarily used by servers rather than desktops, they are also
considered to be removable storage devices. A tape drive is a data storage device used for
backing up data from computers. It uses special removable magnetic data tapes. People
often choose tape drives for data backup because the media is relatively inexpensive and
long lasting for archiving data. Because data must be stored sequentially on tape, it has a
very slow seek time for accessing individual files. However, newer tape drives can write data
to tape at transfer rates that compare well to hard drive speeds.
While tape drives are available in the consumer market, they most often back up large
server systems, in which case specialized equipment can be used to combine tape drives
with auto-loaders that automatically select tapes for use and store filled tapes in tape
libraries. High-end tapes can store hundreds of gigabytes per tape.
Tape drives use various types of magnetic tape. Their capacities are typically listed in two
ways:
Native (uncompressed) capacity
Compressed capacity, assuming 2:1 compression
For example, a tape drive with a 70GB native capacity would also be described as having a
140GB compressed capacity. However, keep in mind that, depending on the data being
backed up, you might be able to store more or (more typically) less than the listed capacity
when using data compression.
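The native-versus-compressed arithmetic above is simple enough to express directly (an illustrative sketch; the function name is my own, and real compression ratios depend entirely on the data):

```python
def compressed_capacity_gb(native_gb, ratio=2.0):
    """Marketing "compressed" capacity of a tape, assuming the stated
    compression ratio (typically 2:1). Actual results vary with the data
    being backed up and are often lower than the listed figure.
    """
    return native_gb * ratio

# A 70 GB native tape is marketed as a 140 GB compressed tape.
```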
Most tape drives connect via SCSI interfaces, but some internal drives also connect via
PATA or SATA interfaces, and some external tape drives use USB or FireWire interfaces.
Most tape drives are also available in autoloader or tape library forms to permit backup
automation, enabling unattended backup of drives that require multiple tapes.
A+ Exam Tip: The A+ 220-801 exam expects you to know how to install a tape drive and
how to select the right tapes for the drive.
Floppy Drives
Floppy drives use flexible magnetic media protected by a rigid plastic case and a retractable
shutter. There have been three different types of 3.5-inch floppy disk media used over time,
although only the 1.44MB floppy disk is used currently. The figure below compares the
capacities and distinguishing marks of each disk type. Note that all 3.5-inch disks use the
write-enable slider as shown in the figure.
Floppy disk drives (FDD) are not nearly as common as they were 10 years ago; in fact, most
computers don’t come with a floppy drive. However, some businesses might need them to
access older data or programs, and technicians use them to start up a computer with special
boot disks.
A 3.5-inch floppy disk drive, also referred to as a FDD, reads data from a removable floppy
disk and provides a good method for transferring data from one machine to another.
A floppy disk contains a thin internal plastic disk, capable of holding magnetic charges. A
hard plastic protective casing, part of which retracts to reveal the storage medium inside,
surrounds the disk. A thin layer of magnetic materials coats the disk. The back of the disk
has a coin-sized metal circle used by the drive to grasp the disk and spin it.
Inserting a floppy disk into a computer’s drive causes the drive to spin the internal disk and
retract the protective cover. An articulated arm moves the drive’s two read/write heads back
and forth along the exposed area, reading data from and writing data to the disk. Each head
reads and writes to one side of the disk.
Floppy drives are fast disappearing from PCs. Most new consumer models do not have built-
in floppy drives. If you need one, consider buying an inexpensive external USB floppy drive,
like that pictured below.
3.5-inch floppy disks can hold 1.44 MB (high density) or 2.88 MB (extra high density) of
information. The most commonly used is the 1.44 MB disk. Floppy drives are limited in the
types of disks they can access. A 1.44 MB drive can access either a 1.44 MB disk or the
older double-density format, which had a capacity of only 720 KB per disk. A 2.88 MB floppy
drive can read all three 3.5-inch disk densities.
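The drive/disk compatibility rules just described can be captured as a small Python lookup (purely illustrative; the names are hypothetical):

```python
# Which 3.5-inch disk densities each drive type can read, per the text.
READABLE = {
    "1.44MB": {"720KB", "1.44MB"},            # HD drive: DD and HD disks
    "2.88MB": {"720KB", "1.44MB", "2.88MB"},  # ED drive: all three densities
}

def drive_can_read(drive, disk):
    """True if the given drive type can access the given disk density."""
    return disk in READABLE.get(drive, set())
```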
A+ Exam Tip: The A+ 220-801 exam expects you to be able to install and configure a
floppy disk drive (FDD).
Zip Disc
A zip disc is a removable and portable storage medium, similar in appearance to a floppy
disk, but with a much higher capacity (100MB, 250MB or 750MB). Zip discs are random
access devices which were used for data back-up or moving large files between computers.
Another obsolete storage device, zip discs were a popular replacement for floppy discs for a
few years, but they never caught on fully before being superseded by cheaper media like
CD-ROMs and CD-Rs.
Jaz Disc
Jaz discs are removable and portable storage media based on hard-drive technology, with a
large capacity (1GB or 2GB). Jaz discs are random access devices which were used for
data back-up or moving large files between computers. Discs were expensive to buy and
not very reliable. Like the Zip disc, this system never really caught on and was superseded
by far cheaper and more reliable technology.
An optical disk drive (ODD) uses a laser light to read data from or write data to an optical
disc. These include CDs, DVDs and Blu-ray discs (BDs). This allows you to play music or
watch movies using pre-recorded discs. Computer software also often comes on one of
these discs so you need an optical drive to install software. Most modern drives allow you to
both read from and write to discs. Optical discs come in different types, and the type defines
what you can do with a given disc or drive.
ROM: Read-Only Memory. You cannot write to a -ROM disc, which left the factory
with data already on it. A -ROM drive can read discs but not write to them, and has
no use at all for a blank disc.
R: Recordable. You can write to one of these discs once (provided you have an -R
drive). But when you're done, it's effectively a -ROM disc.
RW: Rewritable. Despite what the acronym might suggest, it does not mean "read and
write." You can write to these discs, erase them, and write to them again.
RE: Recordable Erasable. The Blu-ray variation of -RW, with a far more sensible
acronym.
The CD (compact disc) discs sold at retail containing music (audio CDs) or software (data
CDs) are CD-ROM (Compact Disc–Read-Only Memory) discs, meaning that they are only
readable; you cannot change the contents. There are other CD media and drive types, a few
differences in capacities, and as the technology has matured, the drives have become
faster.
CD Drive Speeds
The first CD drives transferred data at 150 KBps, a speed now called “1x.” CD drives are
now rated at speeds that are multiples of this speed and have progressed through 72x,
which is 10,800 KBps. The appropriate name, such as CD-ROM, CD-R, and CD-RW, will
describe a CD drive, and it may be followed by the speed rating, which may be a single
number, in the case of a CD-ROM, or, in the case of a CD-RW, will be three numbers such
as 52x24x16x. In this case, the drive is rated at 52x for reads, 24x for writes, and 16x for
rewrites.
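The speed-multiplier arithmetic above can be sketched in a few lines of Python (illustrative only; the parsing function follows the read/write/rewrite order the text gives, and its name is my own):

```python
BASE_KBPS = 150  # the original "1x" CD transfer rate

def cd_rate_kbps(multiplier):
    """Transfer rate in KBps for an 'Nx' CD drive rating."""
    return multiplier * BASE_KBPS

def parse_rw_rating(rating):
    """Split a CD-RW rating such as '52x24x16x' into its three
    multipliers, in the read/write/rewrite order used in the text.
    """
    read, write, rewrite = (int(p) for p in rating.rstrip("x").split("x"))
    return read, write, rewrite

# A 72x drive moves 72 * 150 = 10,800 KBps, as stated above.
```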
Video storage uses digital versatile discs (DVDs) extensively, and they have grown in use for
data storage.
In addition, the format on each side may be single-layer (SL) or dual-layer (DL). While the SL
format is like that of CDs in that there is only a single layer of pits, DL format uses two pitted
layers on each data side; each layer having a different reflectivity index.
The DVD package label shows the various combined features as DVD-5, DVD-9, DVD-10,
and DVD-18. Therefore, when purchasing DVD discs, the labeling is very important for
understanding the capacity of the discs. The table below shows DVD capacities based on the
number of data sides and the number of layers per side.

Label    Data sides  Layers per side  Capacity
DVD-5    1           1 (SL)           4.7 GB
DVD-9    1           2 (DL)           8.5 GB
DVD-10   2           1 (SL)           9.4 GB
DVD-18   2           2 (DL)           17.1 GB (about 15.9 GiB)
When it comes to selecting DVD discs based on the ability to write to them, you have a
selection equivalent to CD discs. The DVD discs sold at retail containing video or software
are DVD-ROM discs, meaning that they are only readable; you cannot change the contents.
DVD-ROM has a maximum capacity of 15.9 GB of data. Similarly, some DVD drives in PCs
can only read DVDs.
There are six standards of recordable DVD media. The use of the “dash” (-) or “plus” (+) has
special significance. The dash, used in the first DVD recordable format and seen as DVD-R
and DVD-RW, indicates an older standard than those with the plus. DVD-R and DVD-RW
are generally compatible with older players.
DVD-R and DVD+R media are writable much like CD-Rs. DVD-RAM, DVD-RW, and
DVD+RW are writable and rewritable much like CD-RW. When shopping for a DVD drive or
a PC that includes a DVD drive, you will see the previously described types combined as
DVD+R/RW, DVD-R/RW, and simply DVD-RAM. Drives will also be labeled with the
combined + and -, (DVD±R/RW) showing that all types of DVD discs can be used.
For instance: “DVD +R/RW 40x24x40x” indicates the three speeds of this drive for reading,
writing, and rewriting because each drive has a different potential speed for each type of
operation.
However, you need to read the manufacturer’s documentation to know the order. As a
general rule on DVD drives, reads are fastest, writes may be as fast, or a bit slower than
reads, and rewrites are the slowest operation.
DVD Drive Speeds and Data Transfer Rates Compared to CD Drive Speeds.
Blu-ray
As of 2011, Blu-ray drives are the latest optical drives available in the commercial market.
Sony was one of the founding proponents of developing Blu-ray technology during the early
2000s. Blu-ray drives are typically reserved for devices with high-definition display
capabilities, including high-end computers and the PlayStation 3 video game console. Blu-
ray drives and discs can process extremely large amounts of data: dual-layer Blu-ray discs
can hold more than 50 gigabytes of data. Blu-ray media also come in a mini-disc format
(8 cm in diameter, 4 cm smaller than a standard 12 cm disc) that holds correspondingly less
data.
The 'Blu' part of Blu-Ray refers to the fact that the laser used to read the disc uses blue light
instead of red light. Blue light has a shorter wavelength than red light (used with CDs and
DVDs). Using a blue laser allows more data to be packed closer together on a Blu-Ray disc,
which is what gives the format its much higher capacity.
HD DVD
High-definition DVD (HD-DVD) discs can hold around 15GB of data (a dual-layer HD-DVD can
hold twice that). HD-DVDs are random-access devices. HD-DVD discs are used in the
same way as DVD-ROMs but, since they can hold more data, they are also used to store
very high-quality, high-definition (HD) video. The HD-DVD format was launched at the
same time as Blu-Ray. For about a year they competed to be the 'next DVD'. For various
reasons, Blu-Ray won the fight, and the HD-DVD format has been abandoned.
The term ‘solid-state’ essentially means ‘no moving parts’. Solid-state storage devices are
based on electronic circuits with no moving parts (no reels of tape, no spinning discs, no
laser beams, etc.). Solid-state storage devices store data using a special type of memory
called flash memory.
Flash Memory
You might wonder why, since flash memory is non-volatile, normal computers don’t use it
instead of RAM. If they did, we would have computers that you could turn off and turn back
on again without losing any data. The reason is speed: saving data to flash memory is very
slow compared with saving it to RAM.
However, some portable computers are starting to use flash memory (in the form of solid-
state ‘discs’) as a replacement for hard drives. No moving parts means less to go wrong and
longer battery life.
USB flash drives are typically removable and rewritable, much smaller than a floppy disk.
Storage capacities typically range from 64 MB to 64 GB. USB flash drives offer potential
advantages over other portable storage devices, particularly the floppy disk.
They have a more compact shape, operate faster, hold much more data, have a more
durable design, and operate more reliably due to their lack of moving parts. Flash drives are
widely used to transport files and backup data from computer to computer.
A memory card or flash memory card is a solid-state electronic flash memory data storage
device used with digital cameras, handheld and mobile computers, telephones, music
players, video game consoles, and other electronics. Nowadays, most new PCs have built-in
slots for a variety of memory cards; Memory Stick, CompactFlash, SD, etc. Some digital
gadgets support more than one memory card to ensure compatibility.
Mobile phones contain a Subscriber Identity Module (SIM) card that contains the phone’s
number, the phonebook numbers, text messages, etc.
Smart Cards
Many credit cards (e.g. ‘chip-and-pin’ cards), door entry cards, satellite TV cards, etc. have
replaced the very limited storage of the magnetic strip (the dark strip on the back of older
cards) with flash memory. This is more reliable and has a much larger storage capacity.
Cards with flash memory are called smart cards.
Hot-swappable drives can be connected or disconnected without shutting down the system.
Some hard drive systems are hot swappable, depending, in large part, on the interface. Until
USB and FireWire were developed, hot-swappable drives were specially designed drive
systems used on servers—often with their own separate case containing two or more drives
and using the SCSI interface. They were expensive, but the cost was offset by the ability to
keep a server up and running after a single drive died. You may wonder what happened to
the data and programs held on the drive while all this occurred. The answer is that these
hard drive systems were not just hot-swappable, but they also were RAID arrays, which are
discussed in the following section of this chapter, “RAID Arrays.”
Hard drives with USB or FireWire interfaces are almost always hot-swappable, as is nearly
any USB or FireWire device. This doesn’t mean you can “pull the plug,” so to speak, any old
time. In order to avoid losing data, you need to ensure the disk is not in use before
disconnecting it. Close any applications or windows that may be using the drive, and then
take the steps necessary, depending on your operating system. In Windows XP and
Windows Vista, use the Safely Remove Hardware applet found in the status area on the
right end of the taskbar.
Labs
Hardware RAID can be set up by using a RAID controller that is part of the motherboard
BIOS or by using a RAID controller expansion card.
When installing a hardware RAID system, for best performance all hard drives in an array
should be identical in brand, size, speed, and other features. Also, if Windows is to be
installed on a hard drive that is part of a RAID array, RAID must be implemented before
Windows is installed. As with installing any hardware, first read the documentation that
comes with the motherboard or RAID controller and follow those specific directions rather
than the general guidelines given here. Make sure you understand which RAID
configurations the board supports.
1. Install the three SATA hard drives in the system and connect their data and power
cables.
2. Boot the system and enter BIOS setup. On the Advanced setup screen, verify the
three drives are recognized. Select the option to configure SATA and then select
RAID from the menu.
3. Reboot the system and a message is displayed on-screen: “Press Ctrl+I to enter the
RAID Configuration Utility.” Press Ctrl and I to enter the utility.
4. Under RAID Level, select RAID5 (Parity). Because we are using RAID 5, which
requires three hard drives, the option to select the disks for the array is not available.
All three disks will be used in the array.
You are now ready to install Windows. Windows 7/Vista automatically “sees” the RAID array
as a single 500 GB hard drive because Windows 7/Vista has built-in hardware RAID drivers.
For Windows XP, when you begin the XP installation, you must press F6 at the beginning of
the installation to install RAID drivers. After Windows is installed on the drive, Windows will
call it drive C:.
Introduction
The function of a PC video display unit is to produce visual responses to user requests.
Often called simply a display or monitor, it receives computer output from the video adapter,
which controls its functioning. The monitor is a part of the computer video system and
monitors are available in many different types and sizes. The display technology must match
the technology of the video card to which it attaches. A slow video adapter can slow down
even the fastest and most powerful PC, and an incorrect monitor and video adapter
combination can also cause eyestrain or be unsuitable for the tasks you want to
accomplish.
Technicians must look at video as a subsystem that consists of the display, the electronic
circuits that send the display instructions, and the cable that connects them. The electronic
video circuits can be on a separate video adapter or built into the motherboard.
Video Modes
Very basic video modes are text and graphics. As a PC boots up, and before the operating
system takes control, the video is in text mode and can only display the limited ASCII
character set. Once the operating system is in control, drivers for the video adapter and
display load and a graphics mode that can display bitmapped graphics is used. Today there
are video modes that support millions of colours. There have been several video modes
since the introduction of the first IBM PC in 1981. However, we will limit the discussion to the
video graphics modes you can expect to encounter in business and homes today.
VGA: Video graphics array (VGA) is a video adapter standard introduced with the IBM PS/2
computers in the late 1980s. VGA uses analog signals to the display, rather than the digital
signals used by the two preceding video standards, CGA and EGA. The analog signal is
capable of producing a far wider range of colours than the previous digital signals.
VGA is old technology today, because we have gone far beyond this in capabilities, but
some software packages still list it as a minimum requirement for installing the software.
VGA has a maximum resolution (number of pixels) of 720 × 400 in text mode and 640 ×
480 in graphics mode. The first number is the number of columns, while the second number
is the number of rows. VGA can produce a palette of 262,144 different colours but can
display only up to 256 of them at a time. Another term for this colour setting is “8-bit
colour.” VGA mode most often consists of a combination of 640 × 480 pixels display
resolution and 16 colours.
Beyond VGA: In the past two decades, video standards have advanced nearly as fast as
CPU standards.
Super video graphics array (SVGA) is a term first used for any video adapter or monitor that
exceeded the VGA standard in resolution and colour depth. SVGA has since moved beyond
its early 800 × 600 pixel resolution and now comes in various resolutions, including
1024 × 768, 1280 × 1024, and 1600 × 1200.
While SVGA also supports a palette of 16 million colours, the amount of video memory
present limits the number simultaneously displayed. This is also true of the newer video
standards. And so it goes; as fast as new standards are developed and adopted by
manufacturers, they are modified and improved upon. The best resolution and colour density
you will see on your display depends on the capabilities of both the video adapter and
display.
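The point that video memory limits the displayable resolution and colour depth can be illustrated with a rough frame-buffer calculation (a simplified model with a hypothetical function name; real adapters need extra memory for acceleration and buffering):

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Minimum video memory for one frame at the given resolution and
    colour depth (ignores double buffering and 3D features).
    """
    return width * height * bits_per_pixel // 8

# 1024 x 768 at 24-bit colour needs 2,359,296 bytes (about 2.25 MB)
# just for the frame, so a 1 MB adapter cannot display that mode.
```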
Classification of Monitors
Monitors are of different types. The thinner monitors used on notebooks and other small
computers are known as flat-panel displays. Compared to CRT (Cathode Ray Tube) based
monitors, flat-panel displays consume less electricity and take up much less room. Most flat-
panel displays use Liquid Crystal Display (LCD) technology. LCD displays sandwich cells
containing tiny crystals between two transparent surfaces. By varying the electrical current
supplied to each crystal, an image forms.
Classification by Colour
Monitors are classified into three categories according to the display colour.
Monochrome monitors
Gray-scale monitors
Colour monitors
Monochrome: Monochrome monitors actually display two colours, one for the background
and another for the foreground. The colours can be black (background) and white
(foreground), or black (background) and green (foreground).
Gray-scale: A gray-scale monitor is a special type of monochrome monitor that can display
many shades of gray.
Colour: Colour monitors can display from 1 to 16 million different colours. These monitors
are sometimes called RGB (Red, Green, and Blue) monitors because the three primary
colours Red, Green and Blue are used to make the other colours. An RGB monitor consists
of a cathode ray tube with three electron guns, one each for Red, Green and Blue, at one
end and the screen at the other end.
Colour and gray-scale monitors are often classified by the number of bits they use to
represent each pixel. For example, a 24-bit monitor represents each pixel with 24 bits. The
more bits per pixel, the more colours and shades the monitor can display.
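The bits-per-pixel arithmetic is worth making explicit (an illustrative sketch; the function name is my own):

```python
def colours_for_depth(bits_per_pixel):
    # Each additional bit doubles the number of representable colours.
    return 2 ** bits_per_pixel

# 8 bits give 256 colours; 24 bits give 16,777,216 ("16 million") colours.
```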
Displays are also categorized by the underlying technology that creates the image. Most of
these technologies can be used for either direct-view or projection displays, depending on
how they are implemented.
CRT
CRT, or cathode ray tube, is the traditional technology behind TVs and home theater
displays (for a long time, CRT was the only technology). CRTs can be used in both direct-
view and projector applications.
CRT, or Cathode Ray Tube, technology is the oldest technology among visual-display units.
A CRT monitor is bulky because of the large cathode ray tube it contains. A CRT uses an
electron gun to activate phosphors behind the screen. Each dot on the monitor, called a
pixel, has the ability to generate red, green, or blue, depending on the signals it receives.
This combination of colours results in the total display you see on the monitor.
CRT monitors are popular with desktop computers. They are available in monochrome and
colour. In text mode, a CRT typically displays 25 lines of 80 characters each.
Screen surfaces on CRT displays are curved; in general appearance, these monitors
resemble non-flat-screen television sets, and their screens feature a 4:3 (nearly square)
ratio rather than a widescreen format. CRTs use the most power of all displays (around
150 W for a 17-inch monitor), cannot be used with laptops, and have a subtle "flicker" in their
display, which causes eyestrain. However, CRT displays feature simple and solid
technology, can be seen from a wide viewing angle, and are manufactured at significantly
lower cost than alternative display units. CRT monitors are available in a wide array of colour
depths and resolutions.
A+ Exam Tip: The A+ 220-801 exam expects you to know how to properly dispose of a CRT
monitor.
LCD
LCD offers a higher resolution and better image contrast than CRT, and because LCD
technology doesn't flicker, it causes less eyestrain. These displays often come in widescreen
format as televisions, and in both widescreen and 4:3 ratios as monitors. LCDs generate less
heat and consume as little as a third of the power of CRTs. However, LCDs carry a higher
price and a smaller viewing angle than CRT screens.
The first LCD panels, called passive matrix displays, consist of a grid of horizontal and
vertical wires. At one end of each wire is a transistor, which receives display signals from the
computer.
When two transistors (one at the x-axis and one at the y-axis) send voltage along their wires,
the pixel at the intersection of the two wires lights up.
Active matrix displays are newer and use a different technology, called thin-film transistor
(TFT) technology. Active matrix displays contain transistors at each pixel, resulting in more
colours, better resolution, a better ability to display moving objects, and wider viewing
angles. However, active matrix displays use more power than passive matrix displays.
Plasma
Plasma displays were initially monochrome, typically orange, but colour displays have
become very popular and are used for home theater systems and computer monitors as well
as digital signs. Plasma technology is similar to the way neon signs work, combined with the
red, green, and blue phosphor technology of a CRT.
Plasma Advantages
Excellent (real) contrast ratios and black levels
Excellent color reproduction
Excellent life expectancy
Excellent viewing angle with no real loss of color or contrast
Plasma Disadvantages
Fairly heavy
Thicker than LCDs by a large margin, and likely to remain so barring some practical
technical advances
Susceptible to screen burn-in (new models compensate with various screen-saving
methods)
Lower real peak brightness
Uses a lot of power compared to LCD
LED
(Inorganic) Light Emitting Diodes (LEDs) are widely used in large-area video walls and ticker
displays. These LED displays are commonly monochrome or multicolour and are
composed of commercially obtainable LEDs. Meanwhile, high-efficiency blue LEDs have
become available, making full-colour large-area LED displays possible.
LEDs exhibit high luminescence, high efficiency, and long lifetime, which makes them
particularly attractive for outdoor use. However, individual LEDs are rather large; therefore,
medium-sized displays for monitors or PDAs are not feasible with this technique. Monolithic
integration of LEDs on a single chip, however, can be used for virtual (monochrome)
displays.
Edge-lit LED TVs beam light to the back of the screen from the sides and allow for ultra-thin
cases. Although some designs provide selective dimming, they are not as effective as the
full array. In addition, they have a tendency to be brighter at the edges than at the center.
The direct-lit method uses LEDs in a sparse array across the back and is the least expensive
of the three methods. Since the LEDs are farther away from the screen to allow each light to
spread to more pixels, direct-lit TVs are as thick as the older CCFL fluorescent lamp LCD
TVs.
OLED
The newest flat panel technology is known as Organic Light Emitting Diode, or OLED. OLED
displays are built of an organic material that is printed onto the display, and like plasma,
OLED creates its own light (so there’s no backlight). OLED displays are even thinner than
plasma or LCD displays and use less electricity than either. They also have extremely high
contrast ratios and extraordinary colour reproduction. OLED technology is still in its infancy
and is mainly used in very small displays for cell phones and similar devices.
ELD
ELDs (Electroluminescent Displays) have a very simple device structure and can be built
entirely using solid-state thin-film technologies. Between two electrically conducting slabs
(e.g. glass with structured ITO stripes in a matrix configuration) with applied insulating
layers, a thin electroluminescent layer is deposited. Doped zinc sulfide (ZnS) or strontium
sulfide (SrS), with a rather broad ("white") emission spectrum, are used as EL compounds.
Conventional colour filters generate the RGB colours. With the EL layer being only about 100
µm thick, fully transparent displays, as with OLEDs, can be achieved. Typical driving
voltages are around 200 V AC at up to 10 kHz, which necessitates rather expensive
driver ICs. With an active-matrix driving scheme (AMEL) employing a transistor matrix on a
silicon substrate, high-resolution microdisplays have been demonstrated.
There are several significant advantages to ELDs in the uses they are commonly put to:
soft area lighting, instrument panel lighting, and serving as the backlight for liquid-crystal
displays and other similar visual information displays.
ELDs use very little power compared with most other common forms of artificial lighting.
They can serve in places where a light needs to be on for very long periods of time while
minimizing consumption. Second, they do not generate heat, unlike incandescent lights.
ELDs can be made to glow in a variety of colours and form clear images, and their use in
illuminated signs has created a cheaper alternative to more traditional, power-hungry forms
of illumination. The low power consumption and lack of heat make ELDs excellent for many
display purposes.
Microdisplays
Traditionally, projection systems used CRT tubes to create the projected image. Most
current projection systems use a microdisplay technology to do so instead.
A microdisplay is exactly what the name says it is — basically a tiny display that uses some
sort of miniaturized display technology to create an image that is then enlarged when it is
beamed onto the projection screen.
a) LCD
The same LCD technology used for flat-panel screens can be shrunk down and used in a
projector.
b) DLP
A DLP, or digital light processor, uses a special video chip from Texas Instruments that
includes millions of microscopic mirrors that are moved by computer command to create an
image. These mirrors can switch on and off thousands of times per second and are used to
direct light towards, and away from, a dedicated pixel space. The duration of the on/off
timing determines the level of gray seen in the pixel. Current DMD (digital micromirror
device) chips can produce up to 1024 shades of gray.
By integrating this grayscale capability with a 6 panel color wheel (2x RGB), the DLP system
is able to produce more than 16 million colors. A DMD system can be made up of a single
chip or 3 chips, resulting in even greater color reproduction. For example, DLP Cinema
systems can reproduce over 35 trillion colors.
DLP Advantages
Incredible color reproduction
Excellent contrast ratios (using the latest chips and color wheels)
Lightweight compared to CRT
Fully digital displays supporting DVI/HDMI without analogue conversion
Growing technology
DLP Disadvantages
Most sets require a minimum of 12-14" depth for rear-projection lamp-based
technology (InFocus, the 7" exception, had stopped producing their thin models as of
2005)
Potential for "Rainbow Effect" in single chip systems. (look for higher speed color
wheels and LED light sources to alleviate this)
Most of the newer "1080p" sets do not accept 1080p input via HDMI or component
video inputs
c) LCoS
Liquid Crystal on Silicon microdisplays use a special variant of LCD technology that has
been shrunk down to the chip level. A rear-projection display technology, LCoS (also written
LCOS) is
similar to LCD and consists of a liquid crystal layer which sits on top of a pixelated, highly
reflective substrate. Below the substrate exists another layer containing the electronics to
activate the pixels. This assembly is combined into a panel and packaged for use in a
projection subsystem.
Display Resolution
As you learned earlier in this chapter, resolution is the displayable number of pixels. A CRT
display may easily support several different screen resolutions. If you have a CRT, you can
play with this setting, along with others, until you have the most comfortable combination of
resolution, colour density, and refresh rate. On an LCD display, you should keep this setting
at the native resolution of the display.
Colour Quality
The Colour Quality setting in the Display applet allows you to adjust the number of colours
used by the display. Also called colour depth, it may be expressed in terms of 16 bits or 32
bits.
Projectors
Projectors can be used in place of a primary display or can be used as a clone of the primary
display to permit computer information and graphics to be displayed on a projection screen
or a wall.
Projectors use one of the following technologies:
• Liquid crystal display (LCD)
• Digital light processing (DLP)
LCD projectors use separate LCD panels for red, green, and blue light, and combine the
separate images into a single RGB image for projection, using dichroic mirrors. A dichroic
mirror reflects light in some wavelengths, while permitting light in other wavelengths to pass
through. In the figure below, red and blue dichroic mirrors are used to split the image into
red, blue, and green wavelengths. After passing through the appropriate LCD, a dichroic
combiner cube recombines the separate red, green, and blue images into a single RGB
image for projection.
LCD projectors use a relatively hot projection lamp, so LCD projectors include cooling fans
that run both during projector operation and after the projector is turned off to cool down the
lamp.
DLP projectors use a spinning wheel with red, green, and blue sections to add colour data to
light being reflected from an array of tiny mirrors known as a digital micromirror device
(DMD). Each mirror corresponds to a pixel, and the mirrors reflect light toward or away from
the projector optics. The spinning wheel might use only three segments (RGB), four
segments (RGB+clear), or six segments (RGB+RGB). More segments help improve picture
quality.
Monitor Characteristics
i. Monitor size: The most important aspect of a monitor is its size. Like televisions,
screen sizes are measured in diagonal inches, the distance from the lower-left corner to
the upper-right corner. Typical monitors on the market are 14 inches, 17 inches, and
above. The display size partly determines monitor quality: on larger monitors, objects
look bigger, or more objects can fit on the screen.
ii. Dot pitch: The screen image is composed of a number of picture elements. The term
pixel refers to the smallest unit of the display screen (the word comes from a
combination of picture and element). Each pixel is composed of three phosphor dots:
Red, Green, and Blue. The dot pitch is the distance between the phosphor dots that
make up a single pixel. The dot pitch of colour monitors for PCs ranges from about
0.15 mm to 0.30 mm.
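As an illustration (not part of the exam objectives), the relationship between dot pitch and dot density can be worked out in a couple of lines of Python; the pitch values used are the typical range quoted above:

```python
# Illustrative: convert a monitor's dot pitch (mm between like-coloured
# phosphor dots) into dots per inch. 25.4 mm = 1 inch.
def dots_per_inch(dot_pitch_mm: float) -> float:
    return 25.4 / dot_pitch_mm

# A 0.25 mm dot pitch works out to roughly 102 dots per inch;
# smaller pitch values mean a finer, sharper image.
for pitch in (0.15, 0.25, 0.30):
    print(f"{pitch:.2f} mm -> {dots_per_inch(pitch):.1f} dpi")
```

Note how a smaller pitch yields a higher dot density, which is why lower dot-pitch figures are considered better.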
iii. Resolution: The maximum number of points that can be displayed without overlap on a
monitor screen is referred to as the resolution. The resolution of a monitor indicates how
densely the pixels (each a single point in a graphic image) are packed. On colour
monitors, each pixel is actually composed of three dots: a Red, a Green, and a Blue.
Ideally, the three dots should converge at the same point. Resolution indicates the
quality of the monitor: the greater the number of pixels, the better the resolution and the
sharper the image. In practice, the available resolutions are determined by the video
controller or video adapter card.
iv. Aspect Ratio: The aspect ratio of a display is the proportion of the width to the height.
For instance, a traditional CRT monitor has a width to height aspect ratio of 4:3. LCD
panels come in the traditional 4:3 aspect ratio as well as in wider formats, of which the
most common is 16:9, which allows viewing of wide format movies. When viewing a
widescreen movie video on a 4:3 display, it shows in a letterbox, meaning that the
image size reduces until the entire width of the image fits on the screen. The remaining
portions of the screen are black, creating a box effect.
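The letterbox arithmetic described above can be sketched in a few lines of Python; the 1024x768 screen size is just a hypothetical example of a 4:3 resolution:

```python
# Illustrative letterbox calculation: fit a 16:9 movie onto a 4:3
# screen by shrinking it until its full width fits, then centre it
# vertically between two black bars.
def letterbox_bars(screen_w: int, screen_h: int,
                   movie_w: int = 16, movie_h: int = 9) -> int:
    image_h = screen_w * movie_h // movie_w   # height after scaling to full width
    return (screen_h - image_h) // 2          # height of each black bar

# On a 1024x768 (4:3) display, a 16:9 movie occupies 1024x576,
# leaving a 96-pixel black bar above and below the image.
print(letterbox_bars(1024, 768))
```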
v. Refresh Rate: The refresh rate is the number of times per second that each pixel on a
screen is redrawn. Display monitors must be redrawn many times per second. The
refresh rate for a monitor is measured in hertz (Hz), or cycles per second. Typical
monitor refresh rates are 60 Hz or 70 Hz. The faster the refresh rate, the less the
monitor flickers.
On an LCD, refresh rate works differently because LCDs use a completely different
technology to paint the screen. Instead of painting the screen at x times per second, the
liquid crystal material is illuminated. However, you can still modify the Windows refresh
rate, which effectively configures how many times per second a new image is received
from the video card. This is usually set to 60 Hz and is not configurable on most LCDs.
Flicker is not as much of an issue on LCDs because the backlight (lamp) is set to its
own rate, often at 200 Hz. Because refresh rate is not configurable on most LCDs, you
might not see this measurement in an LCD's specifications (aside from the newer 120 Hz
and 240 Hz models).
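As a quick worked example, the time between redraws is simply the reciprocal of the refresh rate:

```python
# Illustrative: the time one refresh cycle takes is 1/rate,
# expressed here in milliseconds.
def frame_time_ms(refresh_hz: float) -> float:
    return 1000.0 / refresh_hz

# 60 Hz redraws the screen roughly every 16.7 ms; doubling the
# rate to 120 Hz halves the time between redraws.
for hz in (60, 70, 120, 240):
    print(f"{hz} Hz -> {frame_time_ms(hz):.2f} ms per refresh")
```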
vi. Colour Depth: Colour depth (also known as bit depth or colour quality) is a term used
to describe the number of bits that represent colour. For example, 1-bit colour is known
as monochrome: those old screens with a black background and one colour for the text.
But what is 1 bit? One bit in the binary numbering system is a number with one digit;
this can be a zero or a one, for a total of two values, usually black and white. This is
written as 2^1 (2 to the 1st power equals 2). Another example would be 4-bit colour,
used by the ancient but awesome Commodore 64 computer. In a 4-bit colour system
you can have 16 colours total; in this case 2^4 = 16. Of course, 16 colours aren't nearly
enough for today's applications; 16-bit, 24-bit, and 32-bit are the most common colour
depths used by Windows.
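The pattern above (the number of colours equals 2 raised to the bit depth) can be checked with a short Python sketch:

```python
# Illustrative: each additional bit of colour depth doubles the
# number of displayable colours (colours = 2 ** bits).
def colour_count(bits: int) -> int:
    return 2 ** bits

# 1-bit is monochrome (2 values), 4-bit gives the Commodore 64 its
# 16 colours, and 24-bit yields over 16 million colours.
for bits in (1, 4, 8, 16, 24):
    print(f"{bits}-bit -> {colour_count(bits):,} colours")
```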
8-bit colour is used in VGA mode, which is uncommon for normal use, but you might see it if
you boot into Safe Mode, or other advanced modes that disable the normal video driver. 16-
bit is usually enough for the average user who works with basic applications; however, many
computers are configured by default to 24-bit or 32-bit (also known as 3 bytes and 4 bytes
respectively). Most users will not need 32-bit colour depth; in fact, it uses up
resources that might be better put to work elsewhere. If the user works only on basic
applications, consider scaling the depth down to 24-bit or 16-bit to increase system
performance. However, gamers, graphic artists, and other designers will probably want
32-bit colour depth.
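To see why higher colour depths "use up resources", a rough calculation of the memory one screen image occupies is instructive; the 1920x1080 resolution is a hypothetical example, and real adapters typically reserve more memory than this minimum:

```python
# Illustrative: the minimum video memory needed to hold one screen
# image is width x height x bytes per pixel (8 bits = 1 byte).
def framebuffer_bytes(width: int, height: int, bits_per_pixel: int) -> int:
    return width * height * bits_per_pixel // 8

# Dropping a 1920x1080 desktop from 32-bit to 16-bit colour halves
# the per-frame memory footprint.
for depth in (16, 24, 32):
    mib = framebuffer_bytes(1920, 1080, depth) / (1024 * 1024)
    print(f"{depth}-bit -> {mib:.1f} MiB per frame")
```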
When selecting a monitor or projector for use with a particular video card or integrated video
port, it’s helpful to understand the physical and feature differences between different video
connector types, such as VGA, DVI, HDMI, Component/RGB, S-video, and composite.
These had earlier been discussed in chapter 6 – Connector types and cables.
Exam Alert: Be able to identify DVI, VGA, HDMI, S-Video, component video, and
DisplayPort ports for the exam.
Multiple Displays
Multiple Displays / Monitor (also referred to as DualView) is a Windows feature that allows
you to either duplicate the display onto other monitors, or extend the desktop across to
multiple displays.
In the latter case it enables you to spread applications over two or more monitors that
effectively work together as one. This works well for wide applications, or for a user who
runs several applications at once. Secondary monitors do not have a taskbar; they just have
the wallpaper or background that was selected. It is possible to select any of the monitors
connected to the computer as the primary monitor, meaning the one with the Start button,
taskbar, and so on. You need to connect each additional monitor to one of the extra video
ports on the computer.
Microsoft calls its multimonitor feature DualView. You have the option to extend your
Desktop onto a second monitor or to clone your Desktop on the second monitor. You can
use one graphics adapter with multiple monitor interfaces or multiple adapters. However, as
of Vista, Windows Display Driver Model (WDDM) version 1.0 required that the same driver
be used for all adapters. This doesn’t mean that you cannot use two adapters that fit into
different expansion slot types, such as PCIe and AGP. It just means that both cards have to
use the same driver. Incidentally, laptops that support external monitors use the same driver
for the external interface as for the internal LCD attachment. Version 1.1, introduced with
Windows 7, relaxed this requirement. WDDM is a graphics-driver architecture that provides
enhanced graphics functionality that was not available before Windows Vista, such as
virtualized video memory, preemptive task scheduling, and sharing of Direct3D surfaces
among processes.
Newer video cards have DVI ports and/or HDMI ports. But you can still connect older SVGA
monitors; just use a DVI to VGA adapter. Then you need to open the Multiple Monitor
configuration window.
On a PC, up to ten monitors can be used with the Multiple Monitor feature. On a laptop you
are often limited to two (also known as DualView.)
To have two monitors connected to a single computer, you have several methods of
configuration:
Use the two video ports on the motherboard (not common).
Use the integrated motherboard port and buy a video card with one video port. (This
is the cheapest solution, but the motherboard might disable the integrated video port
automatically.)
Buy a video card that has two video ports (best option).
Buy two video cards. (Usually the motherboard has one expansion slot for a video
card, and that means using an older and slower technology expansion slot for the
second video card.)
Once Windows recognizes the second monitor (two monitors appear in Device Manager), in
Windows XP, use the Appearance and Themes > Display Control Panel > Settings tab >
click Identify so that a large number appears on each monitor. Click the monitor icons and
drag them on the screen to arrange the display.
In Windows 7 use the Appearance and Personalization Control Panel > select the Adjust
screen resolution link > locate the Multiple displays drop down menu to see the desktop
options.
Note that if the Multiple displays option is not shown, Windows does not recognize the
second monitor. Troubleshoot hardware connectivity, monitor input mode if the monitor has
multiple input ports, and verify system BIOS options.
It is possible to use multiple video cards, but keep in mind that Windows 7/Vista prefers
identical video cards and drivers. Windows XP might work with different cards/drivers, but it
is not recommended. Some applications (for example, video players) might not work
perfectly in such mixed configurations.
Labs
Modifying Display Settings (Win XP)
1. Open the Display applet by right-clicking on the desktop and selecting Properties
from the context menu.
2. Select the Settings tab. Notice the current Screen resolution.
3. Click Advanced.
4. In the Advanced dialog box, select the Monitor tab.
5. On the Monitor page notice the screen refresh rate and write down the value. Is it
greater than 60 Hertz? If not, first ensure that the box labeled “Hide modes that this
monitor cannot display” is checked, and then use the dropdown box. If a setting is
available that is higher than 60 Hertz, select it. Click OK on this page and on the
Settings page.
6. Do the settings work? Is there a noticeable difference?
7. If necessary, return the display to its original settings.
Do not confuse refresh rate with the number of images (frames) per second. A
traditional movie projector runs at 24 frames per second. The human eye can detect
flicker in a movie if it runs at a rate close to or less than 20 frames per second.
NB: Selecting a refresh rate higher than the CRT can support can cause damage to the
display.
To change the settings for multiple monitors in Windows 7, follow the steps below after
ensuring that you have a second monitor attached.
Steps
1. Right-click on a blank portion of the Desktop.
2. Click Screen Resolution.
3. Click on the picture of the monitor with the number 2 on it.
4. Open the Multiple displays drop-down menu and select Extend These Displays.
5. Click Keep Changes in the pop-up dialog that appears before the 15-second timer
expires.
6. Click and drag the second monitor to the desired virtual position around the primary
monitor. This affects the direction you drag objects from one display to the other.
7. While the second monitor is still selected, change its refresh rate and resolution, if
necessary, as outlined previously. Note that if you would like for your Desktop to
appear on the second monitor, you can check the Make This My Main Display box.
8. Click OK to save your changes and exit.
Not all computers are right for every situation. There are small netbooks that are ideal for
portability but that would fail miserably when used for mathematical modeling of complex
systems. Supercomputers that are up to the modeling task would have to be completely
disassembled to be transported anywhere. While these are extreme examples, dozens more
exist that highlight the need for custom configurations to perform specific jobs.
Workstations used in the design of graphical content place a heavy load on three primary
areas of the system:
CPU enhancements
Video enhancements
Maximized RAM
CPU Enhancements
Sometimes it’s a matter of how powerful a computer’s CPU is. Other times, having multiple
lesser CPUs that can work independently on a number of separate tasks is more important.
Many of today’s PCs have either of these characteristics or a combination of both.
Nevertheless, there are enough computers being produced that have neither. As a result, it
is necessary to gauge the purpose of the machine when choosing the CPU profile of a
computer.
CAD/CAM Workstations
CAD/CAM systems can carry the designer's vision from conception to design in a 100
percent digital setting. Three-dimensional drawings are also common in this technology.
These designs drive or aid in the production of 3D models. Software used for such projects
places heavy demands on the CPU.
The output of computerized numerical control (CNC) systems used in the manufacturing
process following the use of CAD/CAM workstations in the design phase is far different from
displays on monitors or printouts. CNC systems take a set of coded instructions and render
them into machine or tool movements. The result is often a programmable cutting away of
parts of the raw material to produce a finished product. Examples are automotive parts, such
as metallic engine parts or wheel rims, crowns and other dental structures, and works of art
from various materials.
Video Enhancements
Possibly an obvious requirement for such systems, graphics adapters with better graphics
processing units (GPUs) and additional RAM on board have the capability to keep up with
the demand of graphic design applications. Such applications place an unacceptable load on
the CPU and system RAM when specialized processors and adequate RAM are not present
on the graphics adapter.
Maximized RAM
Although such systems take advantage of enhanced video subsystems, all applications still
require CPUs to process their instructions and RAM to hold these instructions during
processing. Graphics applications tend to be particularly CPU and RAM hungry. Maximizing
the amount of RAM that can be accessed by the CPU and operating system will result in
better overall performance by graphic design workstations.
Professionals that edit multimedia material require workstations that excel in three areas:
Video enhancements
Specialized audio
Specialized drives
The following sections assume the use of nonlinear editing (NLE) schemes for video. NLE
differs from linear editing by storing the video to be edited on a local drive instead of editing
being performed in real time as the source video is fed into the computer. NLE requires
workstations with much higher RAM capacity and disk space than does linear editing.
Although maximizing RAM is a benefit to these systems, doing so is considered secondary
to the three areas of enhancement mentioned in the preceding list.
Video Enhancements
Although a high-performance video subsystem is a benefit for computer systems used by
audio/video (A/V) editors, it is not the most important video enhancement for such systems.
Audio/video editing workstations benefit most from a graphics adapter with multiple video
interfaces that can be used simultaneously. These adapters are not rare, but it is still
possible to find high-end adapters with only one interface, which are not ideal for A/V editing
systems.
When editing multimedia content or even generalized documents, it is imperative that the
editor have multiple views of the same or similar files. The editor of such material often
needs to view different parts of the same file. Additionally, many A/V editing software suites
allow, and often encourage or require, the editor to use multiple utilities simultaneously. For
example, in video editing, many packages optimize their desktop arrangement when multiple
monitors are available.
To improve video-editing performance, insist on a graphics adapter that supports CUDA and
OpenCL. CUDA is Nvidia’s Compute Unified Device Architecture, a parallel computing
architecture for breaking down larger processing tasks into smaller tasks and processing
them simultaneously on a GPU. Open Computing Language (OpenCL) is a similar, yet
cross-platform, open standard. Programmers can specify high-level function calls in a
programming language they are more familiar with instead of writing specific instructions for
the microcode of the processor at hand. The overall performance increase of macro-style
application programming interfaces (APIs) like these is an advantage of the technologies as
well. The rendering of 2D and 3D graphics occurs much more quickly and fluidly with one of
these technologies. CUDA is optimized for Nvidia GPUs, while OpenCL is less specific,
more universal, and perhaps, as a result, less ideal when used with the same Nvidia GPUs
that CUDA supports.
Furthermore, depending on the visual quality of the content being edited, the professional’s
workstation might require a graphics adapter and monitor capable of higher resolutions than
are readily available in the consumer marketplace. If the accuracy of what the editor sees on
the monitor must be as true to life as possible, a specialty CRT monitor might be the best
choice for the project. Such CRTs are expensive and are available in high definition and
widescreen formats. These monitors might well provide the best color representation when
compared to other high-quality monitors available today.
Specialized Audio
The most basic audio controllers in today’s computer systems are not very different from
those in the original sound cards from the 1980s. They still use an analog codec with a
simple two-channel arrangement. Editors of audio information who are expected to perform
quality work often require six to eight channels of audio. Many of today’s motherboards
come equipped with 5.1 or 7.1 analog audio. (See the section “Analog Sound Jacks” in
Chapter 3, “Peripherals and Expansion.”) Although analog audio is not entirely incompatible
with quality work, digital audio is preferred the vast majority of the time. In some cases, an
add-on adapter supporting such audio might be required to support an A/V editing
workstation.
Specialized Drives
Graphics editing workstations and other systems running drive-intensive NLE software
benefit from uncoupling the drive that contains the operating system and applications from
the one that houses the media files. This greatly reduces the need for multitasking by a
single drive. With the data drive as the input source for video encoding, consider using the
system drive as an output destination during the encoding if a third drive is not available.
Just remember to move the resulting files to the data drive once the encoding is complete.
Not only should you use separate drives for system and data files, you should also make
sure the data drive is large and fast. SATA 6Gbps drives that spin at 7200rpm and faster are
recommended for these applications. Editors cannot afford delays and the non-real-time
video playback caused by buffering due to inefficient hard-drive subsystems. If you decide to
use an external hard drive, whether for convenience or portability or because of the fact that
an extremely powerful laptop is being used as an A/V editing workstation, use an eSATA
connection when possible. Doing so ensures no loss in performance over internal SATA
drives due to conversion delays or slower interfaces, such as USB 2.0.
If you cannot find a drive that has the capacity you require, you should consider
implementing RAID 0, disk striping without parity. Doing so has two advantages: You can
pay less for the total space you end up with, and RAID 0 improves read and write speeds.
If you would also like to add fault tolerance and prevent data loss, go with RAID 5, which has
much of the read/write benefit of RAID 0 with the assurance that losing a single drive won’t
result in data loss. RAID should be implemented in hardware when possible to avoid
overtaxing the operating system, which has to implement or help implement software RAID
itself.
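The capacity trade-off between RAID 0 and RAID 5 described above can be sketched as follows; the drive counts and sizes are hypothetical examples:

```python
# Illustrative: usable capacity of RAID 0 (striping, no redundancy)
# versus RAID 5 (striping with distributed parity) built from
# equal-sized drives. RAID 5 gives up one drive's worth of space to
# parity but survives the loss of any single drive.
def raid0_capacity(drives: int, size_gb: int) -> int:
    return drives * size_gb

def raid5_capacity(drives: int, size_gb: int) -> int:
    if drives < 3:
        raise ValueError("RAID 5 requires at least three drives")
    return (drives - 1) * size_gb

# Four 2000 GB drives: 8000 GB usable in RAID 0, 6000 GB in RAID 5.
print(raid0_capacity(4, 2000), raid5_capacity(4, 2000))
```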
Virtualization Workstations
Hardware virtualization has taken the industry by storm and has given rise to entire
companies and large business units in existing companies that provide software and
algorithms of varying effectiveness for the purpose of minimizing the hardware footprint
required to implement multiple servers and workstations. Although virtualization as a
technology subculture is discussed in greater detail later in this book, you are ready to
investigate the unique requirements for the workstation that will host the guest operating
systems and their applications.
Depending on the specific guest systems and processes that the workstation will host, it may
be necessary to increase the hard drive capacity of the workstation as well. Because this is
only a possibility, increased drive capacity is not considered a primary enhancement for
virtualization workstations.
Virtual machines (VMs) running on a host system appear to come along with their own
resources. A quick look in the Device Manager utility of a guest operating system leads you
to believe it has its own components and does not require nor interfere with any resources
on the host. This is not true, however. The following list includes some of the more important
components that are shared by the host and all guest operating systems:
CPU cycles
System memory
Drive storage space
System-wide network bandwidth
CPU Enhancements
Because the physical host’s processor is shared by all operating systems running, virtual or
not, it behooves you to implement virtual machines on a host with as many CPUs as possible.
The operating system is capable of treating each core in a multicore processor separately
and creating virtual CPUs for the VMs from them. Therefore, the more CPUs you can install
in a workstation, each with as many cores as possible, the more dedicated CPU cycles that
can be assigned to each virtual machine.
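The arithmetic behind this advice can be sketched as follows. The socket and core counts below are hypothetical examples, not a specific product, and the even division is a simplification; real hypervisors schedule vCPUs dynamically.

```python
# Sketch: how many cores a host can dedicate to each VM if the core pool
# (minus cores reserved for the host OS) is divided evenly.

def host_cores(sockets: int, cores_per_cpu: int) -> int:
    """Total physical cores across all installed CPUs."""
    return sockets * cores_per_cpu

def cores_per_vm(sockets: int, cores_per_cpu: int, vms: int,
                 reserved_for_host: int = 1) -> int:
    """Cores each VM could receive from the remaining pool."""
    pool = host_cores(sockets, cores_per_cpu) - reserved_for_host
    return pool // vms

# A dual quad-core host running five VMs:
print(cores_per_vm(sockets=2, cores_per_cpu=4, vms=5))  # 1 core per VM
```

With only one core available per VM, it is easy to see why adding sockets or cores directly increases the CPU cycles each guest can be given.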
Maximized RAM
As you create a virtual machine, even before a guest operating system is installed in the VM,
you must decide how much RAM the guest system will require. The same minimum
requirements that would apply to an installation on physical hardware apply to the VM.
The RAM you dedicate to that VM is not used until the VM is booted. Once it is booted,
though, that RAM is as good as unavailable to the host operating system. As a result, you
must ensure that the virtualization workstation is equipped with enough RAM to handle its
own needs as well as those of all guests that could run simultaneously. As with a
conventional system running a single operating system at a time, you generally want to
supply each VM with additional RAM to keep it humming along nicely.
This cumulative RAM must be accounted for in the physical configuration of the virtualization
workstation. In most cases, this will result in maximizing the amount of RAM installed in the
computer. The maximum installed RAM hinges on three primary constraints:
The CPU’s address-bus width
The operating system’s maximum supported RAM
The motherboard’s maximum supported RAM
The smallest of these constraints dictates the maximum RAM you will be able to use in the
workstation. Attention to each of these limitations should be exercised in the selection of the
workstation to be used to host guest operating systems and their applications. Considering
the limitations of operating systems leads to preferring the use of server versions over client
versions and the use of x64 versions over x86 versions.
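The "smallest constraint wins" rule can be expressed directly. The limits below are hypothetical illustrations; check the actual CPU, operating system, and motherboard documentation for real figures.

```python
# The usable maximum RAM is the smallest of three limits: what the CPU
# can address, what the OS supports, and what the motherboard accepts.

def max_usable_ram_gb(cpu_limit_gb: int, os_limit_gb: int,
                      board_limit_gb: int) -> int:
    return min(cpu_limit_gb, os_limit_gb, board_limit_gb)

# An x86 client OS capped at 4GB wastes the rest of the board's capacity:
print(max_usable_ram_gb(cpu_limit_gb=1024, os_limit_gb=4,
                        board_limit_gb=32))    # 4

# An x64 server OS lets the motherboard become the limiting factor:
print(max_usable_ram_gb(cpu_limit_gb=1024, os_limit_gb=2048,
                        board_limit_gb=32))    # 32
```

The two calls illustrate why server and x64 versions are preferred: they move the operating system out of the way so the hardware limits govern.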
The technician in charge did almost everything right. He chose the company’s most powerful
server and created five virtual machines. The hard drive was large enough that there was
plenty of room for the host operating system and the five VMs. The technician knew the
minimum requirements for running each of the operating systems and made sure that each
VM was configured with plenty of RAM. The dual-core CPU installed in the system was more
than powerful enough to handle each operating system.
After a combination of clean installations and image transfers into the VMs, the server was
ready to test. The host booted and ran beautifully as always. The first VM was started and
was found to be accessible over the network. It served client requests and created a barely
noticeable draw on performance. It was the second VM that sparked the realization that the
manager and technician missed a crucial point. The processor and the RAM settings for
each individual VM were sufficient for the host and at most one VM, but when any second
VM was added to the mix, the combined drain on the CPU and RAM was untenable. “What’s
it going to take to be able to run these servers simultaneously?” the technician wondered.
The solution was to replace the server motherboard with a model containing dual quad-core
Xeon processors and to maximize the RAM based on what the motherboard supported. The
result was an impressive system with five virtual servers, each of which displayed impressive
performance statistics. Before long, the expense of the server was returned in power
savings. Additional savings will be realized when the original physical hardware for the five
servers eventually reaches the point at which it would have had to be replaced.
Gaming PCs
Early video games for the PC market were designed to run on the average system
available at the time. As is true with all software, there is a push/pull relationship between
PC-based games and the hardware they run on. Over time, the hardware improves and
challenges the producers of gaming software. Inspired by the possibilities, the programmers
push the limits of the hardware, encouraging hardware engineers to create more room for
software growth.
Today’s PC-based gaming software cannot be expected to run on the average system of the
day. Specialized gaming PCs, computers optimized for running modern video games, fill a
niche in the marketplace, leading to a continually growing segment of the personal-computer
market.
Gaming enthusiasts often turn to specialized game consoles for the best performance, but
with the right specifications, a personal computer can give modern consoles a run for their
money, possibly even eclipsing their performance. For a computer to have a chance,
however, four areas of enhancement must be considered:
CPU enhancements
Video enhancements
Specialized audio
Enhanced cooling
CPU Enhancements
Unlike with A/V editing, gaming requires millions of decisions to be made by the CPU every
second. It’s not enough that the graphics subsystem can keep up with the action; the CPU
must be able to create that action. Some gamers find that they do fine with a high-end stock
CPU. Others require that such CPUs perform above their specified rating. They find that
overclocking the CPU by making changes in the BIOS to the clock frequency used by the
system gains them the requisite performance that allows them to remain competitive against
or to dominate competitors. Overclocking means that you are running your CPU at a clock
speed greater than the manufacturer’s rating to increase performance.
However, this increased performance comes at a price: an overclocked CPU will almost
certainly not live as long as one run at the default maximum speed determined by the
manufacturer and detected by the BIOS. Nothing can completely negate the internal damage
caused by pushing electrons through the processor’s cores faster than should be allowed.
In fact, with standard cooling techniques, an overclocked CPU might scarcely survive days
or even hours. Enhancing the cooling system, discussed shortly, is the key to stretching the
CPU’s life back out to a duration that approaches its original expectancy.
Video Enhancements
Video games have evolved from text-based and simple two-dimensional graphics-based
applications into highly complex software that requires everything from real-time high-
resolution, high-definition rendering to three-dimensional modeling. Technologies like
Nvidia’s SLI (Scalable Link Interface) and ATI’s Crossfire are extremely beneficial for such
graphics-intensive applications.
No longer can gaming software rely mostly on the system’s CPU to process its code and
deliver the end result to the graphics adapter for output to the monitor. No longer can this
software store a screen or two at a time in the graphics adapter’s memory, allowing for
adapters with tens of MB of RAM. Today’s gaming applications are resource-hungry
powerhouses capable of displaying fluid video at 30 frames per second. To keep up with
such demands, the RAM installed on graphics adapters has breached the 1GB mark.
Of course, all the internal system enhancements in the world are for naught if the monitor
you choose cannot keep up with the speed of the adapter or its resolutions and 3D
capability. Quite a bit of care must be exercised when comparison shopping for an adequate
gaming monitor.
Specialized Audio
Today’s video games continue the genre of interactive multimedia spectacles. Not only can
your video work in both directions, using cameras to record the image or motion of the
player, but so can your audio. It’s exceedingly common to find a gamer shouting into a
microphone boom on a headset as they guide their character through the virtual world of
high-definition video and high-definition digital audio. A lesser audio controller is not
acceptable in today’s PC gaming world. Technologies such as S/PDIF and HDMI produce
high-quality digital audio for the gaming enthusiast. Of course, HDMI provides for state-of-
the-art digital video as well.
Enhanced Cooling
As mentioned earlier, the practices of speed junkies, such as modern PC gamers, can lead
to a processor’s early demise. Although an earlier end to an overclocked CPU can’t be
totally guaranteed, operators of such systems use standard and experimental cooling
methods to reduce the self-destructive effects of the increased heat output from the CPU.
Today’s high-end graphics adapters come equipped with their own cooling mechanisms
designed to keep such adapters properly cooled under even extreme circumstances.
Nevertheless, the use of high-end adapters in advanced ways leads to additional concerns.
Graphics adapters that rob a second slot for their cooling mechanism to have space to
exhaust heated air through the backplane might be unwelcome in a smaller chassis that
has only a single slot to spare.
Also, the gaming-PC builder’s election to include two or more ganged adapters in one
system (SLI or Crossfire) challenges the engineered cooling circuit. When many large
adapters are placed in the path of cooler air brought in through one end of the chassis for
the purpose of replacing the warmer internal air of the chassis, the overall ambient internal
temperature increases.
Home Theater PCs
A home theater PC (HTPC) combines the functions of a personal computer with those of a
home-entertainment system, such as video and music playback. The average PC can be
turned into a device with similar functionality, but a computer designed for use as such
should be built on a chassis that adheres to the HTPC form factor; the average computer
would not. In fact, the following list comprises the specializations inherent in true HTPCs:
Video Enhancements
High-definition monitors are as commonplace as television displays in the home today.
HTPCs, then, must go a step beyond, or at least not fall a step behind. Because High-
Definition Multimedia Interface (HDMI) is an established standard that is capable of the
highest-quality audio, video resolution, and video refresh rates offered by consumer
electronics and because HDMI has been adopted by nearly all manufacturers, it is the logical
choice for connectivity in the HTPC market. Considering the single simple, small-form factor
plug and interface inherent to HDMI, more cumbersome video-only choices, such as DVI
and YPbPr component video, lose favor on a number of fronts.
Graphics adapters present in HTPCs should have one or more HDMI interfaces. Ideally, the
adapter will have both input and output HDMI interfaces, giving the PC the capability to
combine and mix signals as well as interconnect sources and output devices. Additionally,
internally streamed video will be presented over the HDMI interface to the monitor. Keep in
mind that the monitor used should be state-of-the-art to keep up with the output capabilities
of the HTPC.
Specialized Audio
Recall that HDMI is capable of eight-channel 7.1 surround sound, which is ideal for the home
theater. The fact that the HTPC should be equipped with HDMI interfaces means that
surround-sound audio is almost an afterthought. Nevertheless, high-end digital audio should
be near the top of the wish list for HTPC specifications. If it’s not attained through HDMI,
then copper or optical S/PDIF should be employed. At the very least, the HTPC should be
equipped with 7.1 analog surround sound (characterized by a sound card with a full
complement of six 3.5mm stereo minijacks).
Creating a machine that takes up minimal space (perhaps even capable of being mounted
on a wall beside or behind the monitor) without compromising storage capacity and
performance requires the use of today’s smallest components. The following list comprises
some of the components you might use when building your own HTPC from separate parts.
HTPC chassis, typically with dimensions such as 17×17×7″ and 150W HTPC power
supply
Motherboard, typically mini-ITX (6.7×6.7″) with integrated HDMI video
HDD or SSD, usually 2½″ portable form factor, larger capacity if storage of
multimedia content is likely
RAM—DIMMs for mini-ITX motherboard; SODIMMs for many pre-built models
Blu-ray drive, player minimum
PCIe or USB TV tuner card, optionally with capture feature
Many prebuilt offerings exist with all components standard. You need only choose the model
with the specifications that match your needs. Barebones systems exist as well, allowing you
to provide your own hard drive and RAM modules. Many such units contain smaller ITX
motherboards.
Thick Clients
A standard thick client is not so much a custom configuration as the standard configuration
against which custom configurations are defined. In other words, a thick client
is a standard client computer system, and as such, it must meet only the basic standards
that any system running a particular operating system and particular applications must meet.
Because it’s a client, however, the ability to attach to a network and accept a configuration
that attaches it to one or more servers is implied. Although most computers today exhibit
such capabilities, they cannot be assumed.
Each operating system requires a minimum set of hardware features to support its
installation. Each additional desktop application installed requires its own set of features
concurrent with or on top of those required for the operating system. For example, the
operating system requires a certain amount of RAM for its installation and a certain amount
of hard drive space. A typical application might be able to run with the same amount of RAM
but will most certainly require enough additional hard-drive space to store its related files.
Keep in mind that minimum specifications are just that, the minimum. Better performance is
realized by using recommended specifications or higher.
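The way OS and application requirements combine can be sketched as a simple check. The numbers and names below are hypothetical; as the text notes, RAM requirements largely overlap (applications run within the OS's footprint) while disk requirements accumulate.

```python
# Sketch: does a system meet the combined requirements of an OS plus apps?

def meets_requirements(system: dict, requirements: list) -> bool:
    """RAM needs overlap, so take the largest; disk space accumulates,
    so take the sum of every installed component."""
    ram_needed = max(r["ram_gb"] for r in requirements)
    disk_needed = sum(r["disk_gb"] for r in requirements)
    return (system["ram_gb"] >= ram_needed and
            system["disk_gb"] >= disk_needed)

system = {"ram_gb": 4, "disk_gb": 120}
requirements = [
    {"name": "OS",           "ram_gb": 2, "disk_gb": 20},
    {"name": "Office suite", "ram_gb": 2, "disk_gb": 5},
]
print(meets_requirements(system, requirements))  # True
```

A system passing this check meets only the minimums; as stated above, recommended specifications or higher yield better performance.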
Thin Clients
The implication of having clients with low processing and storage capabilities is that there
must be one or more servers with increased corresponding capacities. Unfortunately, this
leads to a single or centralized point of failure in the infrastructure that can impact
productivity to an unacceptable level. Thin clients have no offline capability, requiring
constant network connectivity. Workforces that require employees to be independent or
mobile with their computing power lean away from thin clients as well, opting for laptops and
similar technology.
Thin clients with local storage and basic applications for local execution must be able to
accommodate the storage and processing of such applications and the operating systems
for which they are designed, including full versions of Windows. Simple designs featuring
flash-based storage and small quantities of small-form-factor RAM exist, reducing the need
for such systems to resemble thick clients after all.
Home Server PCs
Essentially powerful client systems with standard, nonserver operating systems, home
server PCs differ from enterprise servers to the point that they qualify as custom
configurations. For many generations, desktop operating systems have run server services
and have been capable of allowing limited access by other clients but not enough access to
accommodate enterprise networks. Nevertheless, because the home server PC is the center
of the home network, fault tolerance considerations should be entertained, which is
decidedly not the case for standard home systems.
Recall that fault tolerance differs from redundancy in that fault tolerance seeks to retain
accessibility during the failure while redundancy simply ensures recoverability after the
failure. Redundancy, in the form of a data backup, does not ensure the continued
accessibility of the data, but RAID, for example, seeks to do so. Even the most basic home
system should be backed up regularly to avoid total data loss, but only servers in the home
environment should be considered for the added expense of fault tolerance.
Home server PCs can be built from the same components that go into today’s higher-
performance systems. Attention needs to be paid to certain enhanced features, however.
The following list outlines these differences:
Media streaming capabilities
File sharing services
Print sharing services
Gigabit NIC
RAID array
You can prepare a Windows 7 computer to stream media through Windows Media Player by
accessing the media streaming configuration through the advanced settings of the Network
and Sharing Center in Control Panel. The following exercise walks you through the process.
Lab
Configuring Windows 7 for Media Streaming
1. In Control Panel, run the Network and Sharing Center applet in classic view.
2. Click the Change Advanced Sharing Settings link in the left frame.
3. Click the down arrow to the right of Home or Work to expand that configuration
section.
4. In the Media Streaming section, click the Choose Media Streaming Options link.
5. In the Media Streaming Options dialog, pull down the buttons labeled Blocked and
change them to Allowed for each computer on the network that you want to be able
to stream from the local PC.
6. Click OK to leave the Media Streaming Options dialog and then close the Network
and Sharing Center dialog.
The difference between home servers and enterprise servers is that all clients in a home
environment tend to have equal access to the file server’s data store. Enterprise file servers
have data stores that are isolated from users that do not have permission to access them.
Print servers in the home and enterprise behave in a similar fashion. Each printer attached
to the home server should be accessible to anyone on the home network.
File and print sharing are available through classic file sharing in Windows as well as
through the Windows 7 HomeGroup.
Gigabit NIC
The home server should be attached to a wired switched port in an Ethernet switch or in the
wireless access point. The NIC and the switch port should be capable of gigabit speeds.
Providing such speed ensures that clients attached to 100Mbps Fast Ethernet ports and
across the wireless network will not create a bottleneck in their attempt to share the server’s
resources. Running client NICs at gigabit speeds should be avoided, even though the
capability is ubiquitous: if every device on the network runs at gigabit speed, each attached
device can attempt to saturate the server’s gigabit interface with its own traffic.
RAID Array
Because some of the data stored on a home server represents the only copy, such as data
that is streamed to all clients or the data included in a crucial backup of client systems, it
must be protected from accidental loss. Because the data that comprises the streaming
content, shared data store, and client backup sets can become quite expansive, a large
capacity of storage is desirable. Even a recoverable server outage results in a home network
that is temporarily unusable by any client, so fault tolerance should be included. RAID
provides the answer to all of these needs.
By using a hardware RAID solution in the home server PC, the server’s operating system is
not taxed with the arduous task of managing the array, and additional RAID levels might also
be available. The RAID array can extend to many terabytes in size, many times the size of a
single drive, and should include hot-swappable drives so that it can be rebuilt on the fly while
still servicing client requests during the loss of a single drive.
NETWORKING
Objectives
Introduction
Network types
Network topologies
Network concepts and Technologies
Networking Cables and Connectors
Cabling Tools
Networking Devices
Network Roles
Network Protocols
TCP/IP Model
SOHO Networks
Introduction
A computer network is a group of computer systems and other computing hardware devices
that are linked together through communication channels to facilitate communication and
resource-sharing among a wide range of users. Network connections between computers
are typically created using cables (wires). However, connections can be created using radio
signals (wireless / Wi-Fi), telephone lines (and modems) or even, for very long distances,
via satellite links.
Network types
Network Topologies
Network topology is the interconnected pattern of network elements. Many distinct
topologies have been identified, but they are not strict categories, which means that any of
them can be combined. However, each topology has a different standard and may use
different hardware, so they are not directly interchangeable.
Physical topology refers to the interconnected structure of a local area network (LAN). The
method employed to connect the physical devices on the network with the cables, and the
type of cabling used, all constitute the physical topology. This contrasts with logical topology,
which describes a network's media signal performance and how it exchanges device data.
Bus Topology
Bus networks use a common backbone to connect all devices. A single cable, the backbone,
functions as a shared communication medium that devices attach or tap into with an
interface connector. A device wanting to communicate with another device on the network
sends a broadcast message onto the wire that all other devices see, but only the intended
recipient actually accepts and processes the message.
Ethernet bus topologies are relatively easy to install and don't require much cabling
compared to the alternatives. 10Base-2 ("ThinNet") and 10Base-5 ("ThickNet") were both
popular Ethernet cabling options for bus topologies many years ago. However, bus
networks work best with a limited number of devices; performance degrades as more
computers are added.
Ring Topology
In a ring network, every device has exactly two neighbors for communication purposes. All
messages travel through a ring in the same direction (either "clockwise" or
"counterclockwise"). A failure in any cable or device breaks the loop and can take down the
entire network.
To implement a ring network, one typically uses FDDI, SONET, or Token Ring technology.
Ring topologies are found in some office buildings or school campuses.
Star Topology
Many home networks use the star topology. A star network features a central connection
point called a "hub node" that may be a network hub, switch or router. Devices typically
connect to the hub with Unshielded Twisted Pair (UTP) Ethernet.
Compared to the bus topology, a star network generally requires more cable, but a failure in
any star network cable will only take down one computer's network access and not the entire
LAN. (If the hub fails, however, the entire network also fails.)
Mesh Topology
Mesh topologies involve the concept of routes. Unlike each of the previous topologies,
messages sent on a mesh network can take any of several possible paths from source to
destination. (Recall that even in a ring, although two cable paths exist, messages can only
travel in one direction.) Some WANs, most notably the Internet, employ mesh routing.
A mesh network in which every device connects to every other is called a full mesh. As
shown in the illustration below, partial mesh networks also exist in which some devices
connect only indirectly to others.
Hybrid Topology
A hybrid is a combination of any two or more network topologies. Instances can occur where
two basic network topologies, when connected together, can still retain the basic network
character, and therefore not be a hybrid network. For example, a tree network connected to
a tree network is still a tree network. Therefore, a hybrid network arises only when two
basic networks are connected and the resulting network topology fails to meet one of the
basic topology definitions. For example, two star networks connected together exhibit a
hybrid network topology. A hybrid topology always arises when two different basic network
topologies are connected.
There are specific technologies designed for each of the geographic network types.
Manufacturers select these technologies for their capabilities that match the distance and
sometimes the security needs of the network type.
PAN Networks
A PAN network may use a wired connection, such as the old PC serial interface, or USB or
FireWire, or it may communicate wirelessly using one of the standards developed for short-
range communications, such as IrDA (an implementation of infrared wireless) or Bluetooth.
LAN Networks
While there are many LAN technologies, the two most widely used are Ethernet and several
standards that fall under the Wi-Fi heading.
a) Ethernet
Most wired LANs use hardware based on the IEEE-developed Ethernet standard that
defines, among other things, how computer data is broken down into small chunks,
prepared, and packaged before the Ethernet network interface card (NIC) places it on the
Ethernet network. This chunk of data is an Ethernet frame.
Also known as IEEE 802.3, an Ethernet network uses the carrier sense multiple
access/collision detection network access method. In this access method, when a computer
wants to send a data packet, it first “listens” to the network to determine if another
transmission is already in progress. If there is another transmission, the computer waits, and
then listens again. When there is no other network activity, the computer sends its data
packet. However, if another computer sends a packet at the same time, a collision occurs.
When a collision is detected, both computers stop transmitting, wait a random amount of
time, and then begin the listening/transmitting process again. This procedure continues until
the data transmits properly. These collisions are limited to a single network segment, the
portion of a network between bridges.
On networks with many computers, the number of collisions can be quite high, so it can take
a number of tries until a computer can send its packet. The collision problem can be dealt
with by breaking a network into more segments (by adding more bridges) with fewer nodes
per segment.
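The listen, transmit, collide, and back-off cycle described above can be sketched as a toy simulation. This is a simplified illustration only, not the real 802.3 binary exponential backoff algorithm, and the slot values are arbitrary.

```python
import random

# Toy CSMA/CD simulation: stations pick a transmit slot; if two pick the
# same earliest slot, a collision occurs and everyone backs off and retries.

def transmit(stations, rng):
    """Return the number of rounds until exactly one station sends alone."""
    attempts = 0
    while True:
        attempts += 1
        # Each station "listens" and then transmits at a random moment.
        slots = {s: rng.randint(0, 3) for s in stations}
        first = min(slots.values())
        senders = [s for s, t in slots.items() if t == first]
        if len(senders) == 1:
            # Exactly one station seized the medium first: success.
            return attempts
        # Collision detected: all senders stop, wait, and retry.

rng = random.Random(42)
print(transmit(["A", "B"], rng))
```

More stations raise the collision probability per round, which mirrors why segmenting a busy network (fewer nodes per segment) reduces the number of retries.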
When a network medium allows for communications in both directions, it is bidirectional.
Ethernet is half-duplex, meaning that while data can travel in either direction, it can only
travel in one direction at a time. Depending on the exact implementation, Ethernet supports
the following speeds:
Fast Ethernet: Using the same cabling as Ethernet, Fast Ethernet, or 100Base-T,
operates at 100 Mbps and uses different network interface cards, many of which are
also capable of the lower speeds of Ethernet, auto-detecting the speed of the
network and working at whichever speed is in use.
Gigabit Ethernet: Supporting data transfer rates of 1 Gbps over UTP, Gigabit
Ethernet is also called 1000Base-T. Faster standards have also evolved over
copper and fiber optics; 10 Gigabit Ethernet (10 Gbps) is often used between
specialized network equipment, rather than for connecting computers to the
network.
b) Wi-Fi
In many homes, Wi-Fi networks give users access to the Internet. In these instances, the
wireless communications network uses a wireless router connected to a broadband
connection, such as a cable modem or DSL modem, as shown in the figure below. In this
figure, one computer connects via Wi-Fi, while another computer connects directly to the
router.
Laptops now come standard with Wi-Fi, and many public places, restaurants, and other
businesses offer free or for-pay access to Wi-Fi networks connected to broadband Internet
connections.
When considering using a wireless network, speed and range can be nebulous at best. In
spite of the maximum speeds stated in the Wi-Fi standards, many factors affect both speed
and range. First, there
is the limit of the standard, and then there is the distance between the wireless-enabled
computers and the wireless access point (WAP), a network connection device at the core of
a wireless network. Finally, there is the issue of interference, which can result from other
wireless device signals operating in the same band or from physical barriers to the signals.
WLAN standards have progressed over the years in response to a growing need for stronger
security and because of some problems in the earliest WLAN security standard, Wired
Equivalent Privacy (WEP). Among WEP's problems:
Static Preshared Keys (PSK): The key value had to be configured on each client and
each AP, with no dynamic way to exchange the keys without human intervention. As
a result, many people did not bother to change the keys on a regular basis,
especially in enterprises with a large number of wireless clients.
Because of the problems with WEP, and the fact that the later standards include much better
security features, WEP should not be used today.
The use of a dynamic key exchange process helps because the clients and AP can then
change keys more often, without human intervention. As a result, if the key is discovered,
the exposure can be short-lived. Also, when key information is exchanged dynamically, a
new key can be delivered for each packet, allowing encryption to use a different key each
time. That way, even if an attacker managed to discover a key used for a particular packet,
he or she could decrypt only that one packet, minimizing the exposure.
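The benefit of per-packet keying can be illustrated with a toy sketch. This is not the actual TKIP algorithm; it simply shows the idea that deriving each packet's key from a base key plus a counter means recovering one key exposes only that one packet.

```python
import hashlib

# Toy illustration of per-packet keying (not the real TKIP algorithm):
# each packet key is derived from the session key and a packet counter.

def packet_key(base_key: bytes, counter: int) -> bytes:
    """Derive a distinct key for each packet number."""
    return hashlib.sha256(base_key + counter.to_bytes(8, "big")).digest()

k1 = packet_key(b"session-secret", 1)
k2 = packet_key(b"session-secret", 2)
print(k1 != k2)  # True: every packet is protected with a different key
```

Because each packet uses a different derived key, an attacker who recovers one packet's key gains nothing about the packets before or after it.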
Cisco created several features based on the progress made to that point on the emerging
IEEE 802.11i WLAN security standard. However, Cisco also added user authentication to its suite
of security features. User authentication means that instead of authenticating the device by
checking to see if the device knows a correct key, the user must supply a username and
password. This extra authentication step adds another layer of security. That way, even if
the keys are temporarily compromised, the attacker must also know a person’s username
and password to gain access to the WLAN.
WPA essentially performed the same functions as the Cisco proprietary interim solution, but
with different details. WPA includes the option to use dynamic key exchange, using the
Temporal Key Integrity Protocol (TKIP). (Cisco used a proprietary version of TKIP.) WPA
allows for the use of either IEEE 802.1X user authentication or simple device authentication
using preshared keys. And the encryption algorithm uses the Message Integrity Check (MIC)
algorithm, again similar to the process used in the Cisco-proprietary solution.
WPA had two great benefits. First, it improved security greatly compared to WEP. Second,
the Wi-Fi Alliance’s certification program had already enjoyed great success when WPA was
introduced.
MAN Networks
WAN Networks
WANs, which traditionally used phone lines or satellite communications, now also use
cellular telecommunications and cable networks. WAN speeds range from thousands of bits
per second up into the low millions of bits per second. At the low end today are 56 Kbps
modems (56,000 bits per second).
At the high end of WAN speeds are parts of the Internet backbone, the connecting
infrastructure of the Internet, which runs in the hundreds of millions of bits per second and
faster.
The speed of your communications on any network is a function of the speed of the slowest
pathway between you and the servers you are accessing. The weakest link determines your
speed.
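The weakest-link principle reduces to taking the minimum over the links in a path. The link speeds below are hypothetical examples.

```python
# The end-to-end throughput of a path is capped by its slowest link.

def path_speed(link_speeds_mbps):
    """Return the effective speed of a path given its link speeds in Mbps."""
    return min(link_speeds_mbps)

# Gigabit LAN -> 100 Mbps office switch -> 20 Mbps DSL uplink:
print(path_speed([1000, 100, 20]))  # 20
```

No matter how fast the local network is, the 20 Mbps WAN uplink sets the effective speed for traffic crossing it.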
A+ Exam Tip: The A+ 220-802 exam expects you to be able to establish a dial-up
connection.
b) Broadband WAN
WAN connections that exceed the speed of a typical dial-up connection come under the
heading of broadband connections. Broadband speeds are available over cellular, ISDN,
DSL, cable, and satellite. WAN connections can connect private networks to the Internet,
and to each other. These connections are “always on,” meaning that you do not have to
initiate the connection every time you wish to access resources on the connected network,
as you do with dial-up. If you wish to browse the Web, you simply open your Web browser.
i. Cellular
Cellular Internet data connections, also referred to as wireless WAN (WWAN), vary in
speed from less than dial-up speeds of 28.8 Kbps to a range of broadband speeds.
Because the trend in cellular is to provide faster-than-dial-up speeds, it is included under
the broadband heading.
WWAN services are typically delivered to smart phones and other handheld devices sold by
cellular service providers and their retail partners, but other mobile devices can use them as
well. Some netbooks are available with WWAN cards installed; you can also purchase
wireless WAN cards to install yourself. Unlike Wi-Fi cards, which can be used in just about
any hotspot, WWAN devices must be provisioned specifically for access to your service
provider's network. Your service provider will take care of billing for roaming access that
involves other provider networks.
The three major wireless WAN technologies comprise the two traditional cellular systems,
GSM and CDMA, and the newer WiMAX. GSM networks use HSPA (High Speed Packet
Access) and CDMA networks use EV-DO to deliver 3G data rates. WiMAX delivers faster data service.
Laptops are available with built-in cellular devices, and if you need to share a cellular
Internet connection, another new product is a cell transceiver that converts and routes the cell
signal through an integrated LAN hub, which can be either a wired Ethernet hub or a Wi-Fi
access point.
ii. ISDN:
The Integrated Service Digital Network (ISDN) was an early international standard for
sending voice and data over digital telephone wires. Nowadays, newer technologies such as
DSL and cable have largely replaced it.
ISDN uses existing telephone circuits or higher-speed conditioned lines to get speeds of
either 64 Kbps or 128 Kbps. ISDN lines also have the ability to carry voice and data
simultaneously over the circuit. In fact, the most common ISDN service, called Basic Rate
Interface (BRI), includes three channels—two 64 Kbps channels, called B-channels, that
carry the voice or data communications, and one 16 Kbps D-channel that carries control and signaling information.
iii. DSL
Digital subscriber line (DSL) uses existing copper telephone wire for the communications
circuit. A DSL modem splits the existing phone line into two bands to accomplish this; voice
transmission uses the frequency below 4000 Hz, while data transmission uses everything
else. The figure below shows the total bandwidth being separated into two channels; one is
used for voice, the other for data.
Voice communications operate normally, and the data connection is always on and
available. DSL service is available through phone companies, which offer a variety of DSL
services usually identified by a letter preceding “DSL,” as in ADSL, CDSL, HDSL, SDSL,
SHDSL, and many more. Therefore, when talking about DSL in general the term xDSL is
often used.
Some of these services such as ADSL (asymmetrical digital subscriber line), offer
asymmetric service, in that the download speed is higher than the upload speed. The top
speeds can range from 1.5 Mbps to 6 Mbps for download and between 16 Kbps and 640
Kbps for upload. However, CDSL (consumer DSL) service aims at the casual home user
and offers lower speeds than this range. CDSL service is limited to download speeds of up
to 1 Mbps, and upload speeds of up to 160 Kbps. Other, more expensive, services aimed at
business offer much higher rates. Symmetric DSL (SDSL), high–data rate DSL (HDSL), and
symmetric HDSL (SHDHL) all offer matching upload and download speeds.
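The asymmetry in ADSL can be made concrete with a quick calculation. The speeds below are example values within the ranges quoted above, not a specific provider's tiers:

```python
# The same file takes far longer to upload than to download on an
# asymmetric DSL line. Speeds are illustrative example values.

down_bps = 1_500_000      # 1.5 Mbps download
up_bps = 256_000          # 256 Kbps upload (illustrative)

file_bits = 10_000_000 * 8                  # a 10 MB file
print(round(file_bits / down_bps, 1))       # 53.3 seconds to download
print(round(file_bits / up_bps, 1))         # 312.5 seconds to upload
```

This is why asymmetric service suits typical home use, where downloads dominate, while businesses that host servers often pay for symmetric service.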
iv. Cable
Cable television service has been around for four decades. Most cable providers have
added Internet connection services with promised higher speeds of up to 30 Mbps, which is
three times the practical maximum for the typical DSL service. These speeds appear to be
more promise than fact. Whereas DSL service is point-to-point from the client to the ISP, a
cable client shares the network with their neighboring cable clients. It is like sharing a LAN
that in turn has an Internet connection.
For this reason, speed degrades as more people share the local cable network. You still get
impressive speeds with cable, depending on the level of service you buy. Cable networks
use coaxial cable to connect a special cable modem to the network. The PC normally
connects to this cable modem with twisted pair cable that, in turn, is connected to the PC’s
Ethernet network adapter.
The figure below shows a cable modem installation for a home that also has cable TV
service.
v. T-carrier
T-carrier services are leased digital lines offered by telephone companies. For instance, a T1 circuit provides full-duplex transmissions at 1.544 Mbps, carrying digital
voice, data, or video signals. A complete T1 circuit provides point-to-point connections, with
a channel service unit (CSU) at both ends. On the customer side a T1 multiplexer or a
special LAN bridge, referred to as the customer premises equipment (CPE) connects to the
CSU. The CSU receives data from the CPE and encodes it for transmission on the T1
circuit. T1 is just one of several levels of T-carrier services offered by telephone companies
over the telephone network.
vi. Satellite
Satellite communications systems have come a long way over the last several years.
Satellite communication systems initially allowed extensive communications with remote
locations and for military purposes. These systems usually use microwave radio frequencies
and require a dish antenna, a receiver, and a transmitter. Early satellite communications
systems were very expensive to maintain and operate.
The bandwidth capabilities of these systems rival those of cable or DSL networks and offer
speeds for downloading of up to 2 Mbps (uploading speeds typically range from 40 to 90
Kbps). Satellite connections are now available for both fixed and mobile applications.
Satellite providers offer different levels of service.
The highest speeds require a larger dish antenna. The authors currently use a 0.74-meter
dish (larger than modern TV dish antennas) with a special digital receiver/transmitter
referred to as a modem. This dish and modem combination gives a certain range of speeds,
and larger and more expensive dishes provide greater speeds. As with TV satellite service,
you must have a place to mount the dish antenna with a clear view of the southern sky.
Using a plan designed for homes and small business, the download speed is 700 Kbps and
upload speed is 128 Kbps. The next two levels up offer speeds of 1000 Kbps down and 200
Kbps up.
Network Cabling
Cable is the medium through which information usually moves from one network device to
another. There are several types of cable which are commonly used with LANs. In some
cases, a network will utilize only one type of cable, other networks will use a variety of cable
types.
The type of cable chosen for a network is related to the network's topology, protocol, and
size. Understanding the characteristics of different types of cable and how they relate to
other aspects of a network is necessary for the development of a successful network.
Link Lights
All NICs made today have some type of light-emitting diode (LED) status indicator that gives
information about the state of the NIC’s link to whatever’s on the other end of the connection.
Even though you know the lights are actually LEDs, get used to calling them link lights,
because that’s the term all network techs use. NICs can have between one and four different
link lights, and the LEDs can be any color. These lights give you clues about what’s
happening with the link and are one of the first items to check whenever you think a system
is disconnected from the network.
Exam Tip: Although no real standard exists for NIC LEDs, CompTIA will test you on some
more or less de facto LED meanings. You should know that a solid green light means
connectivity, a flashing green light means intermittent connectivity, no green light means no
connectivity, and a flashing amber light means there are collisions on the network (which is
sometimes okay). Also, know that the first things you should check when having connectivity
issues are your NIC’s LEDs.
Twisted Pair
A twisted pair cable is made by intertwining two separate insulated wires. Twisted pair
comes in two varieties: shielded and unshielded. A shielded twisted pair (STP) cable has a
fine wire mesh surrounding the wires to protect the transmission; an unshielded twisted pair
(UTP) cable does not. Shielded cable is used in older telephone networks, as well as in
network and data communications, to reduce outside interference. UTP is the most popular
variety and is generally the best option for school networks.
The quality of UTP may vary from telephone-grade wire to extremely high-speed cable. The
cable has four pairs of wires inside the jacket. Each pair is twisted with a different number of
twists per inch to help eliminate interference from adjacent pairs and other electrical devices.
The tighter the twisting, the higher the supported transmission rate and the greater the cost
per foot. The EIA/TIA (Electronic Industry Association/Telecommunication Industry
Association) has established standards of UTP and rated six categories of wire (additional
categories are emerging).
Exercise
Research on UTP Cat 6a and 7
RJ-45 Connector
Exam Tip: Know the 10xBaseT numbers and names—10-, 100-, and 1000-Mbps data
transfers; 100-meter segment length; RJ-45 connectors; and UTP cabling.
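The numbers from the exam tip can be collected into a small quick-reference table, sketched here as a dictionary:

```python
# Quick-reference sketch of the UTP Ethernet numbers from the exam tip:
# all three run over UTP with RJ-45 connectors and a 100-meter segment limit.

UTP_ETHERNET = {
    "10BaseT":   {"speed_mbps": 10,   "max_segment_m": 100, "connector": "RJ-45"},
    "100BaseT":  {"speed_mbps": 100,  "max_segment_m": 100, "connector": "RJ-45"},
    "1000BaseT": {"speed_mbps": 1000, "max_segment_m": 100, "connector": "RJ-45"},
}

for name, spec in UTP_ETHERNET.items():
    print(name, spec["speed_mbps"], "Mbps, up to", spec["max_segment_m"], "m")
```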
RJ-11 connector:
This is a 4-wire or 6-wire telephone-type connector that connects telephones to wall plates.
RJ-11 supports up to six wires, but usually only four are used with the two-pair twisted-pair
cabling commonly found in telephone cabling.
Wire Schemes
There are two different patterns, or wiring schemes, called T568A and T568B. Each wiring
scheme defines the pinout, or order of wire connections, on the end of the cable. The two
schemes are similar except that two of the four pairs are reversed in the termination order.
On a network installation, one of the two wiring schemes (T568A or T568B) should be
chosen and followed. It is important that the same wiring scheme is used for every
termination in that project. If working on an existing network, use the wiring scheme that
already exists.
Using the T568A and T568B wiring schemes, two types of cables can be created: a straight-
through cable and a crossover cable. These two types of cable are found in data
installations.
Straight-Through Cables
A straight-through cable is the most common cable type. It maps a wire to the same pins on
both ends of the cable. In other words, if T568A is on one end of the cable, T568A is also on
the other. If T568B is on one end of the cable, T568B is on the other. This means that the
order of connections (the pinout) for each color is the exact same on both ends.
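The straight-through rule can be expressed directly in terms of the two pinouts, listing wire colors from pin 1 through pin 8:

```python
# T568A and T568B pinouts, pin 1 through pin 8. A straight-through cable
# uses the same scheme on both ends, so pin N maps to pin N.

T568A = ["white-green", "green", "white-orange", "blue",
         "white-blue", "orange", "white-brown", "brown"]
T568B = ["white-orange", "orange", "white-green", "blue",
         "white-blue", "green", "white-brown", "brown"]

def is_straight_through(end1, end2):
    """True when every wire lands on the same pin at both ends."""
    return end1 == end2

print(is_straight_through(T568A, T568A))  # True
print(is_straight_through(T568A, T568B))  # False (that would be a crossover)
```

Note that the two schemes differ only in that the green and orange pairs swap positions.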
Two devices directly connected and using different pins for transmit and receive are known
as unlike devices. They require a straight-through cable to exchange data. Two unlike-device
connections that require a straight-through cable are switch port to router port and hub
port to PC.
Crossover Cable
A crossover cable uses both wiring schemes, a T568A on one end of the cable and T568B
on the other end of the same cable. This means that the order of connection on one end of
the cable does not match the order of connections on the other.
Devices that are directly connected and use the same pins for transmit and receive are
known as like devices. They require the use of a crossover cable to exchange data. Like
devices that require a crossover cable include the following:
Switch port to switch port
Switch port to hub port
Hub port to hub port
Router port to router port
PC to router port
PC to PC
If the incorrect cable type is used, the connection between network devices will not function.
Some devices can automatically sense which pins are used for transmit and receive and will
adjust their internal connections accordingly.
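The like/unlike rule above reduces to a small lookup. The device names here are just illustrative strings:

```python
# Like devices (same transmit/receive pins) need a crossover cable;
# unlike devices need a straight-through cable. Pairs are stored sorted
# so the lookup is order-independent.

LIKE_PAIRS = {
    ("switch", "switch"), ("hub", "switch"), ("hub", "hub"),
    ("router", "router"), ("pc", "router"), ("pc", "pc"),
}

def cable_needed(dev1, dev2):
    pair = tuple(sorted((dev1.lower(), dev2.lower())))
    return "crossover" if pair in LIKE_PAIRS else "straight-through"

print(cable_needed("PC", "PC"))          # crossover
print(cable_needed("hub", "PC"))         # straight-through
print(cable_needed("switch", "router"))  # straight-through
```

As the text notes, devices with auto-sensing ports make this lookup unnecessary in practice, but it is still worth knowing for the exam.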
Lab 1
Required equipment
1. Cat5, Cat5e, Cat6, or Cat7 cable - This cabling can be purchased in large spindles at
stores that specialize in cabling. Cat5 is the most commonly used cable for networks
today.
2. RJ-45 connectors - These connectors can be purchased at most electronic stores
and computer stores and usually come in bulk packages. It is always a good idea to
get more than you think you need.
3. Crimping tool - These tools are often purchased at electronics stores such as Radio
Shack. To create a network cable you need a crimper that is capable of crimping an
RJ-45 connector (not just an RJ-11 connector, which looks similar to an RJ-45).
4. Wire stripper or knife - If you plan on making several network cables you should also
consider getting a wire stripper capable of stripping Cat5, Cat6, or your cable of choice.
If you do not plan on creating many network cables, a knife will suffice. For simplicity
and to prevent potential issues we recommend a wire stripper.
Once you have the necessary equipment needed to create your own network cables you
need to determine the network cable you want to create. There are two major network
cables: a straight through cable and a crossover cable.
Once you have determined the type of network cable strip the cable. Strip at least a half of
an inch off of the cable to expose the inner wires. Don't be worried about stripping too much
of the network cable jacket off since you can always cut the wires down more if needed later.
After the network cable jacket has been removed separate the wires within the cable so they
can be put into the RJ-45 connector.
The Cat5 twisted-pair cable consists of four twisted pairs of wires, each pair color coded:
one wire a solid color and the other a striped color. As seen below, most network cables
consist of a green, blue, orange, and brown pair of wires.
There are two cable standards, T568A and T568B, and each twisted pair must be broken
apart to create the layout as shown above. If you want to create a straight-through cable,
both ends of the cable should be identical and should match the T568A example shown
above. If you want to create a crossover cable, one end of the cable should match T568A
and the other should match T568B.
Coaxial Cable
Coaxial cabling has a single copper conductor at its center. A plastic layer provides
insulation between the center conductor and a braided metal shield. The metal shield helps
to block any outside interference from fluorescent lights, motors, and other computers.
Coaxial cable
The two types of coaxial cabling are thick coaxial and thin coaxial.
Thin coaxial cable is also referred to as thinnet. 10Base2 refers to the specifications
for thin coaxial cable carrying Ethernet signals. The 2 refers to the approximate
maximum segment length being 200 meters. In actual fact the maximum segment
length is 185 meters. Thin coaxial cable has been popular in school networks,
especially linear bus networks.
Thick coaxial cable is also referred to as thicknet. 10Base5 refers to the
specifications for thick coaxial cable carrying Ethernet signals. The 5 refers to the
maximum segment length being 500 meters. Thick coaxial cable has an extra
protective plastic cover that helps keep moisture away from the center conductor.
This makes thick coaxial a great choice when running longer lengths in a linear bus
network. One disadvantage of thick coaxial is that it does not bend easily and is
difficult to install.
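The naming convention encodes the speed and the approximate segment length; the two coax standards above can be sketched as a short table:

```python
# "10Base2" = 10 Mbps, baseband signaling, ~200 m segments (185 m in
# practice); "10Base5" = 10 Mbps, baseband, 500 m segments.

COAX_SEGMENT_LIMITS_M = {
    "10Base2": 185,   # thinnet
    "10Base5": 500,   # thicknet
}

for name, limit in COAX_SEGMENT_LIMITS_M.items():
    print(name, "max segment:", limit, "m")
```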
The three most commonly used coaxial cable types for video applications are RG59/U,
RG6/U and RG11/U.
RG59/U is available with either a solid copper or copper-clad-steel center conductor.
It's suitable for basic analog TV antenna feeds in residential applications and for
basic CCTV systems over short cable runs. The copper-clad-steel type has high
tensile strength and should be used when terminating the cable with F-Type
connectors.
RG11/U Quad-shield is used for the same applications as RG6/U for either
backbone cabling or for long distribution runs. It features a copper-clad-steel inner
conductor.
Exam Tip: The only two types of coaxial cable you need to know for the CompTIA A+
exams are RG-59 and RG-6. Both of these coax cables are used by your cable television,
but RG-59 is thinner and doesn’t carry data quite as far as RG-6.
BNC
The most common type of connector used with coaxial cables is the Bayonet Neill-
Concelman (BNC) connector. Different types of adapters are available for BNC connectors,
including a T-connector, barrel connector, and terminator. Connectors on the cable are the
weakest points in any network. To help avoid problems with your network, always use
BNC connectors that crimp, rather than screw, onto the cable.
BNC Connector
F-Connector
F-Type connectors are screw on connections used for attaching coaxial cable to devices. In
the world of modern networking, F-Type connectors are most commonly associated with
connecting Internet modems to cable or satellite Internet provider's equipment. However,
they are also used for connecting to some proprietary peripherals.
F-Type connectors have a 'nut' on the connection that provides something to grip as the
connection is tightened by hand. If necessary, this nut can also be lightly gripped with
pliers to aid disconnection. Figure 7 shows an example of an F-Type connector.
Fiber Optic Cable
Fiber optic cabling consists of a center glass core surrounded by several layers of protective
materials. It transmits light rather than electronic signals eliminating the problem of electrical
interference. This makes it ideal for certain environments that contain a large amount of
electrical interference.
It has also made it the standard for connecting networks between buildings, due to its
immunity to the effects of moisture and lightning.
Fiber optic cable has the ability to transmit signals over much longer distances than coaxial
and twisted pair. It also has the capability to carry information at vastly greater speeds. This
capacity broadens communication possibilities to include services such as video
conferencing and interactive services. The cost of fiber optic cabling is comparable to copper
cabling; however, it is more difficult to install and modify. 10BaseF refers to the
specifications for fiber optic cable carrying Ethernet signals.
The center core of fiber cables is made from glass or plastic fibers. A plastic coating then
cushions the fiber center, and Kevlar fibers help to strengthen the cables and prevent
breakage. The outer insulating jacket is made of Teflon or PVC.
Fiber-optic data transmission requires two cables—one to send and another to receive. Over
the years, the various standards for connectors have continued to evolve, moving toward
smaller, easier-to-use connectors.
i) Straight tip (ST): This connector uses a round, bayonet-style coupling that twists
and locks into place. It is an older style that is gradually giving way to the smaller
connectors below.
ii) Subscriber connector (SC): This connector is a square snap coupling, about 2.5
mm, used for cable-to-cable connections or to connect cable to network devices. It
latches with a push-pull action similar to audio and video jacks.
iii) Lucent connector (LC): This type (also called local connector), also has a snap
coupling and, at 1.25 mm, is half the size of the SC connector.
iv) Mechanical Transfer Registered Jack (MT-RJ): This connector type resembles an
RJ45 network connector and is less expensive and easier to work with than ST or
SC.
Now that you’re familiar with the different types of cables used in networking, let’s take a
minute to examine a few tools you can use to troubleshoot your cables or your general
connectivity.
Crimper
A crimper is a very handy tool for helping you put connectors on the end of a cable. Many
crimpers will be a combination tool that strips and snips wires as well as crimps the
connector on to the end.
A UTP crimper
Multimeter
Multimeters are very versatile electronic measuring tools. A multimeter can measure voltage,
current, and resistance on a wire. There are a wide variety of types and qualities on the
market, from economical $10 versions to ones that cost several thousand
dollars.
A multimeter
Toner Probe
A toner probe is a two-piece tool for tracing cables: a tone generator places a signal on a
wire at one end, and the probe detects that tone at the other end, letting you identify a
particular cable within a bundle or behind a wall.
Cable Tester
Cable testers are indispensable tools for any network technician. Usually you would use a
cable tester before you install a cable to make sure it works. Of course, you can also test
cables after they’ve been run. A decent cable tester will tell you the type of cable, and
more elaborate models will have connectors for multiple types of cables.
Network/Cable Tester/Probe
Loopback Plug
Of the devices listed in Objective 2.10, this is the only one that doesn’t specifically test
cables. Considering that it’s part of the objectives though, it makes as much sense to cover it
here as it would anywhere else.
A loopback plug is for testing the ability of a network adapter to send and receive. The plug
gets plugged into the NIC, and then a loopback test is performed using troubleshooting
software. You can then tell if the card is working properly or not.
Punch-Down Tool
Last but not least is the punch-down tool. It’s not a testing tool but one that allows you to
connect (i.e., punch down) the exposed ends of a wire into a wiring harness, such as a
110 block (used many times in connectivity closets to help simplify the tangled mess of
cables). Below is the tool and its changeable bit.
A punch-down tool
Networking Devices
Most LANs now connect to other LANs, or through WAN connections to internetworks, such
as the Internet. A variety of network connection devices connect networks. Each serves a
special purpose, and two or more of these functions may be contained in a single box.
These are Routers, Bridges, Repeaters and Gateways.
Bridges
Bridges are specific to the hardware technology in use. For instance, an Ethernet bridge
looks at physical Ethernet addresses and forwards Ethernet frames with destination
addresses that are not on the local network.
Below is an illustration of how a bridge works.
Hub/Switch
A hub is a device that is the central connecting point of a LAN. It is little more than a
multiport repeater because it takes a signal received on one port and repeats it on all other
ports. More intelligent devices called switches, or switching hubs, which take an incoming
signal and send it only to the destination port, have largely replaced hubs. This type of switch
is both a bridge and a hub. At one time switches were very expensive, but now small eight-
port switches are inexpensive and commonly used—even in very small LANs.
40-Port Switch
Each computer or other device in a network attaches to a hub of the type appropriate for the
type of LAN. For instance, computers using Ethernet cards must connect to an Ethernet hub.
Wireless devices attach wirelessly to a wireless hub—more often called a wireless access
point (WAP). Devices may combine these functions, as in the case of a WAP that includes
an Ethernet hub (look for one or more RJ45 connectors on a WAP).
Power over Ethernet (POE)
Normally, a networked device such as a wireless access point needs both a data connection
and a separate power supply. However, if the switch is POE-enabled, only the network
connection needs to be made, as the device will receive its electrical power over this cable
as well.
Hence, a POE switch is a network switch that has Power over Ethernet injection built-in.
You simply connect other network devices to the switch as normal and the switch will detect
whether they are POE-compatible and enable power automatically.
POE switches are available to suit all applications, from low-cost unmanaged edge switches
with a few ports, up to complex multi-port rack-mounted units with sophisticated
management.
A midspan (or POE injector) is used to add POE capability to regular non-POE network
links. Midspans can be used to upgrade existing LAN installations to POE, and provide a
versatile solution where fewer POE ports are required. Upgrading each network connection
to POE is as simple as patching it through the midspan, and as with POE switches, power
injection is controlled and automatic. Midspans are available as multi-port rack-mounted
units or low-cost single-port injectors.
Routers
In order to reach a computer on another network, the originating computer must have a
means of sending information to the other computer. To accomplish this, routes are
established and a router—a device that sits at the connection between networks—is used to
store information about destinations. The portion of network between each bridge is a
segment.
A router is specific to one protocol. The type of router used to connect TCP/IP networks is an
IP router. An IP router knows the IP addresses of the networks to which it connects, and the
addresses of other routers on those networks. A router knows the next destination to which it
can transfer information.
Many routers include bridging circuitry, a hub, and the necessary hardware to connect
multiple network technologies together, such as a LAN and a T1 network, or a LAN to any of
the other broadband networks. The Internet has thousands of routers managing the
connections between the millions of computers and networks connected to it.
Gateways
The term gateway is applied to any device, system, or software application that can perform
the function of translating data from one format to another. The key feature of a gateway is
that it converts the format of the data, not the data itself.
You can use a gateway in many ways. For example, a router that can route data from an IPX
network to an IP network is, technically, a gateway. The same can be said of a translational
bridge that converts from an Ethernet network to a Token Ring network and back again.
Software gateways can be found everywhere. Many companies use an email system such
as Microsoft Exchange or Novell GroupWise. These systems transmit mail internally in a
certain format. When email needs to be sent across the Internet to users using a different
email system, the email must be converted to another format, usually to Simple Mail
Transfer Protocol (SMTP). This conversion process is performed by a software gateway.
Another good (and often used) example of a gateway involves the Systems Network
Architecture (SNA) gateway, which converts the data format used on a PC to that used on
an IBM mainframe or minicomputer. A system that acts as an SNA gateway sits between the
client PC and the mainframe and translates requests and replies from both directions. On
the downside, gateways slow the flow of data and can therefore potentially become
bottlenecks.
An SNA Gateway
MSAU
A Multistation Access Unit is a token-ring network device that physically connects network
computers in a star topology while retaining the logical ring structure. One of the problems
with the token-ring topology is that a single non-operating node can break the ring. The
MSAU solves this problem because it has the ability to short out non-operating nodes and
maintain the ring structure.
A Multistation access unit (MSAU) is used in place of the hub that is used on an Ethernet
network. The MSAU performs the token circulation inside the device, giving the network a
physical star appearance. Each MSAU has a Ring In (RI) port on the device, which is
connected to the Ring Out (RO) port on another MSAU. The last MSAU in the ring is then
connected to the first to complete the ring. Because Token Ring networks are few
nowadays, it is far more likely that you will find yourself working with Ethernet hubs and
switches.
ISDN Terminal Adapter
You can think of an ISDN terminal adapter as a kind of digital modem. (Remember that a
modem converts a signal from digital to analog and vice versa. An ISDN terminal adapter
translates the signal between two digital formats.)
Network Attached Storage (NAS)
NAS allows more hard disk storage space to be added to a network that already utilizes
servers without shutting them down for maintenance and upgrades. With a NAS device,
storage is not an integral part of the server. Instead, in this storage-centric design, the server
still handles all of the processing of data but a NAS device delivers the data to the user.
A NAS device does not need to be located within the server but can exist anywhere in a LAN
and can be made up of multiple networked NAS devices.
CSUs/DSUs
A Channel Service Unit/Data Service Unit (CSU/DSU) acts as a translator between the LAN
data format and the WAN data format. Such a conversion is necessary because the
technologies used on WAN links are different from those used on LANs. Some consider a
CSU/DSU as a type of digital modem; but unlike a normal modem, which changes the signal
from digital to analog, a CSU/DSU changes the signal from one digital format to another.
A CSU/DSU has physical connections for the LAN equipment, normally via a serial interface,
and another connection for a WAN. Traditionally, the CSU/DSU has been in a separate box
from other networking equipment; however, the increasing use of WAN links means that
some router manufacturers are now including the CSU/DSU functionality in routers or are
providing the expansion capability to do so.
Network Roles
You can describe a network by the types of roles played by the computers on the network.
The two general computer roles in a network are as clients, computers requesting services,
and servers, computers providing services.
Peer-to-Peer Networks
In a peer-to-peer network all computer systems in the network may play both roles—client
and server. They have equal capabilities and responsibilities; each computer user is
responsible for controlling access, sharing resources, and storing data on their computer. All
of the computers essentially operate as both servers (providing access to shared resources)
and clients (accessing those shared resources). In a typical peer-to-peer environment, each
of the computers can be sharing their files, and the computer connected to the printer can
share the printer.
A typical peer-to-peer network is very small, with users working at each computer. Peer-to-
peer works best in a very small LAN environment with fewer than a dozen computers and
users. A small business office may have a peer-to-peer network. Microsoft’s term for a peer-
to-peer network is a workgroup. Each Microsoft workgroup must have a unique name, as
must each computer.
Client/Server-Based Networks
A client/server-based network uses dedicated computers called servers to store data,
provide print services, or other capabilities. Servers are generally more powerful computer
systems with more capacity than a typical workstation. Client/server based models also
allow for centralized administration and security. These types of networks are also
considerably more scalable in that they can grow very large without adding additional
administrative complexity to the network. A large private internetwork for a globe-spanning
corporation is an example of a client/server-based network. The network administrator can
establish a single model for security, access, and file sharing when configuring the network.
This configuration may remain unchanged as the network grows, but if changes are needed,
they can be done from a central point by the administrator, who does not need to go to each
computer on the network to make administrative changes. The term for a Microsoft
client/server network is a domain. The domain must have a unique name, and each client or
server computer must have a unique name.
Exam Tip: Know the difference between client/server and peer-to-peer networks.
Network Protocols
Every computer network consists of physical and logical components controlled by software.
Protocols describe the rules for how hardware and software work and interact. They include
guidelines that regulate the following characteristics of a network: access method, allowed
physical topologies, types of cabling, and speed of data transfer.
A Protocol is a predefined set of rules that dictates how network devices (such as router,
computer, or switch) communicate and exchange data on the network. Protocol
specifications define the format of the messages that are exchanged. A letter sent through
the postal system also uses protocols. Part of the protocol specifies where the delivery
address on the envelope needs to be written. If the delivery address is written in the wrong
place, the letter cannot be delivered.
Timing is crucial for the reliable delivery of packets. Protocols require messages to arrive
within certain time intervals so that computers do not wait indefinitely for messages that
might have been lost. Systems maintain one or more timers during the transmission of data.
Protocols also initiate alternative actions if the network does not meet the timing rules.
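The timing rules described above are visible in ordinary socket code, where a timeout stops a program from waiting forever. A minimal sketch:

```python
# A receiver that gives up after a set interval instead of waiting forever,
# mirroring the timing rules protocols impose.
import socket

def wait_for_reply(timeout_s=0.5):
    """Return True if data arrives before the timer expires, else False."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout_s)
    sock.bind(("127.0.0.1", 0))
    try:
        sock.recvfrom(1024)      # nothing is ever sent, so this times out
        return True
    except socket.timeout:
        return False
    finally:
        sock.close()

print(wait_for_reply())  # False: no message arrived within the interval
```

When the timer expires, a real protocol stack would take an alternative action, such as retransmitting or reporting an error, rather than simply giving up.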
For example, Ethernet is a standard for the physical components, such as cabling and network
adapters, as well as for the software that controls the hardware, such as the ROM BIOS in
the network adapters and device drivers, which allow the network adapters to be controlled
from the operating system.
Exam Tip: The CompTIA exams require that A+ candidates understand three protocol
suites: TCP/IP, NetBEUI, and IPX/SPX. Each of these involves several protocols and other
software components, and each is sometimes called a “protocol stack.” In recent years,
TCP/IP has largely replaced the other two for use on most computer networks.
However, you may encounter NetBEUI or IPX/SPX in some organizations that continue to
use these protocols.
IPX/SPX
IPX/SPX stands for Internetwork Packet Exchange/Sequenced Packet Exchange. It is the protocol suite
of early Novell networks and contains these two core protocols along with several supporting
ones. Novell designed IPX/SPX specifically for the Novell NetWare network operating
system. It is routable and otherwise similar to TCP/IP, except that it has limited cross-
platform support and cannot be used on the Internet. Beginning with NetWare version 5.0,
Novell moved to TCP/IP, although they continue to support the old protocol stack for
networks with mixed NetWare versions. The Microsoft implementation of IPX/SPX is
NWLink, used mainly to communicate with the older NetWare servers in a Microsoft network.
NetBEUI
NetBEUI (NetBIOS Extended User Interface) is usable only in small networks. It requires
no address configuration and provides faster data transfer than TCP/IP, but it cannot be
routed—the main reason it is limited to small networks. An early Microsoft LAN protocol,
NetBEUI only requires a computer name and a workgroup name for each computer on the
network. There is no notion of a computer or network address.
People often confuse NetBEUI with NetBIOS, but NetBIOS is not a network protocol suite. It
is a single protocol for managing names on a network. In a Windows network, you can use
NetBIOS names and the protocol with any of the protocol suites. However, NetBIOS naming
has limited value in modern networks, and the Internet-style names of the DNS protocol
(which requires TCP/IP) have replaced it.
TCP/IP
Transmission Control Protocol/Internet Protocol (TCP/IP) is by far the most common protocol
suite on both internal LANs and public networks. It is the protocol suite of the Internet.
TCP/IP requires more configuration than the other two protocol suites mentioned here, but it
is the most robust, is usable on very large networks, and is routable.
The term routable refers to the ability to send data to other networks. At the junction of each
network is a router that uses special router protocols to send each packet on its way toward
its destination. There are several protocols in the TCP/IP suite; the two main protocols are
the Transmission Control Protocol (TCP) and the Internet Protocol (IP). There are many sub-
protocols, such as UDP, ARP, ICMP, and IGMP.
TCP/IP allows for cross-platform communication. That means that computers using different
OSs (such as Windows and Unix) can send data back and forth, as long as they are both
using TCP/IP. The following briefly describes the two cornerstone protocols of the TCP/IP
suite.
When receiving data from a network, the Transmission Control Protocol (TCP) uses the
information in the TCP header of each segment to reassemble the data. If TCP is able to
reassemble the message, it sends an acknowledgment (ACK) message to the sending
address. The sender can then discard the datagrams it was saving while waiting for an
acknowledgment. If pieces are missing, TCP sends a non-acknowledgment (NAK) message
back to the sending address, whereupon the sender's TCP resends the missing pieces.
TCP/IP has become the de facto protocol of most networks. Far from the only networking
protocol available, TCP/IP meets the needs of most organizations and is becoming more
and more the one protocol suite that administrators must understand in order to do their
jobs.
IP Class
IPv4 addresses are 32-bit binary numbers. Because numbers of such magnitude are difficult
to work with, they’re divided into four octets (8 bits) and converted to decimal. Thus,
01010101 becomes 85. Each decimal value is limited because it represents an 8-bit binary
number. The range must
be from 0 (00000000) to 255 (11111111) per octet, making the lowest possible IP address
0.0.0.0 and the highest 255.255.255.255. Many IP addresses aren’t available because
they’re reserved for diagnostic purposes, private addressing, or some other function.
Three classes of IP addresses are available for assignment to hosts; they're identified by the
first octet. The first octet must fall into the following range for an address to be within that
class:
Class A: 1 to 126
Class B: 128 to 191
Class C: 192 to 223
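The octet-by-octet conversion described above can be sketched in Python. This is only an illustrative helper, not part of any standard library:

```python
# Sketch: convert a 32-bit IPv4 address between its four binary octets
# and dotted-decimal notation, as described in the text.

def octets_to_decimal(binary_ip):
    """Take four space-separated 8-bit binary octets, return dotted decimal."""
    return ".".join(str(int(octet, 2)) for octet in binary_ip.split())

def decimal_to_octets(dotted):
    """Take a dotted-decimal address, return the four binary octets."""
    return " ".join(format(int(part), "08b") for part in dotted.split("."))

print(octets_to_decimal("01010101 00000000 00000000 00000001"))  # 85.0.0.1
print(decimal_to_octets("255.0.0.0"))  # 11111111 00000000 00000000 00000000
```

Note that 01010101 converts to 85, matching the example in the text, and each octet is bounded by 00000000 (0) and 11111111 (255).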
You have undoubtedly noticed a couple of ranges missing from this. In the Class A range,
network 127.0.0.0 is reserved for localhost and loopback testing. Addresses from 224.0.0.0
upward are Class D multicast and Class E experimental addresses. Others are reserved for
private IP space.
Class A
If you’re given a Class A address, then you’re assigned a number such as 125. With a few
exceptions, this means you can use any number between 0 and 255 in the second field, any
number between 0 and 255 in the third field, and any number between 0 and 255 in the
fourth field. This gives you a total number of hosts that you can have on your network in
excess of 16 million. The default subnet mask is 255.0.0.0.
Class B
If you’re given a Class B address, then you’re assigned a number such as 152.119. With a
few exceptions, this means you can use any number between 0 and 255 in the third field and
any number between 0 and 255 in the fourth field. This gives you a total number of hosts
that you can have on your network in excess of 65,000. The default subnet mask is
255.255.0.0.
The class, therefore, makes a tremendous difference in the number of hosts your network
can have. In most cases, the odds of having all hosts at one location are small. Assuming
you have a Class B address, will there be 65,000 hosts in one room, or will they be in
several locations? Most often, it’s the latter.
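The host counts quoted for each class follow directly from the number of host bits left over by the default mask. A short Python sketch of that arithmetic (the two subtracted addresses are the network and broadcast addresses):

```python
# Sketch: host capacity implied by each default class mask.
# Host bits = 32 minus the network bits in the default mask; two
# addresses per network (network ID and broadcast) cannot be assigned.

DEFAULT_MASK_BITS = {"A": 8, "B": 16, "C": 24}  # network bits per class

def usable_hosts(network_bits):
    host_bits = 32 - network_bits
    return 2 ** host_bits - 2  # subtract network and broadcast addresses

for cls, bits in DEFAULT_MASK_BITS.items():
    print(f"Class {cls}: {usable_hosts(bits):,} hosts")
# Class A: 16,777,214 hosts -- "in excess of 16 million"
# Class B: 65,534 hosts     -- "in excess of 65,000"
# Class C: 254 hosts
```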
IPv6 offers a number of improvements, the most notable of which is its ability to handle
growth in public networks. IPv6 uses a 128-bit addressing scheme, allowing a huge number
of possible addresses: 340,282,366,920,938,463,463,374,607,431,768,211,456. In IPv6
addresses, repeating zeros can be left out so that colons next to each other in the address
indicate one or more sets of zeros for that section.
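Python's standard ipaddress module can demonstrate this zero compression, where a double colon stands in for one or more all-zero groups:

```python
import ipaddress

# Sketch: the ipaddress module shows how repeating zero groups in an
# IPv6 address collapse into a double colon ("::").
addr = ipaddress.IPv6Address("2001:0db8:0000:0000:0000:0000:0000:0001")
print(addr.compressed)  # 2001:db8::1
print(addr.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0001

# The 128-bit address space size quoted in the text:
print(2 ** 128)  # 340282366920938463463374607431768211456
```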
Class Private address range
A 10.0.0.0 to 10.255.255.255
B 172.16.0.0 to 172.31.255.255
C 192.168.0.0 to 192.168.255.255
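Python's ipaddress module already knows these reserved ranges, so checking whether an address is private takes one call. A quick sketch (the sample addresses are illustrative):

```python
import ipaddress

# Sketch: check whether addresses fall inside the private ranges
# tabulated above. The stdlib's is_private property covers them.
for ip in ["10.1.2.3", "172.16.0.9", "192.168.1.1", "8.8.8.8"]:
    print(ip, ipaddress.ip_address(ip).is_private)
# The first three are private; 8.8.8.8 is publicly routable.
```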
A computer on the Internet is identified by its IP address. In order to avoid address conflicts,
IP addresses are publicly registered with the Network Information Centre (NIC).
Automatic Private IP Addressing (APIPA) is a TCP/IP feature Microsoft added to their
operating systems. If a DHCP server cannot be found and the clients are configured to
obtain IP addresses automatically, the clients automatically assign themselves an IP
address, somewhat randomly, in the 169.254.x.x range with a subnet mask of 255.255.0.0.
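Spotting an address in the APIPA range is a common troubleshooting clue that a client failed to reach a DHCP server. A minimal Python sketch of that check (sample addresses are illustrative):

```python
import ipaddress

# Sketch: an APIPA self-assigned address falls in 169.254.0.0/16;
# seeing one usually means the client could not reach a DHCP server.
apipa_net = ipaddress.ip_network("169.254.0.0/16")

def looks_like_apipa(ip):
    return ipaddress.ip_address(ip) in apipa_net

print(looks_like_apipa("169.254.17.42"))  # True -> no DHCP lease obtained
print(looks_like_apipa("192.168.1.10"))   # False -> normal private address
```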
In Windows 7, use the following path to configure a DHCP automatically assigned IPv4
address:
Start > Control Panel > Network and Sharing Center > Change adapter setting > right-
click Local Area Connection > click Properties > TCP/IPv4 > Properties > select radio
button Obtain an IP address automatically > click OK > OK
Client-side DNS
On a TCP/IP network, every computer, interface, or device is issued a unique identifier
known as an IP address that resembles 192.168.12.123. Because of the Internet, TCP/IP is
the most commonly used networking protocol today. You can easily see that it’s difficult for
most users to memorize these numbers, so hostnames are used in their place. Hostnames
are alphanumeric values assigned to a host; any host may have more than one hostname.
For example, the host 192.168.12.123 may be known to all users as Gemini, or it may be
known to the sales department as Gemini and to the marketing department as Apollo9. All
that is needed is a means by which the alphanumeric name can be translated into its IP
address. There are a number of methods of doing so, an example being DNS. On a large
network, you can add a server to be referenced by all hosts for the name resolution. The
server runs DNS and resolves fully qualified domain names (FQDNs) into their IP address.
Multiple DNS servers can serve an area and provide fault tolerance for one another. In all
cases, the DNS servers divide their area into zones; every zone has a primary server and
any number of secondary servers. DNS, like hosts files, works with any operating system
and any version.
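The client side of name resolution can be seen from Python: socket.getaddrinfo hands the name to the operating system's configured resolver (a hosts file entry or a DNS server) and returns the matching IP addresses. The sketch below uses "localhost" so it works without network access; any FQDN could be substituted:

```python
import socket

# Sketch: client-side name resolution. getaddrinfo asks the system's
# configured resolver (hosts file or DNS server) to translate a
# hostname into one or more IP addresses.
def resolve(hostname):
    results = socket.getaddrinfo(hostname, None, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in results})

print(resolve("localhost"))  # includes 127.0.0.1 (and often ::1)
```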
DHCP
Dynamic Host Configuration Protocol (DHCP) falls into a different category. Whereas DNS
resolves names to IP addresses, DHCP issues IP configuration data. Rather than having to
configure a unique IP address (along with a subnet mask and default gateway) for every
host added to a network, an administrator can use a DHCP server to issue these values.
That server is given a range of addresses that it can supply to clients.
For example, the server may be given the IP range (or scope) 192.168.12.1 to
192.168.12.200. When a client boots, it sends out a request for the server to issue it an
address (and any other configuration data) from that scope. The server takes one of the
numbers it has available and leases it to the client for a length of time. If the client is still
using the configuration data when 50 percent of the lease has expired, it requests a renewal
of the lease from the server; under normal operating conditions, the request is granted.
When the client is no longer using the address, the address goes back in the scope and can
be issued to another client.
The primary purpose of DHCP is to lease IP addresses to hosts. The client contacts the
DHCP server and requests an address, and the DHCP server issues one to the client to use
for a period of time. This lease can continue to be renewed as long as the client needs it and
the server is configured to keep renewing it. When it gives the IP address, it also often
includes the additional configuration information as well: DNS server, router information, and
so on.
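The 50-percent renewal rule above can be sketched as a simple timer check. This is a toy model with assumed values, not a DHCP implementation:

```python
# Sketch (hypothetical values): a DHCP client tracks its lease and
# requests renewal once 50 percent of the lease time has elapsed,
# as described above.

def renewal_due(lease_start, lease_seconds, now):
    """Return True once half of the lease has elapsed."""
    return (now - lease_start) >= lease_seconds / 2

LEASE_SECONDS = 86_400  # an assumed one-day lease
start = 0               # lease granted at t=0 on a relative clock

print(renewal_due(start, LEASE_SECONDS, now=10_000))  # False: too early
print(renewal_due(start, LEASE_SECONDS, now=43_200))  # True: 50% elapsed
```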
Subnet mask
Subnetting your network is the process of taking the total number of hosts available to you
and dividing it into smaller networks. When you configure TCP/IP on a host, you typically
need only give three values: a unique IP address, a default gateway (router) address, and a
subnet mask. The default subnet mask for each class of network is shown below.
Purists may argue that you don’t need a default gateway. Technically this is true if your
network is small and you don’t communicate beyond it. For all practical purposes, though,
most networks need a default gateway.
When you use the default subnet mask, you’re allowing for all hosts to be at one site and not
subdividing your network. Any deviation from the default signifies that you’re dividing the
network into multiple subnetworks.
Gateway
A gateway can have two meanings. In TCP/IP, a gateway is the address of the machine to
send data to that is not intended for a host on this network (in other words, a default
gateway). A gateway is also a physical device operating between the Transport and
Application layers of the OSI model that can send data between dissimilar systems. The best
example of the latter is a mail gateway—it doesn’t matter which two networks are
communicating; the gateway allows them to exchange e-mail.
A gateway, as it is tested on the exam, is the server (router) that allows traffic beyond the
internal network. Hosts are configured with the address of a gateway (called the default
gateway), and if they need to correspond with a host outside the internal network, the data is
sent to the gateway to facilitate this. When you configure TCP/IP on a host, one of the fields
that should be provided is a gateway field, which specifies where data not intended for this
network is sent in order to be able to communicate with the rest of the world.
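The forwarding decision a host makes with its subnet mask and default gateway can be sketched in Python. The network, mask, and gateway address below are illustrative examples only:

```python
import ipaddress

# Sketch: a host compares the destination against its own subnet
# (IP address plus subnet mask). Local destinations are delivered
# directly; everything else is sent to the default gateway.
host_net = ipaddress.ip_network("192.168.1.0/255.255.255.0")
default_gateway = "192.168.1.1"

def next_hop(destination):
    if ipaddress.ip_address(destination) in host_net:
        return "deliver directly on the local network"
    return f"send to default gateway {default_gateway}"

print(next_hop("192.168.1.77"))  # local delivery
print(next_hop("8.8.8.8"))       # handed to 192.168.1.1
```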
Devices and computers connected to the Internet use a protocol suite called TCP/IP to
communicate with each other. The information is transmitted most often via two protocols,
TCP and UDP, as shown in the table below.
Network software applications use these protocols and ports to perform functions over the
Internet or over a network. Some network software applications include services to host a
web page, send email, and transfer files. These services may be provided by a single server
or by several servers. Clients use well-known ports for each service so that the client
requests can be identified by using a specific destination port.
To understand how networks and the Internet work, you must be familiar with commonly
used protocols and associated ports. Some uses of these protocols are to connect to a
remote network device, convert a website URL to an IP address, and transfer data files. You
will encounter other protocols as your experience in IT grows, but they are not used as often
as the common protocols described here.
The table below summarizes some of the more common network and Internet protocols and
the port numbers they use. Your understanding of these protocols will determine how well
you understand how networks and the Internet work.
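As a study aid, a few of the well-known port assignments commonly covered on the exam can be kept in a simple lookup table. These port numbers are standard assignments; the helper itself is just an illustration:

```python
# Sketch: well-known service-to-port mappings frequently tested on
# the A+ exam, stored in a plain dictionary.
WELL_KNOWN_PORTS = {
    "FTP": 21, "SSH": 22, "TELNET": 23, "SMTP": 25, "DNS": 53,
    "HTTP": 80, "POP3": 110, "IMAP": 143, "HTTPS": 443, "RDP": 3389,
}

def port_for(service):
    """Look up the well-known port for a service name (case-insensitive)."""
    return WELL_KNOWN_PORTS.get(service.upper())

print(port_for("http"))   # 80
print(port_for("https"))  # 443
print(port_for("dns"))    # 53
```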
Every SOHO network needs a common connecting device, such as a router, switch or hub.
Routers are the best choice for small offices because they have the ability to identify the IP
address of each computing device. Good routers also include a firewall that helps prevent
attacks from files coming in from the Internet. Each computing device also needs a network
card and either network cabling or wireless network adapters. You also need a modem to
receive the signal from your Internet Service Provider. SOHO users need a cost-effective,
easy and quick installation of a small network.
A+ Exam Tip The A+ 220-801 and A+ 220-802 exams expect you to be able to install,
configure, and secure a SOHO wired and wireless router.
One of the major complaints about wireless networking, especially in the small office/home
office (SOHO) implementations, is that it offers weak security. In many cases, the only thing
you need to do to access a wireless network is walk into an unsecured WAP’s coverage
area and turn on your wireless device. Further, data packets float through the air instead of
traveling safely wrapped up inside network cabling. What’s to stop an unscrupulous PC tech
with the right equipment from grabbing those packets out of the air and reading that data?
Wireless networks use four methods to secure access to the network itself and secure the
data that’s being transferred. Changing the Service Set Identifier (SSID) parameter—also
called the network name—and default administrator password is the first step. Better WAPs
enable you to change the administrator’s user name too. You can tighten security even
further by employing MAC filtering to create a list of the machines that are permitted to
access a network or are denied access to a network. Enabling wireless encryption through
Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or WPA2 ensures that the
data packets themselves are secure while in transit.
Always change the default SSID to something unique, and change the administrator
password on the WAP right away. If you have the option to change the administrator user
name, do that as well. Configuring a unique SSID name and password is the first step you
should do to set up a wireless network. The default SSID names and passwords are well
known and widely available online. This is intended to make setting up a wireless network as
easy as possible, but can cause problems in places with a lot of overlapping wireless
networks. Each wireless network node and access point needs to be configured with the
same unique SSID name. This SSID name is then included in the header of every data
packet broadcast in the wireless network’s coverage area. Data packets that lack the correct
SSID name in the header are rejected.
If you don’t change the default SSID and password, as soon as a potential hacker picks up
the “Linksys” network that’s broadcasting madly, he’ll try to access the WAP using the
default password.
MAC Filtering
Most WAPs support MAC address filtering, a method that enables you to limit access to your
wireless network using the physical, hard-wired address of the units’ wireless network
adapters. MAC filtering is a handy way to create a type of “accepted users” or “denied users”
list to limit access to your wireless network. A table stored in the WAP—the access control
list (ACL)—lists the MAC addresses that are permitted to participate or that are excluded
from participating in that wireless network. An inclusive list is called a white list; a list of
excluded MAC addresses is called a black list. Any data packets that don’t contain the MAC
address of a node listed in the white list table are rejected. The reverse is true for a black list.
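The white-list/black-list logic described above can be sketched in a few lines. The MAC addresses below are made-up examples:

```python
# Sketch: how a WAP's access control list (ACL) behaves as a white
# list or a black list. MAC addresses here are illustrative only.
WHITE_LIST = {"00:1A:2B:3C:4D:5E", "00:1A:2B:3C:4D:5F"}

def admitted(mac, acl, mode="white"):
    """White list: only listed MACs pass. Black list: listed MACs are rejected."""
    mac = mac.upper()
    return (mac in acl) if mode == "white" else (mac not in acl)

print(admitted("00:1a:2b:3c:4d:5e", WHITE_LIST))           # True: listed
print(admitted("AA:BB:CC:DD:EE:FF", WHITE_LIST))           # False: not listed
print(admitted("AA:BB:CC:DD:EE:FF", WHITE_LIST, "black"))  # True: not excluded
```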
WEP
The next step up in wireless security is enabling WEP encryption. WEP encryption was
meant to secure data being wirelessly transmitted. WEP encryption uses a standard 40-bit
encryption to scramble data packets. Many vendors also support 104-bit encryption.
WPA
WPA was designed to address the weaknesses of WEP, and it functions as a sort of security
protocol upgrade to WEP-enabled devices. WPA uses the Temporal Key Integrity Protocol
(TKIP), which provides a new encryption key for every sent packet. This protects WPA from
many of the attacks that make WEP vulnerable. WPA also offers security enhancements
such as an encryption key integrity-checking feature and user authentication through the
industry standard Extensible Authentication Protocol (EAP). The use of EAP is a huge
security improvement over WEP. User names and passwords are encrypted and therefore
much more secure.
Even with these enhancements, WPA was intended only as an interim security solution until
the IEEE 802.11i security standard was finalized.
WPA2
Recent versions of Mac OS X and Microsoft Windows support the full IEEE 802.11i
standard, WPA2, to lock down wireless networks. WPA2 uses the Advanced Encryption
Standard (AES), among other improvements, to provide a secure wireless environment. If
you haven’t upgraded to WPA2, you should.
WPS
While most techs can configure wireless networks blindfolded, the thought of passwords and
encryption might intimidate the average user. If anything, most people plug in their wireless
router and go on their merry way. Because everyone should secure their wireless network,
the developers of Wi-Fi created Wi-Fi Protected Setup (WPS), a standard included on most
WAPs and clients to make secure connections easier to configure. WPS works in one of two
ways. Some devices use a push button, such as the one shown below, and others use a
password or code.
This router is typical of many SOHO routers and is several devices in one:
1. As a router, it stands between the ISP network and the local network, routing traffic
between the two networks.
2. As a switch, it manages several network ports that can be connected to wired
computers or to a switch that provides more ports for more computers.
3. As a DHCP server, it assigns an IP address to each computer on the network.
4. As a wireless access point, it allows wireless computers to connect to the network. This
wireless connection can be secured using wireless security features.
5. As a firewall, it blocks unwanted traffic initiated from the Internet and provides
Network Address Translation (NAT) so that computers on the LAN can use private or
link local IP addresses. Another firewall feature is to restrict Internet access for
computers behind the firewall. Restrictions can apply to days of the week, time of
day, keywords used, or certain web sites.
6. As an FTP server, you can connect an external hard drive to the router, and the FTP
firmware on the router can be used to share files with network users.
The speed of a network depends on the speed of each device on the network and how well
a router manages that traffic. Routers, switches, and network adapters currently run at three
speeds: Gigabit Ethernet (1000 Mbps or 1 Gbps), Fast Ethernet (100 Mbps), or Ethernet (10
Mbps). If you want your entire network to run at the fastest speed, make sure all your
devices are rated for Gigabit Ethernet.
An example of a multifunction router is the Linksys E4200 by Cisco, shown below. It has
one port for the broadband modem (cable modem or DSL modem) and four ports for
computers on the network. The USB port can be used to plug in a USB external hard drive
for use by any computer on the network. The router is also a wireless access point having
multiple antennas to increase speed and range using Multiple In, Multiple Out (MIMO)
technology. The antennas are built in.
Terms to note:
CDMA (Code Division Multiple Access) - A method for transmitting multiple digital signals
simultaneously over the same carrier frequency (the same channel). Although used in
various radio communications systems, the most widely known application of CDMA is for
cellphones.
1) Which network device regenerates the data signal without segmenting the
network?
A. Hub
B. Modem
C. Switch
D. Router
2) When planning the network in a new building, the technician notes that the
company requires cabling that can extend up to 295 feet (90 m) with enhanced
bandwidth support at an affordable price. Which cable type will the technician pick
if the preference is for the most common type of cabling that is used on networks?
A. Category 3
B. Category 7
C. Category 5e
D. Category 6a
3) Which term is used to describe any device on a network that can send and receive
information?
A. Host
B. Workstation
C. Server
D. Console
E. Peripheral
4) Which standards organization publishes current Ethernet standards?
A. EIA/TIA
B. IEEE
C. ANSI
D. CCITT
5) Which technology requires customers to be within a particular range of the
service provider facility in order to be provided the maximum bandwidth for
Internet access?
A. Cable
B. DSL
C. ISDN
D. Satellite
6) A company has four sites: headquarters and three branch offices. The four sites
are interconnected through WAN connections. For redundancy, a full-mesh
topology is desirable for this WAN. How many individual WAN links are needed to
create a full-mesh topology?
A. 4
B. 6
C. 8
D. 12
7) Which statement describes the physical topology for a LAN?
A. It depicts the addressing scheme that is employed in the LAN.
B. It describes whether the LAN is a broadcast or token-passing network.
C. It defines how hosts and network devices connect to the LAN.
PRINTERS
Objectives
Introduction
Printer Characteristics
Print Technologies
Impact printers
Non-impact Printers
Printer Control
Installing and Configuring Printer Upgrades
Printer Interface Types
Configuring Options and Device Settings
Labs
Introduction
A printer is an electromechanical device that converts text and graphical documents from
electronic form to physical form. Printers are generally external peripheral devices that
connect to computers or laptops through a cable or wirelessly to receive input data and
print it on paper.
A wide range of printers are available with a variety of features ranging from printing black
and white text documents to high quality colored graphic images.
A printer's quality is identified by features such as color quality, printing speed, and
resolution. Modern printers often combine multiple functions: printer, scanner, photocopier,
fax, and so on. To serve different needs, a variety of printers are available that work on
different types of technologies.
Printer Characteristics
i. Speed: Measured in characters per second (cps) or pages per minute (ppm), the
speed of printers varies widely. Daisy-wheel printers tend to be the slowest, printing
about 30 cps. Line printers are fastest (up to 3,000 lines per minute). Dot-matrix
printers can print up to 500 cps, and laser printers range from about 4 to 20 text
pages per minute.
ii. Impact or non-impact: Impact printers include all printers that work by striking an
ink ribbon. Daisy-wheel, dot-matrix, and line printers are impact printers. Non-impact
printers include laser printers and ink-jet printers. The important difference between
impact and non-impact printers is that impact printers are much noisier.
iii. Graphics: Some printers (daisy-wheel and line printers) can print only text. Other
printers can print both text and graphics.
iv. Fonts: Some printers, notably dot-matrix printers, are limited to one or a few fonts. In
contrast, laser and ink-jet printers are capable of printing an almost unlimited variety
of fonts. Daisy-wheel printers can also print different fonts, but you need to change
the daisy wheel, making it difficult to mix fonts in the same document.
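The speed figures above can be made concrete with a little arithmetic. The 3,000-characters-per-page figure below is an assumed page length for illustration, not from the text:

```python
# Rough arithmetic: how long does one page of about 3,000 characters
# take at the speeds quoted above? (Page length is an assumed figure.)
PAGE_CHARS = 3000

def seconds_per_page(cps):
    return PAGE_CHARS / cps

print(f"Daisy wheel (30 cps): {seconds_per_page(30):.0f} s/page")   # 100 s
print(f"Dot matrix (500 cps): {seconds_per_page(500):.0f} s/page")  # 6 s
# A 20 ppm laser printer needs only 60 / 20 = 3 seconds per page.
print(f"Laser (20 ppm): {60 / 20:.0f} s/page")
```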
Print Technologies
A printer outputs data that is seen on the computer screen on to a paper. Most printers are
used through a parallel port, but some newer ones use USB connections. The most crucial
printer measurement is its dots per inch rating. Printers are best chosen by actually seeing
the quality of the printer output. There are many types of print technologies like Daisy wheel,
Dot matrix, Laser, Inkjet etc. Printers are normally categorized into impact and non-impact
types.
Non-impact
The non-impact printers include those printers that do not have any kind of contact with the
paper while printing either text or image. For Example: Inkjet, Laser, Bubble Jet, etc. These
printers use different technology to print an image. For example, a laser printer uses heat to
attach microscopic particles of dry toner to specific parts of the page. An inkjet printer has
tiny nozzles through which it sprays droplets of ink onto the page.
Impact Printers
Inside The Solid Ink Printer
A solid ink printer is fast and has a lower cost per page, and its color sticks are easy to
load: each ink stick is shape-coded and numbered to ensure the right color goes only in
the right place.
a) Front Panel Display: Intuitive front panel interface eases installation, helps with the
management and troubleshooting of the printer and gives local access to advanced
features.
b) Paper Trays: It has large paper capacities with a 100-sheet multipurpose tray (Tray
1) and a 525-sheet main tray (Tray 2). Additional trays may be added. Supported
media sizes range from 3.5" x 5" to 8.5"x14" legal size pages.
Printing Procedure
Each pin in the print head is wrapped in a coil of wire and is held in the rest position by a
small magnet and a spring.
The printer controller activates a particular pin by sending a signal to the print head.
The print head in turn energizes the coil around the appropriate print wire, turning
the print wire into an electromagnet.
This electromagnet fires the print pin, thrusting it into the ink ribbon and leaving a dot
on the paper.
i. Speed: Given in characters per second (cps), the speed can vary from about 50 to
over 500 cps. Most dot-matrix printers offer different speeds depending on the quality
of print desired.
ii. Print quality: Determined by the number of pins (the mechanisms that print the
dots), it can vary from 9 to 24. The best dot-matrix printers (24 pins) can produce
near letter-quality type, although you can still see a difference if you look closely.
iii. Noise: Compared to laser and ink-jet printers, dot-matrix printers are notorious for
making a racket.
However, there is one redeeming point. Dot-matrix printers can print carbon copies of the
original document. So when it comes to printing multiple copies of the same document
simultaneously, dot-matrix printers score higher than printers based on any other
technology.
Daisy wheel
Working of daisy wheel printers is very similar to typewriters. A circular printing element
(known as daisy wheel) is the heart of these printers that contains all text, numeric
characters and symbols mould on each petal on the circumference of the circle. The printing
element rotates rapidly with the help of a servo motor and pauses to allow the printing
hammer to strike the character against the paper.
Non-Impact Printers
Ink Dispersion
As the name suggests, ink-dispersion printers spray ink through tiny nozzles onto the
paper. This is the primary printing method adopted by the majority of non-impact printers.
Examples of printer technologies that employ ink dispersion are inkjet and bubble-jet
printers.
Inkjet Printer
An inkjet printer is a complicated assortment of components. These components fall into
two categories:
Print Assembly
Print head: The print head has a series of nozzles that are used to spray dots of ink
on paper.
Ink Cartridge: This acts as a store or a repository for ink.
Stepper Motor: The Stepper Motor transports the print head assembly (Print head and ink
cartridge) from one corner of the page to the other.
Belt: Keeps the print head assembly and stepper motor securely fastened to
one another.
Stabilizer Bar: Makes sure that the print process goes through with precision, accuracy
and control.
Most color bubble-jet printers include multiple print heads, one for each of the CMYK (cyan,
magenta, yellow, and black) print inks. The print cartridge must be replaced as the ink supply
runs out. Inside the ink cartridge are several small chambers. At the top of each chamber are
a metal plate and a tube leading to the ink supply. At the bottom of each chamber is a small
pinhole. These pinholes are used to spray ink on the page to form characters and images as
patterns of dots, similar to the way a dot-matrix printer works but with much higher
resolution.
A transducer is located at the back of the reservoir of each nozzle. The transducer receives
a tiny electric charge that causes it to expand. When the transducer expands inward, it
forces a tiny amount of ink out of the nozzle. When it relaxes back, it pulls more ink into
the reservoir to replace the ink that was sprayed out.
There are mainly two technologies that are used to spray the ink by nozzles. These
are:
· Thermal Bubble – This technology, also known as bubble jet, is used by various
manufacturers such as Canon and Hewlett-Packard. When the printer receives a command
to print something, current flows through a set of tiny resistors, which produce heat.
This heat vaporizes the ink to create a bubble. As the bubble expands, some of
the ink is pushed out of the nozzle and deposited on the paper. The bubble then
collapses, and the resulting vacuum pulls more ink from the ink cartridge. There are
generally 300 to 600 nozzles in a thermal print head, all of which can spray ink
simultaneously.
An inkjet printer can print 100 to several hundred pages, depending on the nature of the
hard copy, before the ink cartridge needs to be replaced.
A typical inkjet receives control information from the printer driver/PC, or may process the
printout in its onboard electronics. Either way, rollers advance a page from your paper tray
(1) under a sliding printhead/cartridge assembly (2). Then, the printhead stepper motor (3)
kicks in, drawing the assembly on a sliding rod (4) to its starting position, usually via a belt
(5).
The printhead (6) proper is an incredible piece of miniaturization, in some cases fabricated
via an etching process similar to semiconductor manufacture. On some printers, the head
and ink cartridge (7) are one unit. The head's microscopic nozzles (8)—anywhere from
dozens to literally thousands—are outlets for incredibly tiny ink chambers (9), which are fed
by the cartridge's reservoirs. Microscopic droplets (10), measured in millionths of a millionth
of a liter, fire through the nozzles.
Most inkjets (Epsons exempted) use "thermal" technology in which a tiny resistor (11) in an
ink chamber is pulsed, as needed, with intense current, superheating the ink and vaporizing
part of the droplet. This builds up terrific pressure that blasts the ink out of the nozzle onto
your page.
(Epson employs a piezoelectric process in which applying current to a crystal in an ink
chamber causes it to oscillate, ejecting the ink.) Capillary action then draws new ink into the
chamber. A given chamber can repeat the heating/firing/cooling cycle thousands of times per second.
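Combining the figures quoted for thermal inkjets (hundreds of nozzles, each able to cycle thousands of times per second) gives a rough sense of throughput. The numbers below are illustrative assumptions only, not specifications for any real printer.

```python
# Rough inkjet throughput estimate. Both values are illustrative
# assumptions drawn from the ranges mentioned in the text.
nozzles = 300               # low end of the 300-600 nozzle range
firings_per_second = 5000   # "thousands of times per second"

droplets_per_second = nozzles * firings_per_second
print(droplets_per_second)  # 1500000
```

Even at the low end of both ranges, the head can place over a million droplets per second, which is what makes page-per-minute inkjet speeds possible.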
Laser Printer
Laser printers are very popular in offices, but not so much for home use due to their initial
cost and cost of consumables (items which must be periodically replaced). Laser printers
use dry ink, called toner, static electricity, and heat to place and bond the ink onto the paper.
This is known as the electro-photographic process.
The main principle behind the working of a laser printer is static electricity: particles with
opposite charges attract each other.
The laser printer uses this attraction as a temporary glue. The most important part of
the laser printer, without which it cannot function, is the photoreceptor.
The photoreceptor consists of a revolving drum or cylinder coated with a highly
photoconductive material that is discharged when exposed to light photons.
Before we examine the steps of this process, let's first take a look at some of the
components of a laser printer:
Cleaning Blade - This rubber blade or felt pad removes excess toner off the drum
after the print process has completed.
Photosensitive Drum - The core of the electro-photographic process. This component
should not be exposed to light and needs to be replaced periodically. Also known as
an "imaging unit" or "imaging kit".
Primary Corona Wire - Highly negatively charged wire erases the charge on the
photosensitive drum to make it ready for another image. Needs to be cleaned
periodically.
Transfer Corona - A roller that contains a positively charged wire to pull the toner off
the photosensitive drum and place it on the page.
Toner - Plastic resin that is the ink for a laser printer. Naturally negatively charged.
Fusing unit - Bonds the toner particles to the page to prevent smearing. Uses heat to
bond. Needs to be replaced periodically as the fusing platens (rollers) get worn down.
Each of these parts has a very important role to play in the printing process.
In most cases, your PC talks with controller circuitry (1) in the laser printer to queue up
and translate printing data; a raster image processor (RIP) converts images and text into a
virtual matrix of tiny dots.
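The raster image processor's job of turning content into a matrix of dots can be illustrated with a toy sketch. This is only a crude thresholding demo under assumed pixel values; real RIPs use far more sophisticated halftoning (for example, error diffusion).

```python
# Toy illustration of what a raster image processor (RIP) does:
# convert grayscale intensities (0 = black, 255 = white) into a matrix
# of print/no-print dots using a fixed threshold. Real RIPs are far
# more sophisticated; this only sketches the idea.
def rasterize(gray_rows, threshold=128):
    """Return a dot matrix: 1 = place a dot, 0 = leave blank."""
    return [[1 if pixel < threshold else 0 for pixel in row]
            for row in gray_rows]

page = [
    [0, 200, 255],
    [90, 128, 30],
]
dots = rasterize(page)
print(dots)  # [[1, 0, 0], [1, 0, 1]]
```

Each 1 in the output corresponds to a spot where the laser would reverse the drum's charge so toner clings there.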
The main actor, however, is the photoconducting drum (2), a specially coated cylinder that
receives a positive or negative charge from a charging roller (3) (or, in some printers, a
corona wire). A laser beam (4), switching rapidly on and off and deflected off a rotating
mirror (5), scans the charged drum horizontally in precise lines. When the beam flashes on,
it reverses the charge of tiny spots on the drum, corresponding to dots that are to be printed
black. After the laser scans a line, a stepper motor advances the drum, and the laser repeats
the process—all, of course, blindingly fast.
Next, the drum's laser-kissed portion encounters the developer roller (6), which is coated in
charged toner particles from the toner hopper (7), part of the toner cartridge. Charged toner
clings to the discharged areas of the drum, reproducing, in reverse, your images and text.
Last, your page, with its imprint of tenuously anchored toner, reaches the fuser (12)—a heat
roller and a pressure roller. It melts the toner, which contains resins and sometimes wax,
onto the page. Voila, pages in your out tray.
Thermal Printer
A thermal printer is a type of printer that produces a printed image by selectively heating
coated thermochromic paper, commonly known as thermal paper, as the paper
passes over the thermal print head. The coating turns black in the areas where it is heated,
producing an image. In short, it uses heat to transfer an impression onto paper.
There are three thermal printing methods: direct thermal, dye sublimation and thermal
transfer. Each method uses a thermal printhead that applies heat to the surface being
marked. Thermal transfer printing uses a heated ribbon to produce durable, long-lasting
images on a wide variety of materials. No ribbon is used in direct thermal printing, which
creates the image directly on the printed material. Direct thermal media is more sensitive to
light, heat and abrasion, which reduces the life of the printed material. Some thermal printers
use heat-sensitive paper, while others use an ink ribbon to create the image.
Thermal transfer printers used in point-of-sale or retail environments typically use non-
impact dot-matrix printheads.
For portable printers using direct thermal printing such as the Brother (formerly Pentax)
PocketJet series, the usual source for such paper is the printer vendor or its authorized
resellers. If the direct thermal printer is used for bar codes or point-of-sale transactions, you
can get suitable paper or label stock from bar code or POS equipment suppliers and
resellers.
If the printer uses thermal transfer and is not designed for photo printing, most smooth paper
and label stocks are satisfactory, including both natural and synthetic materials. However,
dye-sublimation photo printers must use special media kits that include both a ribbon and
suitable photo paper stocks.
Exercise
Research printers that combine a photocopier and scanner, e.g. 3-in-1 printers.
Also research how color laser and inkjet printers work.
Printer Control
Drivers
Printer drivers are used to control and configure printers and the print features of
multifunction devices. Printer drivers might be supplied by Microsoft on the Windows
distribution media or by the printer vendor itself. Generally, printer drivers provided by the
printer vendor offer more configuration options and utilities for cleaning and maintenance
than the drivers provided by Microsoft. Note that driver features might vary by Windows
version, even if a single driver file supports more than one Windows version.
It’s a good idea to check for updated versions of printer or multifunction device drivers before
installing the device and periodically thereafter. Updated drivers might include bug fixes or
enhanced features. Also, if a system is upgraded to a newer version of Windows or another
operating system, you will need new drivers. You can use Windows Update to locate drivers
or download them directly from the printer vendor.
Labs
Installing Device Drivers
Some printers, most often laser printers, might offer upgradeable firmware. Firmware, which
is software on a chip, controls the basic operation of a printer. Firmware can be
implemented in a flash-upgradeable chip built into the printer, or in a special memory
module sometimes called a “personality” module. To determine the firmware
revision installed in a printer, use its self-test function to make a test printout.
The exact method you should use to upgrade depends upon the printer. Consult the support
website for the printer to determine if a firmware update is available, what its benefits are,
and how to install it.
Installing memory (RAM) and firmware upgrades can help printers provide better
performance, handle more complex documents, support new types of memory cards and
interfaces, and enhance other printer functions. The following sections help you understand
these processes.
Memory
Laser and solid ink printers use memory modules to hold more page information, to reduce
or eliminate the need to compress page information when printing, or to enable higher-
resolution printing with complex pages. Most recent printers with upgradeable memory use
the DIMM memory module form factor, but printers do not use the same types of DIMMs as
desktop or laptop computers. To order additional memory, you can contact the printer
vendor or contact a third-party memory vendor that offers compatible memory.
Wired
USB 2.0—This is used by most inkjet, solid ink, dye-sublimation, thermal, and laser printers,
either when connected directly to a PC or connected to a network via a print server. Also
used by most multifunction (all-in-one) units.
Parallel—Used by legacy impact printers as well as older inkjet and laser printers. Also used
by some legacy all-in-one units.
RS-232 (serial) — Serial interfaces are used by legacy impact and laser printers.
Ethernet —Ethernet interfaces are used by inkjet and laser printers that are network-ready.
To add support for Ethernet local area networking to a printer that does not have a built-in
Ethernet port, connect it to an Ethernet print server. Print servers are available in versions
that support USB or parallel printers, and they enable the printer to be accessed via the print
server’s IP address.
Front and rear views of an Ethernet print server that supports USB printers.
When a printer includes its own Ethernet port, it is assigned an IP address on a TCP/IP
network; similarly, a print server is also assigned an IP address on a TCP/IP network. To
configure a printer for network use, you might need to install a network printer driver instead
of the normal printer driver; you will also need to specify whether the printer has a manually-
assigned IP address or receives an IP address from a DHCP server on the network.
SCSI—Used by high-end PostScript laser printers.
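Once a network printer or print server has an IP address, a quick way to verify it is reachable is to attempt a TCP connection to port 9100, the raw printing port used by many JetDirect-compatible devices. The sketch below assumes your printer listens on that port; the address shown is a placeholder, not a real device.

```python
# Minimal reachability check for a network printer or print server.
# Many network printers accept raw print jobs on TCP port 9100
# (JetDirect-style); check your printer's documentation to confirm.
import socket

def printer_reachable(ip, port=9100, timeout=2.0):
    """Return True if a TCP connection to the printer port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (placeholder address - substitute your printer's actual IP):
# print(printer_reachable("192.168.1.50"))
```

A failed connection does not always mean the printer is down; a firewall or a printer that only speaks IPP/LPD would also cause the check to fail.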
Printer Options
Printer options can be changed by the user before printing a document. However, if the
printer will be used in the same way most of the time, it can be useful to configure the device
with the most commonly-used settings.
Printer options are configured through the printer’s properties sheet. You can access printer
properties sheets by doing one of the following:
a) Right-clicking the printer’s icon in the Printers, Printers and Faxes, or Devices and
Printers folder in Control Panel and selecting Properties. Use this method to set
defaults that will be used for all print jobs.
b) Opening the Print dialog in an application and clicking the Properties button. Use
this method to change settings for the current print job.
If the printer uses a Microsoft-supplied printer driver, the properties sheet will have some or
all of the following tabs:
General—Features the Print Test Page button, which prints a test page of graphics
and text and lists the driver files, and the Printing Preferences button, which opens the
Printing Preferences menu.
Sharing—Enables or disables printer sharing over the network. In Windows 2000/XP
this is available only if File and Print Sharing is enabled on the system. To enable
Printer Sharing in Windows 7/Vista go to Start, Control Panel, Network and
Sharing Center, click on the down arrow for Printer Sharing, and select the radio
button labeled Turn on Printer Sharing.
The Sharing tab also features the Additional Drivers button. Once configured by the
local user, this permits remote users to connect to the printer with other versions of
Windows. If this feature is not configured, users running other versions of Windows
must download and install the appropriate driver for their version of Windows before
they can connect to a remote printer.
Ports—Lists and configures printer ports and paths to network printers.
Advanced—Schedules availability of printer, selects spooling methods, printer
priority, print defaults (quality, paper type, orientation, and so forth), printer driver,
print processor, and separator page.
Tip
Some laser printers report the amount of memory installed to the operating system so that
the properties sheet reflects this information. However, you should not assume that all laser
printers do so. Be sure to verify that the memory size shown in the printer properties sheet is
accurate. If not, change it to match the installed memory size.
To determine the installed memory size, use the printer’s own print test option.
The selections made on these tabs are automatically saved as the defaults when you click
OK and close the dialog.
The Printer Preferences button on the General tab opens the preferences menu for the
printer. The preferences menu can vary a great deal from printer to printer, but typically
includes options such as these:
• Inkjet printers—Paper type, paper size, paper layout, print mode, utilities (head cleaning,
alignment, ink levels), and watermarking.
• Laser printers—Layout, page order, resolution, font substitutions, printer features, pages
per sheet, and watermarking.
Labs
Before sharing the printer, ensure that the operating system's sharing settings allow this.
This is achieved with the following steps.
1. Install the printer drivers. In order to share a printer, it must be installed on the
computer it is connected to. Most modern printers connect via USB and will install
automatically when they are connected.
2. Open the Control Panel. You can access the Control Panel in Windows 7 by
clicking the Start menu and selecting Control Panel. In Windows 8, press Win+X and
select Control Panel from the menu.
3. Open the Network and Sharing Center. If your Control Panel is in Category view,
click "Network and Internet", and then select "Network and Sharing Center". If your
Control Panel is in Icon view, click the "Network and Sharing Center" icon.
4. Click the "Change advanced sharing settings" link. This is located in the left
navigation pane of the Network and Sharing Center.
5. Expand the profile you need to change. You will see three different options when
you open the "Advanced share settings": Private, Guest or Public, and All Networks.
If you are on a Home network, expand the Private section.
6. Enable "File and printer sharing". Toggle this on to allow other devices to connect
to your printer. This will also allow you to share files and folders with other computers
on the network.
Now that file and printer sharing has been turned on, you will need to share the printer
itself.
1. Open the Devices and Printers (or Printers) folder in Control Panel and locate the
printer you want to share.
2. Right click on it and select Printer properties. If you have a multifunctional printer
installed that works also as a fax and/or scanner, then you may be asked to choose
which kind of properties you want to select. Select the printer properties.
3. The printer's Properties window opens. Depending on the printer model and its
drivers, you will see different tabs and options. Go to the Sharing tab, which is
common to all printers.
4. Here you can share the printer with the entire network. Check the box which says
"Share this printer". Then, you can edit the share name of the printer, in case you
don't want to use the default provided by Windows.
5. Rendering all print jobs on client computers can help keep performance levels up on
the computer where the printer is plugged in, especially when big printing jobs are
ordered. Check the "Render print jobs on client computers" box if you want this
feature enabled.
6. When done, click on OK. The printer is now shared with the computers on your
network, regardless of the operating systems they are using.
Task:
What is the procedure for sharing a local printer via operating system settings?
1. What is the major difference between a laser printer and an LED printer? (Choose
all that apply.)
A. LED printers use an LED array to perform the transfer of images.
B. LED printers use an LED drum.
C. Laser printers are of better print quality.
D. Laser printers use a laser to transfer the image to the drum.
2. What happens if a page you print on a laser printer requires more memory than
the printer has installed?
A. It will use hard drive space to print.
B. The printer will try to print the page but will stop before the job is finished.
C. The printer will continue to work but at a slower than normal pace.
D. The printer will notify you that you need to free some resources.
3. Which of the following will provide you with the most configuration options and
utilities when installing a new printer?
A. A driver from Microsoft
B. A plug-and-play driver
C. A driver from the printer vendor
D. A driver from Automatic Updates
4. Which of the following must you do to determine what firmware version the printer
is using?
A. Print a test page
B. Look on the back of the printer
C. Review the printer properties page
D. Review the firmware update page
5. Which of the following are true about the laser printing process? (Choose all that
apply.)
A. A page does not start printing until the entire page is received.
B. The print is transferred to the paper.
C. The page is transferred to the print mechanism.
D. The page will start printing immediately after the print button is pushed.
6. Most inkjet, laser, and thermal printers use this interface to connect a printer to a
computer. (Choose two.)
A. RJ-45
B. USB
C. Parallel
D. LED
7. You must provide which of the following in order to add a printer to a network?
A. A printer device
B. A print server
C. A printer NIC card
D. A Bluetooth adapter
8. When installing a printer, what is the easiest way to ensure compatibility to make
sure that Windows recognizes the new hardware?
A. Windows Update
B. A search engine
C. Vendor’s website
D. Hardware compatibility list
9. Where would you find the setting for the print spooler on a Windows operating
system?
A. Computer Management
B. The Printers folder
C. In Server Properties
POWER SUPPLY
Objectives
Introduction
Power Supply Form Factors
Power Supply Connectors
Multivoltage Power Supplies
The Multimeter
Labs
Introduction
A power supply, or power supply unit (PSU), is the component that provides power for all
components on the motherboard and inside the PC case. Every PC has an easily
identifiable power supply, typically located at the back of the computer’s interior and
visible from the outside at the back of the PC.
The diagram below shows the interior of the PC with a power supply on the top left. Notice
the bundle of cables coming out of the power supply. They are used to supply power to the
motherboard and all other internal components.
Power supplies manufactured for sale throughout the world will have a switch on the
back of the power supply for selecting the correct input power setting.
The tiny red switch is the voltage selector, used to select between 115 VAC and
230 VAC input. (The connector that accepts the AC power cord is the IEC 320 connector.)
Power cord
Connector
On/Off Switch
Voltage selector
The power supply also shows (on a label) the amount of wattage it can supply. Below
is a sample of a PSU label.
If a device does not state the wattage required, simple math will give you the answer.
Multiply the volts and amps values and you will have the wattage of the device.
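The volts-times-amps rule above can be sketched in a few lines. The 19 V / 3.42 A figures below are assumed example values of the kind you might find on a laptop power adapter label, not data from this manual.

```python
# Watts = volts x amps. A small helper for device labels that list
# voltage and amperage but not wattage.
def wattage(volts, amps):
    return volts * amps

# Example: a label reading "19 V, 3.42 A" (assumed illustrative values)
# works out to roughly 65 W.
print(wattage(19, 3.42))
```

This is the same arithmetic you would do by hand when sizing a power supply against the devices it must feed.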
Task
Check Out the Wattage on PCs and Other Devices
i. Look at the back of a PC and find the label on the power supply, and record it.
ii. Do the same on other devices that are available to you, such as the displays,
printers, and scanners.
iii. Similarly, check out the wattage on non-computer devices in the classroom or at
home.
3. Another function of the power supply is to dissipate the heat it and other PC components
generate. Heat buildup can cause computer components (including the power supply) to
fail. Therefore, the power supply has a built-in fan that draws air in from outside the
computer case and cools the components inside.
4. In addition to supplying electrical power to run the system, the power supply ensures that
the system does not run unless the voltages supplied are sufficient to operate the system
properly. It prevents the computer from starting up or operating until all the power supply
voltages are within the proper ranges. The power supply completes internal checks and
tests before allowing the system to start. If the tests are successful, the power supply
sends a special signal to the motherboard called Power_Good. This signal must be
continuously present for the system to run. Therefore, when the AC voltage dips and the
power supply can’t maintain outputs within regulation tolerance, the Power_Good signal
is withdrawn (goes low) and forces the system to reset. The system does not restart until
the Power_Good signal returns.
Exam Tip: The power connector for the AC plug on the back of a power supply is called an
IEC 320 connector.
Form Factors
Just like motherboards and cases, computer power supplies come in a variety of sizes and
configurations. All the popular PC power supply form factors up through 1995 were based on
one of three IBM models, including the PC/XT, AT, and PS/2 Model 30. The interesting thing
is that these three power supply definitions all had the same motherboard connectors and
pinouts; where they differed was mainly in shape, maximum power output, and the number of peripheral power connectors provided.
Intel gave the power supply a new definition in 1995 with the introduction of the ATX form
factor. ATX became popular in 1996 and started a shift away from the previous IBM-based
standards. ATX and the related standards that followed have different connectors with
additional voltages and signals that allow systems with greater power consumption and
additional features that would otherwise not be possible with the AT-style supplies.
The ATX power supply is the most commonly used form factor, except in the smallest
systems, which use low-profile power supplies for low-profile cases, and the largest, which
use jumbo-sized power supplies for full-tower cases.
Note that although the names of the power supply form factors seem to be the same as
those of motherboard form factors, the power supply form factor is more related to the
system chassis (case) than the motherboard. That is because all the form factors use one of
only two main types of connector designs: either AT or ATX, with subtle variations on each.
So, although a particular power supply form factor might be typically associated with a
particular motherboard form factor, many other power supplies would plug in as well.
AT
The AT power supply preceded the ATX form factor. The AT differs from ATX by requiring
two connectors for the motherboard. These connectors, P8 and P9, provide only ±12V and
±5V power to the motherboard, unlike newer power supplies, which also have a +3.3V output.
The fans in AT power supplies blow exhausted air out of the case, drawing it in through
vents on the front of the case.
ATX
The ATX (Advanced Technology eXtended) standard, introduced by Intel in 1995, has
governed the way in which power supplies have evolved in recent years. Launched as an
improvement over the previous AT (Advanced Technology) standard, ATX requires the
power supply to produce three DC power outputs; +3.3V, +5V and +12V, and features two
major design alterations.
Unlike AT-based computers, where a chassis' power button was connected directly to the
PSU, ATX introduced a system where the chassis' power-switch is connected to the
motherboard via a wire typically labeled Power SW - allowing other hardware/software to
wake the machine. In addition to this change, the power supply's primary connection to the
motherboard was changed to a large, 20-pin, keyed connector to prevent any potentially-
hazardous mix-ups.
Power supply connectors are the cables and buses that transfer power to the various
components in the computer. PC Main, also called P1, is the power connector that attaches
to and powers the motherboard. A 12-volt-only connector (P10) supplies power to the power
supply unit's fan and supports system monitoring. The ATX12V 4-pin cable is the second
connector to attach to the motherboard. The 4-pin peripheral connector powers disk
drives. A 4-pin mini connector can supply auxiliary power to Accelerated Graphics Port (AGP)
video cards. Auxiliary cables provide additional power feeds. SATA power connectors supply
the hardware devices that plug into SATA power. The 6-pin connector powers PCI Express
video cards.
The original PCs used two cables to connect the PSU (power supply) to the motherboard.
The two cables plug side by side into the motherboard connectors. Sometimes they are
keyed so they only plug in one way and sometimes they aren't. Even if they're keyed you
can insert them the wrong way if you put a little effort into it. You always have to remember
to plug them in so the black wires are next to each other.
In old PCs, almost all of the chips ran directly off of the 5 volt rail. As a result the PSU
delivers most of its wattage at 5 volts. There are three or four lines dedicated to the 5 volt
rail. The other main rail is 12 volts. That was used primarily to run disk drives, motors, and
fans. The two negative rails are bias supplies which only have to provide small amounts of
current.
Some of the voltage lines on the connector may have smaller sense wires which allow the
power supply to sense what voltage is actually seen by the motherboard. These are pretty
common on the 3.3 volt line in pin 11 but are sometimes used for other voltages too. The -5
volt line on pin 18 was made optional in ATX12V 1.3 (introduced in 2003) because -5 had
been rarely used for years. Newer motherboards virtually never require -5 volts but many
older motherboards do. Most newer power supplies don't provide -5 volts in which case the
white wire is missing.
ATX P4
The ATX12V connector, commonly called P4, was introduced by Intel for the Pentium 4 (hence the name): it plugs
into the motherboard and exclusively powers the processor. Today, most motherboards
possess 4 to 8 pins dedicated to powering up the CPU. The latest standards for power
supply make use of an 8-pin connector (sometimes called EPS 12V), made up of 2 x 4-pin
blocks, again to ensure compatibility with old motherboards and the classic ATX P4.
Divided P4-ATX
The mini connector is used primarily on 3.5-inch floppy-disk drives. It has four pin-outs and,
usually, four wires. Most are fitted with keys that make it difficult, but not impossible, to install
upside down. The four pin floppy drive cable showed up when PCs started including 3.5 inch
floppy drives. This kind of cable is also sometimes used as an auxiliary power cable for AGP
video cards which use more power than can be drawn from the motherboard slot. The
connector is shaped so that it only fits in one way so you don't have to worry about inserting
it the wrong way.
Molex connector and mini connector.
In the SATA power connector, each wire is connected to three terminal pins, and the wire
numbering is not in sync with the terminal numbering, which can be confusing. If your power
supply does not feature SATA power connectors, you can use an adapter to convert a
standard peripheral power connector to a SATA power connector. However, such adapters
do not include the +3.3 V power. Fortunately, though, this is not a problem for most
applications because most drives do not require +3.3 V and use only +12 V and +5 V
instead.
IEC Connector
IEC connector usually refers to the power supply inlet which is commonly seen on desktop
PC power supplies. The connectors are IEC 60320-1 C13 (female) and C14 (male).
Task:
Under the guidance of the trainer, the students should discuss the causes and remedies of
power supply overheating.
Most power supplies are designed to handle two different voltage ranges:
110–120V/60Hz
220–240V/50Hz
A dual voltage device can accept both 110-120V and 220-240V. Dual voltage is a term used
to describe any type of electronic device that is manufactured to recognize and use both
American and European mains voltages without the need for an additional transformer. The
conversion is completed internally within the item, so consumers with dual-voltage devices
do not have to worry about where they will or will not work—however, a plug-in adaptor may
be required to use the item outside the country of origin and ensure proper safety. This type
of technology is especially popular in portable devices such as cellular phones, electric
razors, and music players since it allows their manufacturers to market their products to a
worldwide audience instead of just a specific region.
There are a number of ways to check if a certain item was constructed to meet dual-voltage
standards. There is normally a stamp on the plug itself or a laminated piece of paper
attached to the cord that will supply the accepted voltage rate. If the tag states only 110 or
240 volts, then the item is not dual voltage, but if it is specified as “110V-240V,” then it
should work with the various power sources worldwide. This information should also be
stamped somewhere on the electronic item itself, sometimes underneath the battery cover or
alongside the bottom of the device.
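The label check described above can be sketched as a small helper: a tag listing a range like "110V-240V" indicates a dual-voltage device, while a single voltage does not. The parsing below is a simplified assumption; real labels vary considerably in format.

```python
# Simplified sketch of the dual-voltage label check. Real device labels
# vary in format, so treat this parsing as illustrative only.
import re

def is_dual_voltage(label):
    """Return True if the label lists a voltage range like '110V-240V'."""
    return re.search(r"\d+\s*V?\s*-\s*\d+\s*V", label, re.IGNORECASE) is not None

print(is_dual_voltage("110V-240V"))  # True
print(is_dual_voltage("240V"))       # False
```

Even with a dual-voltage device, remember that a plug-shape adapter may still be needed outside the country of origin.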
The Multimeter
A multimeter is one of the most flexible diagnostic tools for testing a defective power supply.
Multimeters are designed to perform many different types of electrical tests, including the
following:
• DC voltage and polarity
• AC voltage
• Resistance (Ohms)
• Diodes
• Continuity
• Amperage
All multimeters are equipped with red and black test leads. When used for voltage tests, the
red is attached to the power source to be measured, and the black is attached to ground.
Multimeters use two different readout styles: digital and analog. Digital multimeters are
usually autoranging, which means they automatically adjust to the correct range for the test
selected and the voltage present. Analog multimeters, or non–autoranging digital meters,
must be set manually to the correct range and can be damaged more easily by overvoltage.
Multimeters are designed to perform tests in two ways: in series and in parallel. Most tests
are performed in parallel mode, in which the multimeter is not part of the circuit but runs
parallel to it. On the other hand, amperage tests require that the multimeter be part of the
circuit, so these tests are performed in series mode. Many low-cost multimeters do not
include the ammeter feature for testing amperage (current), but you might be able to add it
as an option.
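When you test power-supply rails with the multimeter in parallel mode, readings are commonly judged against a tolerance of roughly ±5% of the nominal rail voltage. That figure is an assumption for illustration; the ATX specification and your supply's documentation are authoritative.

```python
# Hedged sketch: judging multimeter readings on power-supply rails.
# The +/-5% tolerance is a common rule of thumb, used here as an
# assumption; consult the PSU's specification for exact limits.
def rail_ok(nominal, measured, tolerance=0.05):
    """Return True if the measured voltage is within tolerance of nominal."""
    return abs(measured - nominal) <= abs(nominal) * tolerance

print(rail_ok(12.0, 11.8))  # True  (within 5% of 12 V)
print(rail_ok(5.0, 4.6))    # False (more than 5% low)
```

A rail that drifts outside tolerance under load is a classic sign of a failing supply, and is also the condition under which the Power_Good signal is withdrawn.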
8. Remove any screws holding the power supply in place inside the case. (Your PC might
not use these additional screws.)
9. Disconnect the power supply switch from the case front (if present).
10. Lift or slide the power supply from the case.
Given the following symptoms of power supply, what could be the source of the
problem?
1. The power light is off and/or the device won’t turn on.
2. The power supply fan does not turn when the computer is powered on.
3. The computer sounds a continuous beep.
4. When the computer powers on, it does not beep at all.
5. When the computer powers on, it sounds repeating short beeps.
6. The computer reboots or powers down without warning.
7. The power supply fan is noisy.
8. The power supply is too hot to touch.
9. The computer emits a burning smell.
10. The power supply fan spins, but there is no power to other devices.
11. The monitor has power light, but nothing appears on the monitor, and no PC power
light illuminates.
1. There are two adverse AC power conditions that can damage or adversely affect a
computer: overvoltage and undervoltage.
a) Describe the two power conditions.
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
___________________________________________________________________
2. The above diagram shows an antistatic wrist strap in use. What is the gadget used
for? What other gadgets can be used in its place?
______________________________________________________________________
______________________________________________________________________
8. What is the maximum rated output power of the power supply in watts?
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________
9. What types of connectors attach the ATX power supply to a modern ATX
motherboard? (Select two.)
A. P1
B. PDC
C. P4
D. P9
10. Molex connectors supply electricity to which devices? (Select two.)
A. Floppy drives
B. Hard drives
C. Optical drives
D. Thumb drives
11. A client notices the 115/230 switch on the back of a power supply you’re installing.
What’s the switch used for?
PORTABLE COMPUTERS
Objectives
Introduction
Laptops and Notebooks
Handheld Computers
Laptop Hardware
Laptop Features
Expansion Slots
Laptop Displays
Power and Electrical Input Devices
Labs
Introduction
A portable computer is any type of computer that you can easily transport and that contains
an all-in-one component layout. In addition to the size difference, portable computers differ
from desktop computers in their physical layout. Although portable computers are
functionally similar to desktop computers, they are primarily proprietary and tend to use
specialized components that are generally miniaturized versions of those described in earlier
chapters. Repairing portable systems usually requires a skill set beyond what is necessary
to service desktop PCs. Portable computers fall into two broad categories: laptops and
handhelds.
A laptop, like other portable computers, is a small, easily transported computer. Laptops
generally weigh less than 7 pounds, can fit easily in a large bag or briefcase, and have
roughly the same dimensions as a 1- to 2-inch thick stack of magazines.
Laptop computers have an all-in-one layout in which the keyboard, and often the pointing
device, is integrated into the computer chassis and an LCD display is in a hinged lid. The top
contains the display, and the bottom contains the keyboard and the rest of the computer’s
internal components.
Laptops use liquid crystal displays (LCDs), and they use small credit-card-sized
specialized integrated circuit cards, called PC Cards, rather than the expansion cards used
in PCs. Additionally, a laptop has a built-in battery allowing it to be used for short periods
without an outside power source. The battery life of a laptop depends on the battery and how
the laptop is used, but under normal use, it will last from two to six hours. Truly designed for
the mobile lifestyle of many users today, some laptops, called ultra-portables, weigh less
than three pounds, and they may even give up features to keep the weight down and
maintain the highest battery life. The typical laptop user also has a desktop PC at the office.
Often “laptop” is used interchangeably with “notebook.” Initially, people called most portable
computers laptops because they could fit comfortably on the user’s lap, although early
laptops were a little heavy to do this comfortably. As technology improved, laptops became
smaller and smaller, and the term notebook came into use to reflect this smaller size.
The main components of the laptop include the liquid crystal display (LCD), keyboard (with
special Fn key), touch pad, the power button, and extra buttons. The extra buttons offer
additional functionality, for example; enabling and disabling wireless, turning the sound on or
off, and opening applications such as Internet Explorer and Outlook.
A laptop manufacturer needs to squeeze in ports wherever they can find space, so quite
often you find ports on three of the sides of the laptop as shown below.
Handheld computers come in a variety of types. The most common are personal digital
assistants (PDAs), but many cell phones now have PDA functions built in and people call the
fanciest of these smart phones. There are also specialized handheld computers developed
for specific job functions.
The two most common operating systems used in PDAs today are the Microsoft Pocket PC
operating system and the Palm operating system. In general, these two operating systems
do not support interchangeable applications, though most PDA software manufacturers now
support both platforms.
One of the most common uses for a PDA is as an organizer used to keep track of
appointments and record addresses, phone numbers, and notes. Many PDAs also allow you
to plug into a fax machine to send faxes, or to connect wirelessly to a network to send e-mail
or access the Internet.
PDAs are too small to include a regular keyboard layout, so the primary input device is a
small stylus, shaped like a pen, which you use to press small keys on a virtual keypad, tap
the screen to select items, or write data on the screen. PDAs are entirely proprietary, non-
serviceable by regular technicians, and they differ from one another in their technology and
functions.
Laptop Hardware
Because laptops are largely proprietary, motherboards and other components are not
interchangeable between brands. Not only are they proprietary by manufacturer; they often differ from model to model within the same brand as well.
Motherboard
While laptops run standard PC operating systems and applications, the motherboards do not
comply with the form factor standards found in PCs. The miniaturization required for laptops
has inspired manufacturers to come up with largely proprietary motherboard form factors.
Motherboards (also called processor boards) contain specialized versions of the
components you would expect to find in a desktop PC, such as the CPU, chipset, RAM,
video adapter (built-in), and expansion bus ports.
To save space, components of the video circuitry (and possibly other circuits as well) are
placed on a thin circuit board that connects directly to the motherboard. This circuit board is
often known as a riser card or a daughterboard.
Having components performing different functions (such as video, audio, and networking)
integrated on the same board is a mixed bag. On one hand, it saves a lot of space. On the
other hand, if one part goes bad, you have to replace the entire board, which is more
expensive than just replacing one expansion card.
CPU
Both Intel and AMD have a number of CPUs designed especially for laptops. Intel includes
the word “mobile” in some model names, and simply uses an “M” in other model names.
AMD currently uses an “M” in the model name. These chips include the Intel Pentium 4-M
and the AMD Athlon XP-M.
Both manufacturers include special mobile computer technologies in their mobile CPUs,
such as power saving and heat-reducing features, and throttling that lowers the clock speed
and supply voltage when the CPU is idle.
Just as there are different sizes of laptops, there are different models of mobile processors.
These include high-performance models requiring more power that are appropriate for
desktop replacement, and CPUs that run at lower voltage and reduced clock speeds to give
the best battery life. The latter CPUs may have “Low Voltage” or “LV” in the model name.
Many mobile CPUs support Wi-Fi wireless networking.
Video Adapter
As in a desktop PC, a video adapter controls a laptop’s video output. Typically, the video
adapter for a laptop is integrated into the motherboard and is not upgradeable. In most
cases, a faulty video adapter requires manufacturer replacement of the motherboard.
Memory
Laptop memory modules come in small form factors; the most common is the Small
Outline DIMM (SODIMM), which is about half the size of a standard DIMM module. SODIMM
modules initially had 30 pins, and the next generation had 72 pins. These had a data bus
width of 8 bits and 32 bits, respectively, per module.
A 200-pin SODIMM module
Over the years, SODIMM modules have improved their data width progressing through form
factors with 100, 144, and 200 pins. Notches prevent you from installing a module in the
wrong orientation to the SODIMM memory slot. The 200-pin SODIMM in Figure 5-1
measures 2 5/8" wide.
Another type of laptop memory module is Small Outline RIMM (SORIMM), which comes in
120-pin and 160-pin forms. MicroDIMM, a RAM module designed for subcompact and laptop
computers, is half the size of a SODIMM module and allows for higher density of storage.
Lab
Installing SODIMM Memory
For this exercise, you will need a module of SODIMM memory appropriate for your laptop,
the user’s manual, the new module in an antistatic bag, and a small nonmagnetic
screwdriver for opening the case. If you do not have a new module, simply remove a module
that is already installed and reinstall it; you will still need an antistatic bag.
Hard Drives
Laptop hard drives are typically 2.5" wide, versus the 3.5" hard drives currently widely used in
desktop PCs. The interface is typically either PATA or SATA. There are also hard drives that
you can install into the PC Card expansion bays.
Many solid state drives (SSDs) come in 2.5-inch versions and can be used in place of the
original drive or as an additional drive. These use the standard Serial ATA (SATA) interface
and use the same type of flash memory used with Universal Serial Bus (USB) flash drives. A
great benefit of SSD drives is that they are less susceptible to damage if the drive is
dropped.
Laptop Features
Laptops can use just about any external peripheral that a PC can use, unless it relies on
inserting a full-sized expansion card. You can attach a full-sized external display to a laptop
and any printer or scanner that can use one of your laptop’s interfaces, such as USB,
FireWire, or one of the PCMCIA interfaces.
We discussed common PC peripherals in earlier chapters. Therefore, we’ll now discuss
those peripherals that are specifically designed for laptops: port replicators, docking stations,
and those that fit into specialized media/accessory bays.
To accommodate extended functionality, you need a docking station. Docking stations are
used to extend the functionality of lightweight portable laptops so that they serve more as
desktop replacements, for use at home. Docking stations provide additional ports as well as
bays for inserting modules such as a hard disk or a CD-ROM drive. A docking station typically
supports devices such as hard drive bays, optical drive bays, keyboard/mouse connectors
(PS/2 ports), additional USB ports, PC Card slots, external display connectors, and others.
The primary difference between a docking station and a port replicator is that the former will
have both additional ports as well as bays to insert additional (spare) components whereas
the latter will have only additional ports. Given below is a docking station from IBM.
Docking stations usually provide power to the laptop. If the laptop reports a low battery while it
sits in the docking station, it is probably not seated or connected properly.
Note that most of the docking stations are product specific, meaning that they support only a
given set of laptop makes. There is no vendor neutrality. Therefore, when you buy a docking
station, ensure that it is compatible with your laptop computer.
Usually, a notebook computer will have limited I/O ports due to compact design. A port
replicator provides additional I/O ports for adding additional devices such as a printer or a
scanner. You can attach a laptop to a port replicator to provide much needed additional
ports. A typical port replicator diagram is shown below:
Cable Locks
Portability defines what makes laptops truly useful. They’re not as fast or as strong as their
desktop cousins, but the fact that we can easily haul them around wherever we go gives
them a special status within our hearts. It also presents less-scrupulous types with ample
opportunity to quickly make away with our prized possessions and personal data. Laptop
theft is a major concern for companies and individual owners alike.
One way you can help physically secure your laptop is through the use of a cable lock.
Essentially, a cable lock anchors your device to a physical structure, making it nearly
impossible for someone to walk off with it. The figure below shows a cable lock with a
number combination lock. With others, small keys are used to unlock the lock. If you grew up
using a bicycle lock, these will look really familiar.
If someone wants your laptop badly enough, they can break the case and dislodge your lock.
Having the lock in place will still deter most people looking to make off with it, though.
Media/Accessory Bay
Some portable computers have interchangeable drives built into a special media or
accessory bay. These drive bays are often able to accommodate the user’s choice of
Hard disk drive
Optical drives, such as rewritable DVD or combo DVD-ROM/CD-RW
Extra batteries
Some older systems might also have support for removable-media drives such as floppy,
Zip, or LS-120/240 SuperDisk drives, all of which are now obsolete.
Digitizing Tablet
A digitizing tablet or digitizer is another input device that uses a stylus. Available as an
external device, a digitizer uses touch screen technology and is usually at least the size of a
sheet of paper. The primary use of a digitizer is for creating computerized drawings; it is, for
this reason, also called a graphics tablet. A tablet PC is a laptop in which the display is an
integrated digitizer that swivels around so that it sits on top of the keyboard, concealing the
keyboard for drawing.
While a stylus would seem like the primary input device for this type of computer, expect an
integrated keyboard, if not an integrated pointing device, for users who find it handy to use a
mouse. These choices allow the user to use the mode that fits the current task, such as
writing a memo using the keyboard, surfing the Internet using a mouse, and creating
drawings using a stylus on the tablet.
Users who don’t understand the keys might accidentally press them and then think that their
laptop is malfunctioning. With this in mind, it’s important for users to understand what keys
are commonly available and how they work.
a) Volume setting
On the top row where the keys labeled F1-F12 are located, there are usually a couple of
keys (usually F8 and F9) that have icons on them that look like speakers. These keys can be
used to raise and lower the volume of the sound. If the icon is blue, you have to hold down
the Fn key. Otherwise, you do not need to use the Fn key to activate them. (As a matter of
fact, if you hold down the Fn key and use the F8 key you may be changing the location of
the display output as described in the dual display section.) Otherwise, consult the
documentation for the key to use in conjunction with the Fn key to adjust the volume. Most
laptops also include a mute button marked as such.
b) Screen brightness
Screen brightness can be controlled by using a couple of special function keys (usually F4
and F5) that have icons on them that look like suns with arrows pointing up and down,
respectively. They could also be located on the lower right of the keyboard.
c) Dual displays
When additional displays are connected to the laptop, (for example, a projector, or a second
monitor) holding down the Fn key and pressing the appropriate function key on the keyboard
will move the active screen from display to display and then to a setting where all monitors
have the same output. Normally, the F key that you press has an image of the monitor on it.
This is valuable when making a presentation or when you would like to direct the image to
the projector and/or the laptop screen. Some laptop keyboards have the F1 key on the top
line marked with the icon of a laptop display and another screen. In this case, that key is
used to toggle between dual-display settings rather than the Fn and F8 combination.
Expansion Slots
Laptop computers can be upgraded by adding additional hardware. Some hardware can be
added internally to the computer and almost all laptop computers also have external
expansion slots that you can use to add additional components.
Laptops come with specialized, small form factor, expansion slots. The most common
expansion slots are those based on standards developed by the Personal Computer
Memory Card International Association (PCMCIA), an organization that creates standards
for laptop computer peripheral devices. All the standards described here support hot
swapping, meaning devices can be plugged in and removed while the computer is running.
A service called socket services running in the operating system on the laptop detects when
a card has been inserted. After socket services detect a card, another service, called card
services, assigns the proper resources to the device.
PC Card
The early standard developed by PCMCIA was referred to at first simply as the “PCMCIA”
interface. The credit-card sized cards that fit into this interface, sliding in from slots on the
side of a laptop, were called PCMCIA cards, but eventually the name changed to PC Card.
The earliest interface and cards used a 16-bit interface.
All PC Cards use the same connecting interface, with 68 pins. All are 85.6 mm long and 54.0 mm
wide. The original standard was defined for both 5-volt and 3.3-volt cards. The 3.3 V cards
have a key on the side to protect them from being damaged by being put into a 5 V-only slot.
Some cards and some slots operate at both voltages as needed.
PC Cards come in three types, namely:
1. Type I: These cards are 3.3 mm thick. They are primarily used for adding RAM or ROM
to a notebook PC. They use a 16-bit interface. A Type I slot can hold one
Type I card.
2. Type II: These cards are 5.0 mm thick. These cards are often used for modem, fax,
SCSI, and LAN cards. A type II slot can hold one type II card, or two type I cards.
3. Type III: Type III cards are 10.5mm thick, sufficiently large for portable disk drives.
They can also accommodate interface cards with full-size connectors that do not
require dongles (as is commonly required with type II interface cards). A type III slot
can hold one type III card, or a type I and type II card.
ExpressCard:
Laptops are currently moving to the most recent PCMCIA standard—ExpressCard, which
comes in two interfaces: the PCIe (PCI Express) interface, at 2.5 Gigabits per second, and
the USB 2.0 interface, at 480 Megabits per second. ExpressCard is incompatible with either
PC Card standard.
To begin with, the ExpressCard interface does not have actual pins, but instead has 26
contacts in a form referred to as a beam-on-blade connector. Although all ExpressCard
modules have the same number of contacts (also called pins), there are currently two sizes
of modules: Both are 75 mm long and 5 mm high, but they vary in width. The form factor
known as ExpressCard/34 is 34 mm wide, while ExpressCard/54 is 54 mm wide.
ExpressCard/34 modules will fit into ExpressCard/54 slots.
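The two interface speeds quoted above are easier to compare when converted to the same units. The snippet below is an illustrative sketch only; the 8b/10b line-encoding overhead applied to first-generation PCIe is an added detail not stated in the text.

```python
# Raw signaling rates of the two ExpressCard host interfaces,
# as given in the text.
PCIE_GBITS_PER_SEC = 2.5   # PCI Express (x1, first generation)
USB2_MBITS_PER_SEC = 480   # USB 2.0 (Hi-Speed)

# First-generation PCIe uses 8b/10b line encoding, so only 8 of
# every 10 bits on the wire carry data; USB 2.0's 480 Mbit/s is
# already its raw bus speed.
pcie_mbytes = PCIE_GBITS_PER_SEC * 1000 * (8 / 10) / 8
usb2_mbytes = USB2_MBITS_PER_SEC / 8

print(f"PCIe interface:    {pcie_mbytes:.0f} MB/s")   # 250 MB/s
print(f"USB 2.0 interface: {usb2_mbytes:.0f} MB/s")   # 60 MB/s
```

Even after encoding overhead, the PCIe interface moves roughly four times the data of the USB 2.0 interface, which is why bandwidth-hungry ExpressCard modules use the PCIe side.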
Laptop Displays
The display system is the primary component in the top half of the clamshell case. (The
wireless antenna often resides here too, and we’ll get to that in just a bit.) Much like all other
laptop components, the display is more or less a smaller version of its desktop counterpart.
What is unique to laptop displays, though, is that for some time, the technology used in them
was actually more advanced than what was commonly used in desktops. This is due to liquid
crystal display (LCD) technology.
Video Card
The video card in a laptop or desktop with an LCD monitor does the same thing a video card
supporting a CRT monitor would do. It’s responsible for generating and managing the image
sent to the screen. The big difference is that most LCD monitors are digital, meaning you
need a video card that puts out a digital image. Laptop manufacturers put video cards that
are compatible with the display in laptops, but with desktops it can get a bit confusing. The
figure below shows an ABIT video card, with a digital video interface (DVI) port on the right
and an analog (VGA) port on the left. The port in the middle is an S-video/composite video
port.
Backlight
LCD and LED displays do not produce light on their own, so to generate brightness, these displays have a
backlight. A backlight is a small fluorescent lamp placed behind, above, or to the side of an
LCD display. The light from the lamp is diffused across the screen, producing brightness.
The typical laptop display uses a cold cathode fluorescent lamp (CCFL) as its backlight.
They’re generally about 8 inches long and slimmer than a pencil. Best of all, they generate
little heat, which is always welcome in a laptop, where excess heat must be avoided. A CCFL
requires alternating current (AC) voltage.
Exam Tip: LCD panels require a backlight because the liquid crystals do not emit light on
their own. A recent change in technology is that more laptop displays use LED backlights
instead of CCFL backlights. LED backlights tend to fail less often than CCFLs.
Additionally, LEDs do not require inverters.
Inverter
The only problem with fluorescent lighting, and LCD backlights in particular, is that they
require fairly high-voltage, high-frequency energy. Another component is needed to provide
the right kind of energy, and that’s the inverter.
The inverter is a small circuit board installed behind the LCD panel that takes DC current
and inverts it to AC for the backlight. If you are having problems with flickering screens or
dimness, it’s more likely that the inverter is the problem and not the backlight itself.
There are two things to keep in mind if you are going to replace an inverter. First, they store
and convert energy, so they have the potential to discharge that energy. To an
inexperienced technician, they can be dangerous. Second, make sure the replacement
inverter matches the specifications of the one it replaces.
Screen
The screen on a laptop does what you might expect—it produces the image that you see.
The overall quality of the picture depends a lot on the quality of the screen and the
technology your laptop uses. Current options include LCD, LED, OLED, and plasma.
Exam Tip: What type of laptop display requires an inverter and what is the purpose of an
inverter?
LCD
A liquid crystal display (LCD) is a thin display. An LCD is a special flat panel that can block
light, or allow it to pass. The panel is made up of segments, with each block filled with liquid
crystals. The colour and transparency of these blocks can be changed by increasing or
reducing the electrical current. LCD crystals do not produce their own light, so an external
light source like a fluorescent bulb is needed to create an image.
LED
LED displays use the same technology as an LCD screen, but instead of being illuminated by a
fluorescent bulb from behind, they are lit by an array of LEDs (light-emitting diodes). These
are far more efficient and smaller in size, meaning the screen can be narrower.
LED displays can be broken up into two further major categories, Direct (Back-lit) LED and
Edge-lit LED:
a. Direct LED
These displays are backlit by an array of LEDs directly behind the screen. This enables
focused lighting areas – meaning specific cells of brightness and darkness can be displayed
more effectively.
b. Edge-lit LED
These displays are lit by LEDs arranged around the edges of the screen rather than directly
behind it. This allows the panel to be thinner and lighter, although the lighting is less
precisely controlled than with direct LED.
OLED
OLED is a massive leap forward in screen technology. Despite what its name suggests, OLED is
nothing like LED. OLED stands for ‘Organic Light Emitting Diode’ and uses ‘organic’
materials like carbon to create light when supplied directly with an electric current. Unlike
LED/LCD screens, an OLED TV doesn’t require a backlight to illuminate the set area.
Without this restriction of an external light source, OLED screens can be super thin and,
crucially, flexible.
As the individual areas can be lit up directly and not via an external backlight, the colours
and contrasts are much better on OLED TVs. On the whole, OLED is thinner, more flexible,
faster at processing images, and produces deeper colours and crisper contrast. It is,
however, still very expensive and will not be seen on consumer TVs at an ‘affordable price’
for at least another year.
WiFi Antenna
The vast majority of laptops produced today include built-in WiFi capabilities. Considering
how popular wireless networking is today, it only makes sense to include 802.11 functionality
without needing to use an expansion card. With laptops that include built-in WiFi, the
wireless antenna is generally run up through the upper half of the clamshell case. This is to
get the antenna higher up and improve signal reception. The wiring will run down the side of
the display, through the hinge of the laptop case, and plug in somewhere on the
motherboard.
The WiFi antenna won’t affect what you see on the screen, but if you start digging around in
the display, know that you’ll likely be messing with your wireless capabilities as well.
Power and Electrical Input Devices
Laptops come with two sources of electrical input. For portability, a laptop comes with a
battery that will support it when not plugged into a power outlet. These rechargeable
batteries have a battery life without recharging in the range of 5 to 8 hours at best and tend
to last only a few years. They are also quite expensive.
You must recharge laptop system batteries when they run down, and this is the job of the AC
adapter, which plugs into a regular electrical outlet. The adapter recharges the battery while
the portable is operating. However, if you are not near a wall outlet when the battery’s power
fades, you will not be able to work until you replace the battery with a fully charged one or
until AC power is available again.
DC Controller
Most portable computers include a DC controller, which monitors and regulates power
usage. The features of DC controllers vary by manufacturer, but typically, they provide short-
circuit protection, give “low battery” warnings, and can be configured to automatically shut
down the computer when the power is low.
AC Adapter
The AC adapter (the “brick” or “wall wart”) is your laptop’s power supply that is plugged into
an AC power source. Like the power supply in a desktop PC, it converts AC power to DC
power. Figure 5-7 shows an AC adapter for a laptop. The AC adapter is also responsible for
recharging the battery. If you must replace an external adapter, simply unplug it and attach a
new one from the laptop manufacturer that matches the specifications of the one it is
replacing. Furthermore, since AC adapters have varying output voltages, never use an AC
adapter with any laptop other than the one it was made for. Previously, laptop power
supplies were set to accept only one input power voltage. Now, many laptop AC adapters
act as auto-switching power supplies, detecting the incoming voltage, and switching to
accept either 120 or 240 VAC.
A laptop AC adapter
Exam Tip: The primary tools you need when working with laptop computers are
screwdrivers, a plastic wedge, and ESD protection equipment such as an ESD wrist strap.
The list of components that may be replaceable could include input devices such as the
keyboard and Touchpad; storage devices, including hard drives and optical drives; core
components such as memory, the processor, and the motherboard; expansion options,
including wireless cards and mini-PCIe cards; and integrated components such as the
screen, plastics, speakers, battery, and DC jack. Again, depending on the make and model
of the laptop you’re working on, the list of replaceable components might be longer or
shorter.
Before starting any of the exercises, be sure to check your system’s documentation. It is
advisable that you first know the model you are working with.
Step by Step
1. Turn off the computer.
2. Disconnect the computer and any peripherals from their power sources, and remove any
installed batteries.
3. Locate the hard drive door and remove the screw holding it in place.
4. Lift the hard drive door until it clicks.
5. Slide the hard drive out to remove it.
6. Remove the two screws holding the hard drive to the hard drive door.
Step by Step
1. Turn off the computer.
2. Disconnect the computer and any peripherals from their power sources, and remove any
installed batteries.
3. Remove the screws holding the memory door in place.
7. To release the keyboard, use a small flat-edged screwdriver to pry up on its right edge,
near the blank key.
8. Lift the keyboard up about an inch and rotate it forward so the keys are face-down on the
palm rest. Don’t pull the keyboard too far or you might damage the connector cable.
9. Pull up on the keyboard connector to disconnect it from the keyboard interface connector
on the motherboard.
Exam Tip: Describe the purpose of special purpose keys; Differentiate docking stations and
port replicators; Describe approaches to laptop physical security.
1) What are the two broad categories of portable computers? Select the two correct
answers.
A. LCDs
B. Handhelds
C. Laptops
D. PC Cards
2) A laptop typically opens like what common business accessory?
A. Pager
B. PDA
C. Briefcase
D. Headphone
3) What built-in component allows laptop use for short periods without an outside
power source?
A. Keyboard
B. Touchpad
C. Pointing stick
D. Battery
4) What is the common name for a handheld computer used to keep track of
appointments and to record addresses, phone numbers, and notes?
A. Notebook
B. Laptop
C. PDA
D. Pocket PC
5) A laptop component used to convert the DC power from the power adapter or
battery to the AC power required by the LCD display.
A. Inverter
B. Converter
C. Power switch
D. Generator
6) The most common laptop memory module, which is a scaled-down version of a
type of full-sized PC memory module.
A. SORIMM
B. MicroDIMM
C. SODIMM
D. DIMM
7) Which is a common width of a laptop hard drive?
A. 2.5"
B. 5"
C. 3.5"
D. 1"
8) Which of the following allows you to attach nearly any type of desktop component
to a portable computer?
A. Port replicator
B. Enhanced port replicator
C. Extended port replicator
D. Docking station
9) A compartment in some laptops, which can hold one media device (secondary
hard drive, optical drive, floppy drive, etc.) that you can swap with another device.
Select the two names for this type of compartment from the following list.
A. Slot
B. Media bay
C. USB port
OPERATIONAL PROCEDURES
Objectives
Introduction
Creating a Safe Workplace
Understanding Environmental Controls
Managing the Physical Environment
Handling and Disposing of Computer Equipment
Demonstrating Professionalism
Handling Specific Situations
Introduction
Years ago, only the most expert users dared crack the case on a computer. Oftentimes
repairing the system meant using a soldering iron. Today, thanks to the cheap cost of parts,
repair is not quite as involved. Regardless of your skill or intent, if you’re going to be inside a
computer, you always need to be aware of safety issues. There’s no sense in getting
yourself hurt or killed—literally.
The most important aspect of computers that you should be aware of is that they not only
use electricity, they also store electrical charge after they’re turned off. This makes the
power supply and the monitor pretty much off-limits to anyone but a repair person trained
specifically for those devices. In addition, the computer’s processor and various parts of the
printer run at extremely high temperatures, and you can get burned if you try to handle them
immediately after they’ve been in operation.
Those are just two general safety issues that should concern you. There are plenty more.
When discussing safety issues with regard to PCs, let’s break them down into four general areas:
Computer components
Electrostatic discharge
Electromagnetic interference
Natural elements
Computer Components
As mentioned earlier, computers use electricity. And as you’re probably aware, electricity
can hurt or kill you. The first rule when working inside a computer is to always make sure it’s
powered off. So if you have to open the computer to inspect or replace parts (as you will with
most repairs), be sure to turn off the machine before you begin. Leaving it plugged in is
usually fine, and many times actually preferred because it grounds the equipment and can
help prevent electrostatic discharge.
There’s one exception to the power-off rule: you don’t have to power off the computer when
working with hot-swappable parts, which are designed to be unplugged and plugged back in
when the computer is on. Most of these components have an externally accessible interface,
so you do not need to open the case to work with them.
The Power Supply
The two biggest dangers with power supplies are burning yourself and electrocuting yourself.
These risks usually go hand in hand. If you touch a bare wire that is carrying current, you
could get electrocuted. A large-enough current passing through you can cause severe burns.
It can also cause your heart to stop, your muscles to seize, and your brain to stop
functioning. In short, it can kill you. Electricity always finds the best path to ground. And
because the human body is basically a bag of saltwater (an excellent conductor of
electricity), electricity will use us as a conductor if we are grounded.
Although it is possible to open a power supply to work on it, doing so is not recommended.
Power supplies contain several capacitors that can hold lethal charges long after they have
been unplugged! It is extremely dangerous to open the case of a power supply. Besides,
power supplies are pretty cheap. It would probably cost less to replace one than to try to fix
it, and this approach would be much safer.
If you ever have to work on a power supply, for safety’s sake you should discharge all
capacitors within it. To do this, connect a resistor across the leads of the capacitor with a
rating of 3 watts or more and a resistance of 100 ohms (Ω) per volt. For example, to
discharge a 225-volt capacitor, you would use a 22.5 kΩ resistor (225 volts times 100 Ω =
22,500 Ω, or 22.5 kΩ).
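The 100-ohms-per-volt rule above is simple enough to capture in a couple of lines. The helper below is an illustrative sketch only (the function name is ours, not standard terminology); note that the minimum 3-watt power rating applies regardless of the computed resistance.

```python
# Illustrative helper for the "100 ohms per volt" bleed-resistor rule of thumb.
# The function name is ours, not standard terminology.

def discharge_resistor_ohms(capacitor_volts, ohms_per_volt=100):
    """Recommended bleed-resistor value (in ohms) for a capacitor rated at
    capacitor_volts; pair it with a resistor power rating of 3 W or more."""
    return capacitor_volts * ohms_per_volt

# A 225 V capacitor calls for 225 * 100 = 22,500 ohms (22.5 kOhm).
print(discharge_resistor_ohms(225))  # 22500
```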
The Monitor
Other than the power supply, the most dangerous component to try to repair is a cathode-ray
tube (CRT) monitor. In fact, we recommend that you do not try to repair monitors of any kind.
To avoid the extremely hazardous environment contained inside the monitor—it can retain a
high-voltage charge for hours after it’s been turned off—take it to a certified monitor
technician or television repair shop. The repair shop or certified technician will know the
proper procedures for discharging the monitor, which involve attaching a resistor to the
flyback transformer’s charging capacitor to release the high-voltage electrical charge that
builds up during use.
They will also be able to determine whether the monitor can be repaired or needs to be
replaced. Remember, the monitor works in its own extremely protected environment (the
monitor case) and may not respond well to your desire to try to open it.
Even though we recommend not repairing monitors, the A+ exam tests your knowledge of
the safety practices to use when you need to do so. If you have to open a monitor, you must
first discharge the high-voltage charge on it by using a high-voltage probe. This probe has a
very large needle, a gauge that indicates volts, and a wire with an alligator clip. Attach the
alligator clip to a ground (usually the round pin on the power cord). Slip the probe needle
underneath the high-voltage cup on the monitor. You will see the gauge spike to around
15,000 volts and slowly reduce to 0 (zero). When it reaches zero, you may remove the high-
voltage probe and service the high-voltage components of the monitor.
NB: Do not use an ESD strap when discharging the monitor; doing so can lead to a fatal
electric shock.
The Case
One component people frequently overlook is the case. Cases are generally made of metal,
and some computer cases have very sharp edges inside, so be careful when handling them.
You can, for example, cut yourself by jamming your fingers between the case and the frame
when you try to force the case back on. Also of particular interest are drive bays. Countless
technicians have scraped or cut their hands on drive bays when trying in vain to plug a drive
cable into the motherboard. Particularly sharp edges can be covered with duct tape—just
make sure you’re covering only metal and nothing with electrical components on it.
The Printer
Should you ever attempt to repair a printer, here are some things to watch out for:
When handling a toner cartridge from a laser printer or page printer, do not turn it
upside down. You will find yourself spending more time cleaning the printer and the
surrounding area than fixing the printer.
Do not put any objects into the feeding system (in an attempt to clear the path) when
the printer is running.
Laser printers generate a laser that is hazardous to your eyes. Do not look directly
into the source of the laser.
If it’s an inkjet printer, do not try to blow in the ink cartridge to clear a clogged
opening—that is, unless you like the taste of ink.
Some parts of a laser printer (such as the EP cartridge) will be damaged if you touch
them. Your skin produces oils and has a small surface layer of dead skin cells. These
substances can collect on the delicate surface of the EP cartridge and cause
malfunctions. Bottom line: Keep your fingers out of places they don’t belong!
Laser printers can get extremely hot. Don’t burn yourself on internal components.
Electrostatic Discharge
So far we’ve talked about how electricity can hurt people, but it can also pose safety issues
for computer components. One of the biggest concerns for components is electrostatic
discharge (ESD). For the most part, ESD won’t do serious damage to a person other than
provide a little shock. But even small amounts of ESD can cause serious damage to computer
components, and that damage can manifest itself by causing computers to hang or reboot or
fail to boot at all. ESD happens when two objects of dissimilar charge come in contact with
one another. The two objects exchange electrons in order to standardize the electrostatic
charge between them. This charge can, and often does, damage electronic components.
Note: CPU chips and memory chips are particularly sensitive to ESD. Be extremely cautious
when handling them.
There are measures you can implement to help contain the effects of ESD. The first and
easiest item to implement is the antistatic wrist strap, also referred to as an ESD strap. We
will look at the antistatic wrist strap, as well as other ESD prevention tools, in the following
sections.
Never wear an ESD strap if you’re working inside a monitor or inside a power supply. If you
wear one while working on the inside of these components, you increase the chance of
getting a lethal shock.
Unlike antistatic mats, antistatic bags do not “drain” the charges away, and they should
never be used in place of an antistatic mat.
Self-grounding is not as effective as using proper anti-ESD gear, but it makes up for that with
its simplicity. To self-ground, make sure the computer is turned off but plugged in. Then,
touch an exposed (but not hot or sharp!) metal part of the case. That will drain electrical
charge from you. Better yet is if you can maintain constant contact with that metal part. That
should keep you at the same electrical potential as the case. Yes, it can be rather challenging to work
inside a computer one-handed, but it can be done.
Additional Methods
Another preventive measure you can take is to maintain the relative humidity at around 50
percent. Be careful not to increase the humidity too far—to the point where moisture begins
to condense on the equipment. Also, use antistatic spray, which is available commercially, to
reduce static buildup on clothing and carpets.
Electromagnetic Interference
When compared to the other dangers we’ve discussed in this chapter, electromagnetic
interference (EMI), also known as radio frequency interference (RFI), is by far the least
dangerous. EMI really poses no threats to you in terms of bodily harm. What it can do is
make your equipment or network malfunction.
Network devices
The popularity of wireless networking devices has introduced the possibility of interference.
The most popular wireless networking standards, 802.11b/g/n, use the 2.4GHz range for
transmissions. Bluetooth devices happen to use the same frequency. In theory, they won’t
interfere with each other because they use different modulation techniques. In practice,
interference between the two types of devices can happen.
Magnets
Magnets work by generating an electromagnetic field. It might make sense, then, that this
field could cause electromagnetic interference. For the most part, you don’t need to worry
about this unless you have huge magnets at work. Do note, however, that many motors use
magnets, which can cause interference. For example, one of our friends used to have his
computer on the opposite side of a wall from his refrigerator. Whenever the compressor
kicked in, his monitor display would become wavy and unreadable. It was time to move his
home office. Another common culprit is desk fans. Put a desk fan next to a CRT and turn the
fan on. What happens to the display? It will become wavy. This is another example of EMI.
Cordless phones
Cordless phones can operate at a variety of frequencies. Some of the more common ones
are 900MHz, 1.9GHz, 2.4GHz, and 5.8GHz. Many of these are common ranges for
computer equipment to operate in as well.
Microwave ovens
Microwave ovens are convenient devices to heat food and beverages. The radiation they
generate is typically in the 2.45GHz range, although it can vary slightly. If a microwave is
being used near your computer, you'll often see a distorted display, just as if a fan or motor
were running nearby.
Natural Elements
Computers should always be operated in cool environments away from direct sunlight and
water sources. This is also true when you’re working on computers. We know that heat is an
enemy of electrical components. Dirt and dust act as great insulators, trapping heat inside
components. When components run hotter than they should, they break down faster.
To ensure your personal safety, here are some important techniques to always consider
before moving equipment:
The first thing to always check is that it’s unplugged. There’s nothing worse (and
potentially more dangerous) than getting yanked because you’re still tethered.
Securely tie the power cord to the device, or remove it altogether if possible.
Remove any loose jewelry and secure long hair or neckties.
Lift with your legs, not your back (bend at the knees when picking something up, not
at the waist).
Do not twist when lifting.
Maintain the natural curves of the back and spine when lifting.
Keep objects close to your body and at waist level.
Push rather than pull if possible.
The muscles in the lower back aren’t nearly as strong as those in the legs or other parts of
the body. Whenever lifting, you want to reduce the strain on those lower-back muscles as
much as possible. If you want, use a back belt or brace to help you maintain the proper
position while lifting.
Most of the time, computers can be opened and devices removed with nothing more than a
simple screwdriver. But if you do a lot of work on PCs, you’ll definitely want to have
additional tools on hand.
Computer toolkits are readily available on the Internet or at any electronics store. They come
in versions from inexpensive (under $10) kits that have around 10 pieces to kits that cost
several hundred dollars and have more tools than you will probably ever need. Below is an
example of a basic 11-piece PC toolkit. All of these tools come in a handy zippered case so
it’s hard to lose them.
Looking at the figure, from left to right you have two nut drivers (1⁄4″ and 3⁄16″), a 1⁄8″ flat
screwdriver, a #0 Phillips screwdriver, a T-15 Torx driver, a screw tube, an integrated circuit
(IC) extractor, tweezers, a three-claw retriever, a #1 Phillips screwdriver, and a 3⁄16″ flat
screwdriver. A favorite of ours is the three-claw retriever because screws like to fall and hide
in tiny places. While most of these tools are incredibly useful, an IC extractor probably won’t
be. In today’s environment, it’s rare to find an IC that you can extract, much less find a
reason to extract one.
Cables are a common cause of tripping. If at all possible, run cables through drop ceilings or
through conduits to keep them out of the way. If you need to lay a cable through a trafficked
area, use a cable floor guard to keep the cables in place and safe from crushing. Floor
guards come in a variety of lengths and sizes (for just a few cables or for a lot of cables).
Another useful tool to keep cables under control is a cable tie, like the one shown in below.
It’s simply a plastic tie that holds two or more cables together. They come in different sizes
and colors so you’re bound to find one that suits your needs.
Atmospheric conditions that you need to be aware of include areas with high static electricity
or excessive humidity. This is especially important for preventing electrostatic discharge, as
we’ve already discussed. Finally, be aware of high-voltage areas. Computers do need
electricity to run but only in measured amounts. Running or fixing computers in high-voltage
areas can cause problems for the electrical components and can cause problems for you if
something should go wrong.
It is estimated that more than 25 percent of all the lead (a poisonous substance) in landfills
today is a result of consumer electronics components. Because consumer electronics
(televisions, VCRs, stereos) contain hazardous substances, many states require that they be
disposed of as hazardous waste. Computers are no exception. Monitors contain several
carcinogens and phosphors as well as mercury and lead. The computer itself may contain
several lubricants and chemicals as well as lead. Printers contain plastics and chemicals
such as toners and inks that are also hazardous. All of these items should be disposed of
properly.
On the flip side, the environment is also hazardous to computers. Temperature, humidity,
water, and air quality can have dramatic effects on a computer’s performance. And we know
that computers require electricity; too much or too little can be a problem.
With all of these potential issues, you might find yourself wondering, “Can’t we all just get
along?” In the following sections, we will talk about how to make our computers and the
environment coexist as peacefully as possible.
Some of our computers sit in the same dark, dusty corner for their entire lives. Other
computers are carried around, thrown into bags, and occasionally dropped. Either way, the
physical environment in which our computers exist can have a major effect on how long they
last. It’s smart to periodically inspect the physical environment to ensure that there are no
working hazards. Routinely cleaning components will also extend their useful life, and so will
ensuring that the power supplying them is maintained as well.
Maintaining Power
As electronics, computers need a power source. Laptops can free you from your power cord
leash for a while, but only temporarily. Most people realize that having too much power (a
power surge) is a bad thing because it can fry electronic components. Having too little
power, such as when a blackout occurs, can also wreak havoc on electrical circuits. Power
blackouts are generally easy to detect. Power sags without a complete loss, called a
brownout, are also very damaging to electrical components but oftentimes go unnoticed.
Devices that actually attempt to keep power surges at bay are called surge protectors. They
often look just like a power strip so it’s easy to mistake them for each other, but protectors
are more expensive. They have a fuse inside that is designed to blow if it receives too much
current, preventing that current from being transferred to the devices plugged into it. Surge protectors
may also have plug-ins for RJ-11 (phone), RJ-45 (Ethernet), and BNC (coaxial cable)
connectors as well.
Surge protector
Cleaning Systems
The cleanliness of a computer is extremely important. Buildup of dust, dirt, and oils can
prevent various mechanical parts of a computer from operating. Cleaning them with the right
cleaning compounds is equally important. Using the wrong compounds can leave residue
behind that is more harmful than the dirt you are trying to remove.
Most computer cases and monitor cases can be cleaned by using mild soapy water on a
clean, lint-free cloth. Do not use any kind of solvent-based cleaner on either monitor or LCD
screens because doing so can cause discoloration and damage to the screen surface. Most
often, a simple dusting with a damp cloth (moistened with water) will suffice. Make sure the
power is off before you put anything wet near a computer. Dampen (don’t soak) a cloth in
mild soap solution and wipe the dirt and dust from the case. Then wipe the moisture from the
case with a dry, lint-free cloth. Anything with a plastic or metal case can be cleaned in this
manner.
If you spill anything on a keyboard, you can clean it by soaking it in distilled, demineralized
water and drying it off. The extra minerals and impurities have been removed from this type
of water, so it will not leave any traces of residue that might interfere with the proper
operation of the keyboard after cleaning. The same holds true for the keyboard’s cable and
its connector.
The electronic connectors of computer equipment, on the other hand, should never touch
water. Instead, use a swab moistened in distilled, denatured isopropyl alcohol (also known
as electronics or contact cleaner and found in electronics stores) to clean contacts. Doing so
will take oxidation off the copper contacts.
One unique challenge when cleaning printers is spilled toner. It sticks to everything and
should not be inhaled. Use an electronics vacuum that is designed specifically to pick up
toner. A normal vacuum’s filter isn’t fine enough to catch all the particles, so the toner may
be circulated into the air. Normal electronics vacuums may melt the toner instead of picking
it up. If you get toner on your clothes, use a magnet to get it out (toner is half iron).
A computer vacuum
Each piece of computer equipment you purchase comes with a manual. In the manual are
detailed instructions on the proper handling and use of that component. In addition, many
manuals give information on how to open the device for maintenance or on whether you
should even open the device at all. Paper manuals should never be thrown away but should
instead be kept for reference. You can always look up information on the Internet as well, but
the printed manual is a convenient backup when no connection is available.
Material Safety Data Sheets (MSDS)
MSDSs are typically associated with hazardous chemicals. Indeed, chemicals do not ship
without them. MSDSs are not intended for consumer use; rather, they’re made for
employees or emergency workers who are consistently exposed to the risks of the particular
product.
NB: If you are interested in searching for free MSDSs, several websites are available, such
as www.msds.com. Many manufacturers of components will also provide MSDSs on their
websites.
Computers contain small amounts of hazardous substances. Some countries are exploring
the option of recycling electrical machines, but most have still not enacted appropriate
measures to enforce their proper disposal. However, we can do a few things as consumers
and caretakers of our environment to promote the proper disposal of computer equipment:
Check with the manufacturer. Some manufacturers will take back outdated
equipment for parts (and may even pay you for them).
Properly dispose of solvents or cleaners (as well as their containers) used with
computers at a local hazardous waste disposal facility.
Disassemble the machine and reuse the parts that are good.
Check out businesses that can melt down the components for the lead or gold
plating.
Contact the Environmental Protection Agency (EPA) for a list of local or regional
waste disposal sites that accept used computer equipment. The EPA’s web address
is www.epa.gov.
Check with the EPA to see if what you are disposing of has a Material Safety Data
Sheet (MSDS). These sheets contain information about the toxicity of a product and
whether it can be disposed of in the trash. They also contain lethal-dose information.
Check with local nonprofit or education organizations that may be interested in using
the equipment.
Check out the Internet for possible waste disposal sites. The table below lists a few
websites we came across that deal with disposal of used computer equipment. A
quick web search will likely locate some in your area.
RE-PC http://www.repc.com
Following the general rule of thumb of recycling your computer components and
consumables is a good way to go. In the following sections, we’ll look at three classifications
of computer-related components and proper disposal procedures for each.
Batteries
Millions of batteries are purchased annually worldwide. Batteries contain several heavy
metals and other toxic ingredients; common chemistries include alkaline, mercury, lead acid,
nickel cadmium, and nickel metal hydride. When batteries are thrown away and deposited into landfills, the
heavy metals inside them will find their way into the ground. From there, they can pollute
water sources and eventually find their way into the supply of drinking water.
NB: Never burn a battery to destroy it. That will cause the battery to explode, which could
result in serious injury.
There are five types of batteries most commonly associated with computers and handheld
electronic devices: alkaline, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion,
and button cell.
Alkaline batteries: Alkaline batteries have been incredibly popular portable batteries for
several decades now. Before 1984, one of the major ingredients in this type of battery was
mercury, which is highly toxic to the environment. In 1984, battery companies began
reduction of mercury levels in batteries, and in 1996 mercury was outlawed in alkaline
batteries in the United States. Still, it’s strongly recommended that you recycle these
batteries at a recycling center. Although newer alkaline batteries contain less mercury than
their predecessors, they are still made of metals and other toxins that contaminate the air
and soil.
Nickel cadmium (NiCd): Nickel cadmium is a popular format for rechargeable batteries. As
their name indicates, they contain high levels of nickel and cadmium. Although nickel is only
semitoxic, cadmium is highly toxic. These types of batteries are categorized by the EPA as
hazardous waste and should be recycled.
Nickel metal hydride (NiMH) and lithium ion: Laptop batteries are commonly made with
NiMH and lithium ion. Unlike the previous types of batteries we have discussed, these are
not considered hazardous waste, and there are no regulations on recycling them. However,
these batteries do contain elements that can be recycled, so it’s still a good idea to go that
route.
Button cell: These batteries are so named because they look like buttons. They're commonly
used in calculators and watches as well as portable computers. They often contain mercury
and silver (and are environmental hazards due to the mercury) and need to be recycled.
You may have noticed a theme regarding the disposal of batteries: recycling. Many people just
throw batteries in the trash and don't think twice about it. However, several laws require the
recycling of many types of batteries, so check your local regulations before tossing them.
Display Devices
Computer monitors (CRT monitors, not LCD ones) are big and bulky. Monitors have
capacitors in them that are capable of retaining a lethal electric charge after they’ve been
unplugged. Most CRT monitors also contain high amounts of lead, several pounds per monitor
in fact. Lead is very dangerous to humans and the environment and must be
dealt with carefully. Other harmful elements found in CRTs include arsenic, beryllium,
cadmium, chromium, mercury, nickel, and zinc.
If you have a monitor to dispose of, contact a computer recycling firm. It’s best to let
professional recyclers handle the monitor for you.
Chemical Solvents and Cans
Cans are generally made from aluminum or other metals, which are not biodegradable. It's
best to always recycle these materials. If the cans were used to hold a chemical solvent or
otherwise hazardous material, contact a hazardous materials disposal center instead of a
recycling center.
Demonstrating Professionalism
You could probably break down professionalism a hundred different ways. For the A+ 220-
801 exam, we’re going to break it down into three parts: communication, behavior, and
dealing with prohibited content.
We currently are living in a casual society. The problem with casual is that perhaps our
society is becoming too casual. New technicians sometimes fail to appreciate that customers
equate casual clothing with a casual attitude. You might think you’re just fixing somebody’s
computer, but you’re doing much more than that. You are saving precious family photos.
You are keeping a small business in operation. This is serious stuff, and nobody wants an
unclean, slovenly person doing these important jobs. Take a look at the figure below. The
technician is casually dressed to hang with his buddies.
If you ran a small business and your primary file server died, leaving 15 employees with
nothing to do, how would you feel about a technician coming into your office looking like
this? I hope your answer would be "not too confident." Every company has some form of
dress code for technicians. A technician who arrives dressed, in a fairly typical example, in a
company polo shirt, khaki pants, and dark shoes inspires far more confidence (trust me on
that score). Please also note that both the shirt and the pants should be wrinkle free.
Communication includes listening to what the user or manager or developer is telling you
and making certain that you understand completely: Approximately half of all communication
should be listening. Even though a user or customer may not fully understand the
terminology or concepts, that doesn’t mean they don’t have a real problem that needs
addressing. You must, therefore, be skilled at not only listening but also translating.
On the surface, dealing with prohibited content or activity might seem like a strange fit here,
but it is definitely a part of being a professional. It’s a part of dealing with problems in
general. You’ll come across problems like this more often than you will probably want to, and
everyone involved will be noticing your conduct as well as how you deal with the problem.
Dealing with it fairly and appropriately in a timely fashion will strengthen your standing in the
eyes of others.
For example, a customer may complain that their CD-ROM drive doesn’t work. What they fail
to mention is that it has never worked and that they installed it. On examining the machine,
you realize that they mounted it with screws that are too long and that these prevent the tray
from ejecting properly.
Have the customer reproduce the error. The most important part of this step is to have the
customer show you what the problem is. The best method we’ve seen of doing this is to ask,
“Show me what ‘not working’ looks like.” That way, you see the conditions and methods
under which the problem occurs. The problem may be a simple matter of doing an operation
incorrectly or performing the operation in the wrong order. During this step, you have the
opportunity to observe how the problem occurs, so pay attention.
Identify recent changes. The user can give you vital information. The most important
question is, “What changed?” Problems don’t usually come out of nowhere. Was a new
piece of hardware or software added? Did the user drop some equipment? Was there a
power outage or a storm? These are the types of questions you can ask a user in trying to
find out what is different. If nothing changed, at least outwardly, then what was going on at
the time of failure? Can the problem be reproduced? Is there a workaround? The point here
is to ask as many questions as you need to in order to pinpoint the source of the trouble.
Use the collected information. Once the problem or problems have been clearly identified,
your next step is to isolate possible causes. If the problem cannot be clearly identified, then
further tests will be necessary. A common technique for hardware and software problems
alike is to strip the system down to bare-bones basics. In a hardware situation, this could
mean removing all interface cards except those absolutely required for the system to
operate. In a software situation, this may mean disabling elements within Device Manager.
Generally, then, you can gradually rebuild the system toward the point where the trouble
started. When you reintroduce a component and the problem reappears, you know that
component is the one causing the problem.
Punctuality
Punctuality is important and should be a part of your planning process: If you tell the
customer you will be there at 10:30 a.m., you need to make every attempt to be there at that
time. If you arrive late, you have given them false hope that the problem will be solved by a
set time. That can lead to anger if you arrive late because it can appear that you are not
taking the problem seriously. Punctuality continues to be important throughout the service
call and does not end with your arrival. If you need to leave to get parts and return, tell the
customer when you will be back, and be there at that time. If for some reason you cannot
return at the expected time, alert the customer and tell them when you can return.
Accountability
When problems occur, you need to be accountable for them and not attempt to pass the
buck to someone else. For example, suppose you are called to a site to put a larger hard
drive into a server. While performing this operation, you inadvertently scrape your feet
across the carpeted floor, build up energy, and zap the memory in the server. Some
technicians would pretend the electrostatic discharge (ESD) never happened, put the new
hard drive in, and then act completely baffled by the fact that problems unrelated to the hard
drive are occurring.
An accountable technician would explain to the customer exactly what happened and
suggest ways of proceeding from that point—addressing and solving the problem as quickly
and efficiently as possible. Accountability also means you do what you say you’re going to
do and ensure that expectations are set and met.
Along those same lines, if a user asks how much longer the server will be down and you
respond that it will be up in 5 minutes only to have it down for 5 more hours, the result can be
resentment and possibly anger. When estimating downtime, always allow for more time than
you think you will need just in case other problems occur. If you greatly underestimate the
time, always inform the affected parties and give them a new time estimate. To use an
analogy that will put it in perspective, if you take your car to get an oil change and the
counter clerk tells you it will be “about 15 minutes,” the last thing you want is to be still sitting
there four hours later.
Some technicians fix a problem and then develop an “I hope that worked and I never hear
from them again” attitude. Calling your customer back (or dropping by their desk) to ensure
that everything is still working right is an amazing way to quickly build credibility and rapport.
Flexibility
Flexibility is another equally important trait for a service technician. While it is important that
you respond to service calls promptly and close them (solve them) as quickly as you can,
you must also be flexible. If a customer cannot have you on site until the afternoon, you must
make your best effort to work them into your schedule around the time most convenient for
them. Likewise, if you are called to a site to solve a problem and the customer brings
another problem to your attention while you are there, you should make every attempt to
address that problem as well. Under no circumstances should you ever give a customer the
cold shoulder or not respond to additional problems because they were not on an initial
incident report.
It’s also important that you are flexible in dealing with challenging or difficult situations. When
someone’s computer has failed, they likely aren’t going to be in a good mood and that can
make them a “difficult customer” to deal with. In situations like these, keep in mind the
following principles:
Don’t minimize their problems. While the customer’s problem might seem trivial to you, it
isn’t to them. Treat the problem as seriously as they’re treating it. Keep in mind that facial
expressions and body language are also important. If someone tells you their problem and
you look at them like they’re delusional, they’re probably going to pick up on that, which can
make the situation worse.
Avoid being judgmental. Don’t blame or criticize. As stated earlier, just focus on what
needs to happen to fix the problem. Accusing the user of causing the problem does not build
rapport.
Confidentiality
The goal of confidentiality is to prevent or minimize unauthorized access to files and folders
and disclosure of data and information. In many instances, laws and regulations require
confidentiality for specific information. For example, Social Security records, payroll and
employee records, medical records, and corporate information are high-value assets. This
information could create liability issues or embarrassment if it fell into the wrong hands. Over
the last few years, there have been a number of cases in which bank account and credit
card numbers were published on the Internet. The cost of these types of confidentiality
breaches often far exceeds the direct losses from the misuse of the information itself.
Respect
In addition to respecting the customer as an individual, you must also respect the tangibles
that are important to the customer. While you may see the monitor a customer is using as an
outdated piece of equipment that should be scrapped, the business owner may see it as a
gift from their children, received when they first started the business.
Treat the customers’ property as if it had value, and you will win their respect. Their property
includes the system you are working on (laptop/desktop computer, monitor, peripherals, and
the like) as well as other items associated with their business. Do not use their telephone to
make personal calls or call other customers while you are at this site. Do not use their
printers or other equipment, unless it is in a role associated with the problem you’ve been
summoned to fix.
Task: The class is to share experiences where they felt respected or disrespected as
customers.
Example:
While I was driving home one night, the ‘low tire pressure’ dashboard light came on. Upon
inspection, I could hear the right-rear tire hissing. I drove to a tire store and explained the
situation.
Privacy
While there is some overlap between confidentiality and privacy, privacy is an area of
computing that is becoming considerably more regulated. As a computing professional, you
must stay current with applicable laws because you’re often one of the primary agents
expected to ensure compliance.
Although the laws provide only a minimum level of privacy protection, you should go out of your way to
respect the privacy of your users beyond what the law establishes. If you discover
information about a user that you should not be privy to, you should not share it with anyone,
and you should alert the customers that their data is accessible and encourage them—if
applicable—to remedy the situation.
Prohibited Content and Activities
What goes into a prohibited-content policy depends on the company you work for. Generally speaking, if
something violates an existing federal or local law, it probably isn’t appropriate for your
network either. Many companies also have strict policies against the possession of
pornographic or hate-related materials on company property. Some go even further than
that, banning any personal files such as downloaded music or movies on work computers.
Regardless of what ends up in your policy, always ensure that you have buy-in from very
senior management so that the policy will be considered valid. Here are some specific
examples of content that might be prohibited:
Adult content
Content that advocates against an individual, group, or organization
Unlicensed copyrighted material
Content related to drugs, alcohol, tobacco, or gambling
Content about hacking, cracking, or other illegal computer activity
A good policy will also contain the action steps to be taken if prohibited content or activity is
spotted. For example, what should you do if you find porn on someone’s work laptop?
The policy should explicitly outline the punishment for performing specific actions or
possessing specific content. The appropriate penalty may very well be based upon the type
of content found. Something that is deemed mildly offensive might result in a verbal or
written warning the first time and a more severe penalty the second time. If your company
has a zero tolerance policy, then employees may be terminated and possibly subject to legal
action.
Finally, after the policy has been established, it’s critical to ensure that all employees are
aware of it and have the proper training. In fact, it’s highly recommended that you have all
employees sign a disclosure saying they have read and understand the policy and that the
signed document be kept in their human resources file. Many companies also require that
employees review the policy yearly and re-sign the affidavit as well.
If you have your policy in place, then responding to an incident should be relatively scripted.
It might not be easy to deal with, but the steps you should take will already be outlined for you. Because we’re
talking about professionalism, now is a good time to remind you that people will be looking at
your reaction as well as your actions. If you see prohibited content and start giggling and
walk away, that probably doesn’t reflect well. Always remember that others are watching
you.
The specific steps you take will depend on your policy, but here are some general
guidelines:
Follow your policies exactly as they are written. Yes, we’ve already said this several
times. It’s crucial that you do this. Not following the policies and procedures can derail your
case against the offender and possibly set you up for problems as well.
If you are the first responder, get a verifier. Your first priority as the first responder is to
identify the improper activity or content. Then, you should always get someone else to verify
the material or action so it doesn’t turn into a situation of your word against someone else’s.
Immediately report the situation through proper channels.
Preserve the data or device. The data or device should immediately be removed from the
possession of the offending party and preserved. This will ensure that the data doesn’t
mysteriously disappear before the proper parties are notified.
Follow the right chain of custody. The removed materials should be secured and turned
over to the proper authorities. Depending on the situation, materials may be held in a safe,
locked location at the office, or they may need to be turned over to local authorities. Always
document the findings and who has custody of the offensive materials.
Once this first part is complete, then it’s a matter of levying the right punishment for the
infraction.
Situations involving prohibited content or activities are not easy to deal with. The accused
person might get angry or confrontational, so it’s important to always have the right people
there to help manage and defuse the situation. If you feel that the situation is severe enough
and are worried about your own personal safety, don’t be afraid to involve the police if
needed. While the situation needs to be dealt with, there’s no sense in putting yourself in
direct danger to do so.