Pc-Hw-Net - Unit-1

This document provides an overview of computer hardware and networking topics. It discusses the introduction to computer hardware, including the history of computers from the abacus to early electronic computers. The main hardware components of a personal computer are identified, such as the CPU, motherboard, RAM, hard drives, power supply, keyboard, mouse, and monitor. Power supply units and uninterruptible power supplies are also covered. The document then discusses computer management and servicing, including assembling and disassembling PCs, BIOS setup, operating systems, and partitioning hard drives. An introduction to networking is provided, including network types, models, topologies, addressing and sub-netting. Common network protocols are also summarized.

Uploaded by Abbas Raji

Sem - V : PC Hardware and Networking

UNIT-1 Introduction to computer hardware :

1.1 Introduction & Definition of Computer


1.1.1 Block Diagram of computer
1.1.2 Classification of computer
1.1.3 Characteristics of Computers
1.1.4 Types of Languages and language translators
1.1.5 History and Generations of computers; Memory units - Bits, Bytes, KB, MB, GB, TB, PB, EB, ZB, YB, Brontobyte, Geopbyte, etc.
IEC Units: kibi, mebi, gibi, tebi, pebi, exbi, zebi, yobi
1.1.6 Computer Software, Types of Software with examples (System/Application/Utility S/W)
1.1.7 Computer Hardware - Introduction to the hardware components of a computer
1.2 Components and its parts
1.2.1 Identifying the Important Hardware Components of a PC - CPU, Motherboard, RAM, HDD, ODD, SMPS, Keyboard, Mouse, Monitor (CRT, LCD, LED), etc.
1.3 SMPS
1.3.1 About SMPS
1.3.2 Types of SMPS
1.3.3 Power stored in UPS
1.3.4 Components and Circuits inside the SMPS Unit
1.4 UPS (Uninterruptible Power Supply)
1.4.1 Types of UPS (Offline/Line-Interactive & Online)
1.4.2 Working Principle of each type of UPS
1.4.3 Connecting, Maintenance and Troubleshooting

UNIT-2 Computer management and servicing :

2.1 Assembling and disassembling PCs


2.2 Introduction to BIOS/CMOS Setup, POST (Power-On Self-Test)
2.2.1 Demonstration of BIOS/CMOS Configuration (Date, Time, Enable/Disable Devices)
2.2.2 Dual BIOS Feature
2.2.3 BIOS/CMOS Setup, Booting Sequence/Boot Order
2.3 Introduction to Operating System
2.3.1 Definition and types of Operating Systems - MS-DOS, Windows 9x/XP/Vista/7/8, Linux, Mac OS, Android, etc.
2.3.2 Process of Booting the Operating System
2.3.3 Win XP/Win 7 Activation and Automatic Updating procedures
2.4 Computer Management
2.4.1 Computer Management, Disk Management, Defragmentation
2.4.2 Services and Applications
2.4.3 Local Users and Groups
2.4.4 Advanced System Settings
2.4.5 Device Manager, Task Manager, Windows Registry
2.5 Partitioning
2.5.1 Partitioning of a Hard Drive - Primary, Extended, Logical partitions using partition tools

UNIT-3 Overview of Networking


3.1 Overview of Networking
3.2 Classification of Networks–LAN, MAN, WAN
3.3 Hardware and Software Components, Wi-Fi, Bluetooth
1 | G. Hemasundara Rao, M.C.A.,
3.5 Network Communication Standards.
3.6 Networking Model - OSI Reference Model, TCP/IP Reference Model
3.7 LAN Cables, Connectors
3.8 Wireless network adapter
3.9 Functions of LAN Tools
3.9.1 Anti-static mat
3.9.2 Anti-static gloves
3.9.3 Crimping Tool
3.9.4 Cable Tester
3.9.5 Cutter
3.9.6 Loopback plug
3.9.7 Toner probe
3.9.8 Punch-down tool
3.9.9 Protocol analyzer
3.9.10 Multimeter
3.10 Network Topologies
3.10.1 Bus
3.10.2 Ring
3.10.3 Star
3.10.4 Mesh
3.10.5 Hybrid Topologies

UNIT- 4 Network Addressing and sub-netting :

4.1 Network Addressing.


4.2 TCP/IP Addressing Scheme
4.3 Components of IP Address and classes
4.4 Sub-netting
4.5 Internet Protocol Addressing - IPv4, IPv6
4.6 Classful addressing and classless addressing
UNIT-5 Network protocols and management
5.1 Protocols in computer networks
5.2 Hyper Text Transfer Protocol (HTTP)
5.2.1 File Transfer Protocol (FTP)
5.2.2 Simple Mail Transfer Protocol (SMTP)
5.2.3 Address Resolution Protocol (ARP)
5.2.4 Reverse Address Resolution Protocol (RARP)
5.3 Telnet, ICMP
5.4 Simple Network Management Protocol (SNMP)
5.5 DHCP, DNS
5.6 Network Management
5.7 Network Monitoring and Troubleshooting
5.8 Remote Monitoring (RMON)

Text Books:
1. "Introduction to Data Communications and Networking", B. Forouzan, Tata McGraw-Hill
2. "Computer Networks", Tanenbaum, PHI
3. "PC and Clones: Hardware, Troubleshooting and Maintenance", B. Govindarajalu, Tata McGraw-Hill



UNIT – 1

Information Technology : The term information technology refers to the subjects related to
creating, managing, processing and exchanging information. It describes an industry that uses
computers, networking, software programming, and other equipment and processes to store,
process, retrieve, transmit and protect information.

Computers are used in almost all walks of life. Computers are widely used in several
fields, such as education, communication, entertainment, banking, medicine, weather forecasting
and scientific research.

Introduction to computers :

Historical Evolution of Computers:

 The mechanical calculating device ABACUS, used principally to add and subtract, was
probably used by the Babylonians as early as 2200 B.C.
 In 1645, the French mathematician Blaise Pascal produced the Pascaline, recognized as the
first mechanical calculator, but it was only capable of performing additions and subtractions.
 In 1820, Charles Babbage, an English mathematician, gave much thought to the design of a
device based on the method of "differences". In 1822 he constructed a working model to
illustrate the principle of the "Difference Engine". Babbage later proposed a design for another
device, the "Analytical Engine". Charles Babbage is known as the "Father of the modern
digital computer".
 In 1890, Herman Hollerith perfected his tabulating system and developed a machine
called the census tabulator.
 The first electronic computers appeared in the early 1940s:
 Early 1940s first electronic computers,

ENIAC (Electronic Numerical Integrator And Calculator),


EDSAC (Electronic Delay Storage Automatic Calculator ),
EDVAC (Electronic Discrete Variable Automatic Computer )
UNIVAC (Universal Automatic Computer).

Computer Definition: A computer is an electronic device which is used to input data, process
the raw data as per the instructions and display the result or output with high speed and accuracy.
(Or)
A computer is an electronic device which processes raw data according to the specified
instructions and produces information as output with high speed and accuracy.
(Or)
A computer can be defined as an electronic device that accepts data, processes it at high
speed according to a set of instructions provided to it and produces the desired result.



INPUT → PROCESS → OUTPUT

Main Functionalities of computer:

1. Accepts Data (Input)

2. Store Data(Storage)

3. Process Data as Per Instructions

4. Retrieve stored Data as and when required

5. Print the result in desired format.

Characteristics of Computer (or) Features of Computer: A computer is equipped with a number
of characteristics that help it handle different problems more efficiently.

A) Speed : A computer performs operations at high speed. It can process millions of
instructions in a fraction of a second. Generally the speed of a computer is measured in terms of
microseconds (10^-6 s), nanoseconds (10^-9 s) and even picoseconds (10^-12 s).

B) Accuracy : A computer always produces accurate results when given valid data and
instructions. Its calculations are 100% error-free; in practice, errors are almost always caused by
faulty data or instructions supplied by humans, not by the machine itself.

C) Diligence : A computer, being a machine, is free from monotony, tiredness and lack of
concentration. It can do repeated work with the same speed and accuracy.

D) Versatility : A computer is a versatile machine. Computers can do a variety of jobs depending
upon the instructions given to them, and can perform multiple tasks simultaneously. The same
machine works in different fields with different applications to perform various tasks.

Examples are playing a game, printing a document, sending e-mails, etc.

E) Storage : A computer can store large volumes of data in small devices. We can store any
kind of data in a computer's storage: text, pictures, sound, video, programs, etc.

F) Reliability : Computerized storage of data is much more reliable than manual storage.
Computers have built-in diagnostic capabilities, which help in continuous monitoring of the
system.



G) Automation : Computers are automatic machines; they can work on any given job
automatically until it is finished, without any human interference.

Computer Limitations:

a) A computer can only perform what it is programmed to do.

b) Computers have no emotions or feelings, because they are machines.

c) Computers possess no intelligence of their own; their I.Q. is zero.

d) Computers cannot learn from experience.

e) Computers are dependent on human beings.

f) A computer cannot make the judgments a human being makes in day-to-day life.

Applications of Computers (or) Uses of Computers : Computers are widely used in several
fields, such as business, education, communication, entertainment, banking, medicine, weather
forecasting and scientific research.

 Desktop publishing (Photoshop, PageMaker, etc)


 Engineering
 Medical
 Home
 Accountancy
 Word processing
 Government Sector
 Private Sector
 Internet
 Graphics and Multimedia
 Sports
 Defense Etc….

BLOCK DIAGRAM OF COMPUTER (OR) COMPUTER ORGANIZATION: Computers vary in
size, speed and capacity depending on circuit or hardware design, but they share the same
functional organization. All types of computers follow the same basic logical structure for
converting raw data into information.



Basically, there are three essential units of a computer:

1. Input Unit

2. Central processing Unit (C.P.U)

3. Output Unit

1. Input Unit: This unit contains the devices with the help of which we enter data and
instructions into the computer. An input device converts the data and instructions into binary
(machine) form for acceptance by the computer. The most commonly used input device is the
keyboard.

2. Central Processing Unit (C.P.U): The CPU is considered the "Brain of the Computer". It is
also called the microprocessor. All major calculations and comparisons are made inside the CPU.

CPU can be divided into different units:

 Arithmetic and Logical Unit (A.L.U)


 Control Unit (C.U)
 Memory Unit (M.U)

i) Arithmetic and Logical Unit (A.L.U): This is the unit where the actual execution of instructions
takes place. All arithmetic calculations such as addition, subtraction, multiplication and division,
as well as all logical (decision-making) operations, i.e. comparisons, are done in the ALU.

ii) Control Unit (C.U): The control unit controls all the activities of the computer and each and
every part of it. The control unit acts as a monitor that tells the other components what to do,
when to do it, and how to do it.

iii) Memory Unit (M.U): This unit, also called the storage unit, is used to store and retrieve
instructions and data. There are two types of memory.



a. Primary or main memory: This memory is directly accessed by the
computer's CPU. It is made up mainly of semiconductor memories in the form of chips.
E.g. RAM, ROM, Cache memory.
b. Secondary memory: This memory is not directly accessed by the CPU. The
information or data in these memories is first transferred to RAM, i.e. main memory, and can
then be accessed by the CPU.
E.g. Hard disk, magnetic disk, magnetic tape, floppy disk.

3. Output Unit: The output unit receives the stored results from the memory unit and converts
them into a form the user can understand. This unit supplies the converted results to the outside
world through output devices. Some generally used output devices are monitors and printers.

Types of Computers (or) Classification of Computers :

The classification may depend on size, technology, area of technology, type of data processed etc.

I. According to purpose wise


II. According to the logic used
III. According to the size and capacity



I. According to purpose: Depending upon the purpose of use, the computers can be General or
Special purpose.

General Purpose Computers: These computers can be used for all general needs of all
environments & users. These are the versatile computers that can perform a variety of jobs for a
variety of Environments. Some general works are calculating accounts, writing letters, playing
games, watching movies and accessing Internet etc. Ex: personal computers.

Special Purpose Computers: These computers are specially designed to perform a specific task in a
specific environment. That is why these computers are not versatile; they are designed, made and
used for only a single job. Ex: Super computers.

II. According to Logical Technology: According to the logic used by the computer, it can be
classified into Analog, Digital, and Hybrid computers.

Analog Computers: Analog computers are used to measure the physical quantities like pressure,
temperature, speed etc. These computers accept input data in the form of signals and convert
them to numeric values. For example: A thermometer does not perform any calculations but
measures the temperature of the body.

[Figure: representation of the flow of data in an analog computer]

Digital Computers: The computers which accept the data in the form of binary digits (bits)
representing high (1) or low (0) signals are called digital computers. These computers basically
work by counting and adding the binary digits.
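The counting-and-adding idea can be seen directly in any language that accepts binary literals. A minimal sketch in Python (the two numbers are arbitrary examples):

```python
# A digital computer works on binary digits (bits); adding two numbers
# amounts to adding their bit patterns. 0b1011 is 11, 0b0110 is 6.
a, b = 0b1011, 0b0110
total = a + b
print(total, bin(total))  # -> 17 0b10001
```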

Hybrid Computers: These computers have the features of both digital and analog computers.
These computers are useful in those environments, where both digital and analog signals are used
in processing.

Ex: E.C.G machine in hospitals, Robot etc.

Classification of Digital Computer Systems: According to the size, application areas and
capabilities, computers can be classified as Micro, Mini, Mainframe and Super computers.



1. Super computer

2. Mainframe computer

3. Mini computer

4. Micro computer

a) Desktop

b) Laptop

c) Hand-Held Models

Super Computers:

i. These computers are characterized as being the fastest, with very high processing speed,

very large size, most powerful and most expensive (millions of dollars).

ii. These computers contain multiple processors that work together to solve a single problem.

iii. These computers have huge main memories and secondary storage.

iv. These are widely used in complex scientific applications like

a. Weather forecasting

b. Defense

c. Nuclear Energy Research

d. Genetic Engineering

e. Geological Data

Some Examples are

CRAY-1 (first super computer in the world).

PARAM (first super computer in India).

BLUE GENE (one of the fastest super computers in the world, developed by IBM).

Mainframes Computers:

i. A mainframe computer is a very large computer; it requires proper air conditioning and is
capable of handling and processing very large amounts of data quickly.

ii. These computers are used in large organizations like government agencies, Banks,



Railway and Flight Reservations where a large number of people need frequent access to

the same data. They are generally used in centralized databases.

iii. There are basically two types of terminals that can be used with mainframe systems:

a. Dumb terminals: Dumb terminals consist of only a monitor and a keyboard or mouse.
They do not have their own CPU and memory; they use the mainframe system's CPU and
memory.

b. Intelligent terminals: Intelligent terminals have their own processor and can perform
some processing operations. These are used as servers on the World Wide Web.

Mini Computers :

i. Mini computers are smaller, cheaper and slower than Mainframes.

ii. These computers are widely used in business, Education, Hospitals Etc. They are also
used as servers in Local Area Networks (LAN).

iii. A large number of computers can be connected to a network with a mini computer acting
as a server. It is capable of supporting 4 to about 200 simultaneous users.

Micro Computers :

i. The term 'Micro' suggests only the size, not the capability; micro computers are capable of
performing all input-output operations.

ii. In Micro computers, microprocessor performs the function of ALU and CU and it is connected
with primary, secondary memory and I/O devices.

iii. A Microcomputer is a computer designed for individual use. These include Desktop PC, Laptop
and handheld models Etc.

COMPUTER GENERATIONS : The word "generation" describes a stage of technological
development or innovation; due to technological advancement, different changes come into the
computer system. There are five computer generations.

Generation           Period               Electronic Component used

First Generation     1950s                Vacuum Tubes
Second Generation    1960s                Transistors
Third Generation     1970s                Integrated Circuits
Fourth Generation    1980s                Microprocessor with VLSI
Fifth Generation     Present and Beyond   Artificial Intelligence



First Generation (1950s) - Vacuum Tubes : First generation computers used vacuum tubes
as the main electronic component responsible for processing data.

Features of First Generation Computer :

1. These computers were physically large in size and required large rooms for installation.

2. Magnetic drums were used for memory; input was based on punched cards and paper
tape, and output was generated on printouts.

3. They also consumed a lot of electricity and generated a lot of heat. These computers
required continuous maintenance and large air conditioners.
4. They lacked in versatility and speed.

5. They were unreliable and non-portable.

6. These computers could be programmed using machine language, which is the lowest-level
of programming language. (Since machine language was used, these computers were
difficult to program and use).

7. These computers had limited commercial use.

8. The UNIVAC and ENIAC Computers are examples of first generation computers.

Second Generation (1960s) – Transistor : The vacuum tubes of the first generation were replaced
by transistors to arrive at the second generation.

Features of Second Generation Computer :

1. Since Transistor is a small device, the physical size of computer was greatly reduced.

2. Computers became smaller, faster, cheaper, energy–efficient and more reliable than their
predecessors.
3. These were more portable and generated less heat.

4. Magnetic core was used as primary memory and magnetic disks as secondary storage
devices. However, input was still based on punched cards and paper tape, and output was
generated on printouts.

5. These computers were more versatile than first generation.

6. Frequent maintenance and air conditioning were still required.

7. Assembly language was used to program computers. Hence, programming became more
time efficient and less cumbersome.
8. High-level programming languages such as COBOL and FORTRAN were developed.



Third Generation (1970s) – Integrated Circuits : These computers used Integrated Circuits
(ICs) of silicon chips in the place of transistors. An Integrated Circuit consists of a single chip with
many components such as transistors and resistors fabricated on it.

Features of Third Generation Computer :

1. They were smaller, cheaper, and more reliable than their predecessors.

2. They required less power and generated very little heat.

3. Instead of punched cards and printouts, users interacted with third generation computers
through keyboards and monitors.
4. They had a higher processing speed than the second generation.

5. Since hardware rarely failed, the maintenance cost was quite low.

6. Development of standardized High-level languages like PASCAL, BASIC etc.

7. They were suitable for scientific and commercial applications.

Fourth Generation (1980s) – Microprocessor : The microprocessor launched the fourth


generation of computers with thousands of ICs built on to a single silicon chip. The fourth
generation computers led to an era of Large Scale Integration (LSI) and Very Large Scale
Integration (VLSI) technology.

Features of Fourth Generation Computer :

1. Fourth generation computers became more powerful, compact, reliable and affordable.
As a result, they gave rise to the personal computer (PC) revolution.

2. These machines consume less power and generate a negligible amount of heat; hence they
do not require air conditioning.
3. Hardware failure is negligible, so minimal maintenance is required.

4. GUIs, the mouse and handheld devices were developed in this generation.

5. Interconnection of computers leads to better communication and resource sharing.


Fifth Generation (Present and Beyond) – Artificial Intelligence

Fifth generation computers, based on Artificial Intelligence (AI), are still in development. Their
speed is extremely high. SLSI, AI and parallel processing were developed in this
generation.

Features of Fifth Generation Computer:



Artificial Intelligence (AI): This refers to a series of related technologies that try to simulate and
reproduce human behaviour, including thinking, speaking and reasoning. AI comprises a group of
related technologies: expert systems (ES), robotics, speech recognition, games (chess,
PlayStation), etc.

Mega Chips: Fifth generation computers will use Super Large Scale Integrated (SLSI) chips, which
will result in the Production of Microprocessors having millions of electronic components on a
single chip.

Parallel Processing: A computer using parallel processing accesses several instructions at once
and works on them at the same time through multiple central processing units.

TYPES OF LANGUAGES :
A computer programming language is any of various languages for expressing a set of detailed
instructions for a digital computer. Such instructions can be executed directly when they are in the
computer manufacturer-specific numerical form known as machine language, after a simple
substitution process when expressed in a corresponding assembly language, or after translation
from some “higher-level” language. Although there are many computer languages, relatively few
are widely used.
Machine and assembly languages are “low-level,” requiring a programmer to manage
explicitly all of a computer’s features of data storage and operation. In contrast, high-level
languages shield a programmer from worrying about such considerations and provide a notation
that is more easily written and read by programmers.

Language types :
Machine and assembly languages :
A machine language consists of the numeric codes for the operations that a particular
computer can execute directly. The codes are strings of 0s and 1s, or binary digits (“bits”), which
are frequently converted both from and to hexadecimal (base 16) for human viewing and
modification. Machine language instructions typically use some bits to represent operations, such
as addition, and some to represent operands, or perhaps the location of the next instruction.
Machine language is difficult to read and write, since it does not resemble conventional
mathematical notation or human language, and its codes vary from computer to computer.
Assembly language is one level above machine language. It uses short mnemonic codes for
instructions and allows the programmer to introduce names for blocks of memory that hold data.
One might thus write “add pay, total” instead of “0110101100101000” for an instruction that adds
two numbers.
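As the passage notes, raw machine code is usually viewed in hexadecimal, and an instruction word packs operation and operand fields into its bits. A small Python sketch using the 16-bit pattern from the text (the 4/6/6-bit field layout below is an invented example, not any real machine's format):

```python
word = "0110101100101000"   # the 16-bit instruction pattern quoted above

value = int(word, 2)        # read the bit string as a base-2 number
print(hex(value))           # -> 0x6b28: the compact hexadecimal view

# Slice out hypothetical fields: a 4-bit operation code and two 6-bit operands.
opcode, op1, op2 = word[:4], word[4:10], word[10:]
print(opcode, int(op1, 2), int(op2, 2))  # -> 0110 44 40
```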

Assembly language is designed to be easily translated into machine language. Although


blocks of data may be referred to by name instead of by their machine addresses, assembly
language does not provide more sophisticated means of organizing complex information. Like
machine language, assembly language requires detailed knowledge of internal computer
architecture. It is useful when such details are important, as in programming a computer to
interact with peripheral devices (printers, scanners, storage devices, and so forth).



Algorithmic languages :
Algorithmic languages are designed to express mathematical or symbolic computations. They can
express algebraic operations in notation similar to mathematics and allow the use of subprograms
that package commonly used operations for reuse. They were the first high-level languages.

FORTRAN :
The first important algorithmic language was FORTRAN (formula translation), designed in
1957 by an IBM team led by John Backus. It was intended for scientific computations with real
numbers and collections of them organized as one- or multidimensional arrays. Its control
structures included conditional IF statements, repetitive loops (so-called DO loops), and a GOTO
statement that allowed non-sequential execution of program code. FORTRAN made it convenient
to have subprograms for common mathematical operations, and built libraries of them.
FORTRAN was also designed to translate into efficient machine language. It was immediately
successful and continues to evolve.

ALGOL
ALGOL (algorithmic language) was designed by a committee of American and European
computer scientists during 1958–60 for publishing algorithms, as well as for doing computations.
Like LISP (described in the next section), ALGOL had recursive subprograms—procedures that
could invoke themselves to solve a problem by reducing it to a smaller problem of the same kind.
ALGOL introduced block structure, in which a program is composed of blocks that might contain
both data and instructions and have the same structure as an entire program. Block structure
became a powerful tool for building large programs out of small components.
ALGOL contributed a notation for describing the structure of a programming language, Backus–
Naur Form, which in some variation became the standard tool for stating the syntax (grammar) of
programming languages. ALGOL was widely used in Europe, and for many years it remained the
language in which computer algorithms were published. Many important languages, such
as Pascal and Ada (both described later), are its descendants.
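The recursive-subprogram idea that ALGOL and LISP introduced can be sketched in a few lines of Python, used here only as a modern stand-in:

```python
def factorial(n: int) -> int:
    """Recursion in the ALGOL/LISP sense: the procedure invokes itself
    on a smaller problem of the same kind until a base case is reached."""
    if n <= 1:                        # base case: nothing left to reduce
        return 1
    return n * factorial(n - 1)       # reduce to a smaller instance

print(factorial(5))  # -> 120
```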

C
The C programming language was developed in 1972 by Dennis Ritchie at Bell
Laboratories (then part of AT&T) for programming computer operating systems. Its capacity to
structure data and programs through the composition of smaller units is comparable to that of
ALGOL. It uses a compact notation and provides the programmer with the ability to operate with
the addresses of data as well as with their values. This ability is important in systems
programming, and C shares with assembly language the power to exploit all the features of a
computer’s internal architecture. C, along with its descendant C++, remains one of the most
common languages.

Business-oriented languages
COBOL
COBOL (common business oriented language) has been heavily used by businesses since its
inception in 1959. A committee of computer manufacturers and users and U.S. government
organizations established CODASYL (Committee on Data Systems and Languages) to develop and
oversee the language standard in order to ensure its portability across diverse systems.
COBOL uses an English-like notation—novel when introduced. Business computations organize
and manipulate large quantities of data, and COBOL introduced the record data structure for such
tasks. A record clusters heterogeneous data—such as a name, an ID number, an age, and an
address—into a single unit. This contrasts with scientific languages, in which homogeneous arrays
of numbers are common. Records are an important example of “chunking” data into a single
object, and they appear in nearly all modern languages.
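A COBOL-style record can be sketched in Python with a dataclass; the field names simply mirror the passage's example (name, ID number, age, address), and the values are invented:

```python
from dataclasses import dataclass

@dataclass
class EmployeeRecord:
    # One record clusters heterogeneous data into a single unit,
    # unlike a homogeneous array of numbers.
    name: str
    id_number: int
    age: int
    address: str

rec = EmployeeRecord("A. Smith", 1042, 34, "12 High St, New York")
print(rec.name, rec.age)  # -> A. Smith 34
```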

SQL
SQL (Structured Query Language) is a language for specifying the organization
of databases (collections of records). Databases organized with SQL are called relational, because
SQL provides the ability to query a database for information that falls in a given relation. For
example, a query might be “find all records with both last name Smith and city New York.”
Commercial database programs commonly use an SQL-like language for their queries.
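The example query above can be run against Python's built-in SQLite driver; the table and column names below are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE people (last_name TEXT, city TEXT)")
conn.executemany("INSERT INTO people VALUES (?, ?)",
                 [("Smith", "New York"), ("Smith", "Boston"),
                  ("Jones", "New York")])

# "Find all records with both last name Smith and city New York."
rows = conn.execute(
    "SELECT * FROM people WHERE last_name = 'Smith' AND city = 'New York'"
).fetchall()
print(rows)  # -> [('Smith', 'New York')]
```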

Education-oriented languages
BASIC
BASIC (beginner’s all-purpose symbolic instruction code) was designed at Dartmouth
College in the mid-1960s by John Kemeny and Thomas Kurtz. It was intended to be easy to learn
by novices, particularly non-computer science majors, and to run well on a time-sharing
computer with many users. It had simple data structures and notation and it was interpreted: a
BASIC program was translated line-by-line and executed as it was translated, which made it easy
to locate programming errors.
Its small size and simplicity also made BASIC a popular language for early personal computers. Its
recent forms have adopted many of the data and control structures of other contemporary
languages, which makes it more powerful but less convenient for beginners.

Pascal
About 1970 Niklaus Wirth of Switzerland designed Pascal to teach structured
programming, which emphasized the orderly use of conditional and loop control structures
without GOTO statements. Although Pascal resembled ALGOL in notation, it provided the ability to
define data types with which to organize complex information, a feature beyond the capabilities of
ALGOL as well as FORTRAN and COBOL. User-defined data types allowed the programmer to
introduce names for complex data, which the language translator could then check for correct
usage before running a program.
During the late 1970s and ’80s, Pascal was one of the most widely used languages for
programming instruction. It was available on nearly all computers, and, because of its familiarity,
clarity, and security, it was used for production software as well as for education.

Logo
Logo originated in the late 1960s as a simplified LISP dialect for education; Seymour
Papert and others used it at MIT to teach mathematical thinking to schoolchildren. It had a more
conventional syntax than LISP and featured “turtle graphics,” a simple method for
generating computer graphics. (The name came from an early project to program a turtlelike
robot.) Turtle graphics used body-centred instructions, in which an object was moved around a
screen by commands, such as “left 90” and “forward,” that specified actions relative to the current
position and orientation of the object rather than in terms of a fixed framework. Together with
recursive routines, this technique made it easy to program intricate and attractive patterns.
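The body-centred idea can be simulated in a dozen lines of Python (a toy model, not the Logo implementation): the turtle's state is a position plus a heading, and every command is interpreted relative to that state rather than to fixed screen coordinates.

```python
import math

class Turtle:
    def __init__(self):
        # Start at the origin, facing "up" (90 degrees in math convention).
        self.x, self.y, self.heading = 0.0, 0.0, 90.0

    def left(self, degrees):
        self.heading += degrees           # rotate relative to current heading

    def forward(self, dist):
        rad = math.radians(self.heading)  # move along the current heading
        self.x += dist * math.cos(rad)
        self.y += dist * math.sin(rad)

t = Turtle()
t.left(90)      # "left 90": now facing along the negative x-axis
t.forward(10)   # "forward 10"
print(round(t.x), round(t.y))  # -> -10 0
```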

Hypertalk
Hypertalk was designed as “programming for the rest of us” by Bill Atkinson for Apple’s
Macintosh. Using a simple English-like syntax, Hypertalk enabled anyone to combine text,
15 | G. Hemasundara Rao, M.C.A.,
graphics, and audio quickly into “linked stacks” that could be navigated by clicking with
a mouse on standard buttons supplied by the program. Hypertalk was particularly popular among
educators in the 1980s and early ’90s for classroom multimedia presentations. Although
Hypertalk had many features of object-oriented languages (described in the next section), Apple
did not develop it for other computer platforms and let it languish; as Apple’s market share
declined in the 1990s, a new cross-platform way of displaying multimedia left Hypertalk all but
obsolete (see the section World Wide Web display languages).

Object-oriented languages :
Object-oriented languages help to manage complexity in large programs. Objects package
data and the operations on them so that only the operations are publicly accessible and internal
details of the data structures are hidden. This information hiding made large-scale programming
easier by allowing a programmer to think about each part of the program in isolation. In addition,
objects may be derived from more general ones, “inheriting” their capabilities. Object-oriented
programming began with the Simula language (1967), which added information hiding to ALGOL.
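Both ideas, hiding internal details and deriving objects from more general ones, can be sketched in Python (the class names here are illustrative):

```python
class Counter:
    """Packages a value with its operations; the value itself stays internal."""
    def __init__(self):
        self._count = 0  # leading underscore: internal detail by convention

    def increment(self):
        self._count += 1

    def value(self):
        return self._count

class StepCounter(Counter):
    """Derived class: inherits Counter's operations but changes the step size."""
    def __init__(self, step):
        super().__init__()
        self._step = step

    def increment(self):
        self._count += self._step

c = StepCounter(5)
c.increment()
c.increment()
print(c.value())  # prints 10
```

Code that uses a Counter needs to know only increment and value; whether the count is stored as an integer, in a file, or elsewhere is hidden, and StepCounter "inherits" everything it does not override.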

C++ :
The C++ language, developed by Bjarne Stroustrup at AT&T in the mid-1980s,
extended C by adding objects to it while preserving the efficiency of C programs. It has been one of
the most important languages for both education and industrial programming. Large parts of
many operating systems were written in C++. C++, along with Java, has become popular for
developing commercial software packages that incorporate multiple interrelated applications. C++
is considered one of the fastest languages and is very close to low-level languages, thus allowing
complete control over memory allocation and management. This very feature and its many other
capabilities also make it one of the most difficult languages to learn and handle on a large scale.

C#
C# (pronounced C sharp like the musical note) was developed by Anders Hejlsberg at
Microsoft in 2000. C# has a syntax similar to that of C and C++ and is often used for developing
games and applications for the Microsoft Windows operating system.

Ada
Ada was named for Augusta Ada King, countess of Lovelace, who was an assistant to the
19th-century English inventor Charles Babbage, and is sometimes called the first computer
programmer. Ada, the language, was developed in the early 1980s for the U.S. Department of
Defense for large-scale programming. It combined Pascal-like notation with the ability to package
operations and data into independent modules. Its first form, Ada 83, was not fully object-
oriented, but the subsequent Ada 95 provided objects and the ability to construct hierarchies of
them. While no longer mandated for use in work for the Department of Defense, Ada remains an
effective language for engineering large programs.

Java
In the early 1990s Java was designed by Sun Microsystems, Inc., as a programming
language for the World Wide Web (WWW). Although it resembled C++ in appearance, it was
object-oriented. In particular, Java dispensed with lower-level features, including the ability to
manipulate data addresses, a capability that is neither desirable nor useful in programs for
distributed systems. In order to be portable, Java programs are translated by a Java Virtual
Machine specific to each computer platform, which then executes the Java program. In addition to
adding interactive capabilities to the Internet through Web “applets,” Java has been widely used
for programming small and portable devices, such as mobile telephones.

Visual Basic
Visual Basic was developed by Microsoft to extend the capabilities of BASIC by adding
objects and “event-driven” programming: buttons, menus, and other elements of graphical user
interfaces (GUIs). Visual Basic can also be used within other Microsoft software to program small
routines. Visual Basic was succeeded in 2002 by Visual Basic .NET, a substantially different
language built on the .NET platform alongside C#, a language with similarities to C++.

Python
The open-source language Python was developed by Dutch programmer Guido van Rossum
in 1991. It was designed as an easy-to-use language, with features such as using indentation
instead of brackets to group statements. Python is also a very compact language, designed so that
complex jobs can be executed with only a few statements. In the 2010s, Python became one of the
most popular programming languages, along with Java and JavaScript.

Declarative languages
Declarative languages, also called nonprocedural or very high level, are programming
languages in which (ideally) a program specifies what is to be done rather than how to do it. In
such languages there is less difference between the specification of a program and its
implementation than in the procedural languages described so far. The two common kinds of
declarative languages are logic and functional languages.

Logic programming languages, of which PROLOG (programming in logic) is the best known,
state a program as a set of logical relations (e.g., a grandparent is the parent of a parent of
someone). Such languages are similar to the SQL database language. A program is executed by an
“inference engine” that answers a query by searching these relations systematically, making
whatever inferences are needed along the way. PROLOG has been used extensively in natural language
processing and other AI programs.
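The grandparent example can be imitated in Python by searching a set of parent facts, a rough sketch of what an inference engine does with the PROLOG rule `grandparent(X, Z) :- parent(X, Y), parent(Y, Z).` (the names in the facts below are made up for illustration):

```python
# Facts: (parent, child) pairs.
parents = {("alice", "bob"), ("bob", "carol"), ("bob", "dave")}

def grandparents(facts):
    """Answer 'who is a grandparent of whom?' by joining the relation with itself."""
    return {(gp, gc)
            for (gp, p) in facts
            for (p2, gc) in facts
            if p == p2}

print(sorted(grandparents(parents)))
# [('alice', 'carol'), ('alice', 'dave')]
```

Note that the program states only the relation to be satisfied; the systematic search over the facts plays the role of PROLOG's inference engine.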

Functional languages have a mathematical style. A functional program is constructed by applying
functions to arguments. Functional languages, such as LISP, ML, and Haskell, are used as research
tools in language development, in automated mathematical theorem provers, and in some
commercial projects.
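The "applying functions to arguments" style can be shown in Python, which borrows several functional features; this is a small sketch, not LISP, ML, or Haskell themselves:

```python
from functools import reduce

# A functional program is built by composing functions, not by mutating state.
square = lambda x: x * x
total = reduce(lambda acc, x: acc + x, map(square, [1, 2, 3, 4]), 0)
print(total)  # 1 + 4 + 9 + 16 = 30

# Functions are values: compose builds a new function from two others.
def compose(f, g):
    return lambda x: f(g(x))

inc_then_square = compose(square, lambda x: x + 1)
print(inc_then_square(3))  # (3 + 1)^2 = 16
```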

Scripting languages
Scripting languages are sometimes called little languages. They are intended to solve
relatively small programming problems that do not require the overhead of data declarations and
other features needed to make large programs manageable. Scripting languages are used for
writing operating system utilities, for special-purpose file-manipulation programs, and, because
they are easy to learn, sometimes for considerably larger programs.
Perl was developed in the late 1980s, originally for use with the UNIX operating system. It
was intended to have all the capabilities of earlier scripting languages. Perl provided many ways
to state common operations and thereby allowed a programmer to adopt any convenient style. In
the 1990s it became popular as a system-programming tool, both for small utility programs and
for prototypes of larger ones. Together with other languages discussed below, it also became
popular for programming computer Web “servers.”
Document formatting languages
Document formatting languages specify the organization of printed text and graphics. They
fall into several classes: text formatting notation that can serve the same functions as a word
processing program, page description languages that are interpreted by a printing device, and,
most generally, markup languages that describe the intended function of portions of a document.

TeX
TeX was developed during 1977–86 as a text formatting language by Donald Knuth,
a Stanford University professor, to improve the quality of mathematical notation in his books. Text
formatting systems, unlike WYSIWYG (“What You See Is What You Get”) word
processors, embed plain text formatting commands in a document, which are then interpreted by
the language processor to produce a formatted document for display or printing. TeX marks italic
text, for example, as {\it this is italicized}, which is then displayed as this is italicized.

PostScript
PostScript is a page-description language developed in the early 1980s by Adobe Systems
Incorporated on the basis of work at Xerox PARC (Palo Alto Research Center). Such languages
describe documents in terms that can be interpreted by a personal computer to display the
document on its screen or by a microprocessor in a printer or a typesetting device.

SGML
SGML (standard generalized markup language) is an international standard for the
definition of markup languages; that is, it is a meta language. Markup consists of notations called
tags that specify the function of a piece of text or how it is to be displayed.
SGML is used to specify DTDs (document type definitions). A DTD defines a kind of
document, such as a report, by specifying what elements must appear in the document—e.g.,
<Title>—and giving rules for the use of document elements, such as that a paragraph may appear
within a table entry but a table may not appear within a paragraph.

World Wide Web Display Languages


HTML
The World Wide Web is a system for displaying text, graphics, and audio retrieved over
the Internet on a computer monitor. Each retrieval unit is known as a Web page, and such pages
frequently contain “links” that allow related pages to be
retrieved. HTML (hypertext markup language) is the markup language for encoding Web pages. It
was designed by Tim Berners-Lee at CERN, the European particle physics laboratory in Switzerland,
around 1990 and is defined by an SGML DTD. HTML markup tags specify document elements such as
headings, paragraphs, and tables. They mark up a document for display by a computer
program known as a Web browser. The browser interprets the tags, displaying the headings,
paragraphs, and tables in a layout that is adapted to the screen size and fonts available to it.
HTML documents also contain anchors, which are tags that specify links to other Web
pages. An anchor has the form <A HREF=“http://www.britannica.com”>Encyclopædia
Britannica</A>, where the quoted string is the URL (uniform resource locator) to which the link
points (the Web “address”) and the text following it is what appears in a Web browser, underlined
to show that it is a link to another page. What is displayed as a single page may also be formed
from multiple URLs, some containing text and others graphics.



XML
HTML does not allow one to define new text elements; that is, it is not
extensible. XML (extensible markup language) is a simplified form of SGML intended for
documents that are published on the Web. Like SGML, XML uses DTDs to define document types
and the meanings of tags used in them. XML adopts conventions that make it easy to parse, such as
requiring that document elements be marked by both a beginning and an ending tag, as in
<BEGIN>…</BEGIN>. XML provides more kinds of hypertext links than HTML, such as
bidirectional links and links relative to a document subsection.
Because an author may define new tags, an XML DTD must also contain rules that instruct a
Web browser how to interpret them—how an entity is to be displayed or how it is to generate an
action such as preparing an e-mail message.

Web scripting :
Web pages marked up with HTML or XML are largely static documents. Web scripting can
add information to a page as a reader uses it or let the reader enter information that may, for
example, be passed on to the order department of an online business. CGI (common gateway
interface) provides one mechanism; it transmits requests and responses between the reader’s
Web browser and the Web server that provides the page. The CGI component on the server
contains small programs called scripts that take information from the browser system or provide
it for display. A simple script might ask the reader’s name, determine the Internet address of the
system that the reader uses, and print a greeting. Scripts may be written in any programming
language, but, because they are generally simple text-processing routines, scripting languages like
Perl are particularly appropriate.

Elements of programming :
Despite notational differences, contemporary computer languages provide many of the
same programming structures. These include basic control structures and data structures. The
former provide the means to express algorithms, and the latter provide ways to organize
information.

Control structures :
Programs written in procedural languages, the most common kind, are like recipes, having
lists of ingredients and step-by-step instructions for using them. The three basic control structures
in virtually every procedural language are:

 1. Sequence—combine the liquid ingredients, and next add the dry ones.
 2. Conditional—if the tomatoes are fresh then simmer them, but if canned, skip this step.
 3. Iterative—beat the egg whites until they form soft peaks.

Sequence is the default control structure; instructions are executed one after another. They
might, for example, carry out a series of arithmetic operations, assigning results to variables, to
find the roots of a quadratic equation ax² + bx + c = 0. The conditional IF-THEN or IF-THEN-ELSE
control structure allows a program to follow alternative paths of execution. Iteration, or looping,
gives computers much of their power. They can repeat a sequence of steps as often as necessary,
and appropriate repetitions of quite simple steps can solve complex problems.
These control structures can be combined. A sequence may contain several loops; a loop may
contain a loop nested within it, or the two branches of a conditional may each contain sequences
with loops and more conditionals. In the “pseudocode” used in this article, “*” indicates
multiplication and “←” is used to assign values to variables. The following programming fragment
employs the IF-THEN structure for finding one root of the quadratic equation, using the quadratic
formula:

The quadratic formula assumes that a is nonzero and that the discriminant (the portion
within the square root sign) is not negative (in order to obtain a real number root). Conditionals
check those assumptions:

 IF a = 0 THEN
 ROOT ← −c/b
 ELSE
 DISCRIMINANT ← b*b − 4*a*c
 IF DISCRIMINANT ≥ 0 THEN
 ROOT ← (−b + SQUARE_ROOT(DISCRIMINANT))/(2*a)
 ENDIF
 ENDIF
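The same logic translates almost directly into Python; note the parentheses around `2 * a`, which the division requires:

```python
import math

def one_root(a, b, c):
    """Return one real root of a*x^2 + b*x + c = 0, or None if there is none."""
    if a == 0:
        return -c / b                      # linear case (assumes b is nonzero)
    discriminant = b * b - 4 * a * c
    if discriminant >= 0:
        return (-b + math.sqrt(discriminant)) / (2 * a)
    return None                            # complex roots: no real answer

print(one_root(1, -3, 2))  # x^2 - 3x + 2 = 0 → root 2.0
```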

Data structures
Whereas control structures organize algorithms, data structures organize information. In
particular, data structures specify types of data, and thus which operations can be performed on
them, while eliminating the need for a programmer to keep track of memory addresses. Simple
data structures include integers, real numbers, Booleans (true/false), and characters or character
strings. Compound data structures are formed by combining one or more data types.
Record components, or fields, are selected by name; for example, E.SALARY might
represent the salary field of record E. An array element is selected by its position or index; A[10] is
the element at position 10 in array A. A FOR loop (definite iteration) can thus run through an array
with index limits (FIRST TO LAST in the following example) in order to sum its elements:
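In the pseudocode notation used earlier, such a summing loop might read:

 SUM ← 0
 FOR I ← FIRST TO LAST
  SUM ← SUM + A[I]
 ENDFOR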

Arrays and records have fixed sizes. Structures that can grow are built
with dynamic allocation, which provides new storage as required. These data structures have
components, each containing data and references to further components (in machine terms, their
addresses). Such self-referential structures have recursive definitions.

Abstract data types (ADTs) are important for large-scale programming. They package
data structures and operations on them, hiding internal details. For example, an ADT table
provides insertion and lookup operations to users while keeping the underlying structure,
whether an array, list, or binary tree, invisible. In object-oriented languages, classes are ADTs and
objects are instances of them.
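An ADT table of this kind can be sketched in Python; the dictionary inside is the hidden representation and could be replaced by a list or a binary tree without changing the interface users see:

```python
class Table:
    """ADT: users see insert and lookup only, never the structure underneath."""
    def __init__(self):
        self._entries = {}  # hidden representation (here, a hash table)

    def insert(self, key, value):
        self._entries[key] = value

    def lookup(self, key):
        return self._entries.get(key)  # None when the key is absent

t = Table()
t.insert("E", {"SALARY": 50000})
print(t.lookup("E")["SALARY"])  # prints 50000
```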

Why Computer Language Translator?

The computer understands only machine code; it cannot directly execute assembly or high-level
language source. A program is therefore needed to convert the source code into object code so that
the computer can understand it. This is the job of the language translator: the programmer
creates source code, and the translator converts it to machine-readable form (object code).

Types of Computer Language Translator



There are 3 types of computer language translators:
1. Compiler
2. Interpreter
3. Assembler

Compiler

The compiler is a language translator program that converts code written in a human-
readable language, such as high-level language, to a low-level computer language, such as
assembly language, machine code, or object code, and then produces an executable program.

In the process of compiling, the source code is first sent to a lexer, which scans it, splits it
into tokens, and keeps them in memory. The tokens go to the parser, where patterns are recognized
and converted into an AST (abstract syntax tree) describing the structure of the program. An
optimizer (if present) then removes unused variables, unreachable code, and other waste. Next, the
code generator converts the AST into machine instructions specific to the target platform, and the
linker puts all the code together into an executable program. An error handler runs in every phase
to detect and report errors.
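The phases above can be sketched for a toy language of integer additions. A real compiler is far more elaborate, but the lexer → parser → code generator shape is the same (the stack machine at the end stands in for the target platform):

```python
import re

def lex(source):
    """Lexer: scan the source and split it into tokens."""
    return re.findall(r"\d+|\+", source)

def parse(tokens):
    """Parser: recognize patterns and build a tiny AST of ('+', left, right) tuples."""
    tree = int(tokens[0])
    for i in range(1, len(tokens), 2):     # tokens alternate: number, '+', number...
        tree = ("+", tree, int(tokens[i + 1]))
    return tree

def codegen(tree):
    """Code generator: emit instructions for a toy stack machine."""
    if isinstance(tree, int):
        return [("PUSH", tree)]
    _, left, right = tree
    return codegen(left) + codegen(right) + [("ADD", None)]

def run(program):
    """A toy machine that executes the 'compiled' program."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack.pop()

print(run(codegen(parse(lex("1 + 2 + 39")))))  # prints 42
```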

Some of the well-known compilers are:

 Borland Turbo C
 Javac
 GNU compiler
 Xcode
 Roslyn
 Visual C#
 CLISP
 Oracle Fortran

Some common compiled languages are C and C++.

Characteristics of Compiler

 Source code is converted to machine code before runtime. So, code execution at runtime is
faster.
 Takes a lot of time to analyze and process the program; the compiling process is
complicated.
 Program execution is fast.
 Cannot create an executable program when there is a compile-time error in the program.

Advantage of Compiler

 The whole program is compiled, which makes it more secure than interpreted code: once the
code is compiled, viewing it reveals nothing a reader can easily understand.
 Compiled Code is faster because compiled code is near to machine code.
 The program can run directly from object code and doesn't need source code.
 The compiler translates commands into machine-language binaries, so no other program or
application needs to be installed to execute the resulting file. The only requirement is that
the software be compiled for each operating system: if an application is compiled for a
particular OS and architecture, the user simply needs an OS running on that same
architecture.

Disadvantage of Compiler

 For the executable file to be created, the source code must be error-free.
 For a large application, compiling may take much longer than for a small program.
 Compiling an application creates a new compiled file, which takes additional memory and
space.

Interpreter

The interpreter, like the compiler, converts high-level language to machine-level
language, but by a different method: an interpreter transforms the source code into machine code
at run time, whereas the compiler converts the code to machine code, i.e. an executable file,
before the program starts.

The interpreter executes the source code directly, line by line: it takes one line at a
time, translates it, has the processor run it, then moves to the next line, and repeats until the
program is finished.
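A toy interpreter for simple assignment statements shows this line-at-a-time loop; each line is translated and executed before the next one is even looked at (the mini-language here is an invention for illustration):

```python
def interpret(source):
    """Execute 'name = expression' lines one at a time, top to bottom."""
    variables = {}
    for line in source.splitlines():
        line = line.strip()
        if not line:
            continue                      # skip blank lines
        name, expr = line.split("=", 1)
        # Translate and run this one line immediately, then move on.
        variables[name.strip()] = eval(expr, {}, variables)
    return variables

program = """
x = 2
y = x * 10
z = x + y
"""
print(interpret(program)["z"])  # prints 22
```

A consequence of this structure is that an error on line 3 is only discovered after lines 1 and 2 have already run, which matches the interpreter behaviour described in the comparisons below.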

Some of the popular interpreted languages are PHP, Python, JavaScript, and Ruby.
Characteristics of Interpreter

 Spends less time converting to machine code.
 No separate compilation stage is present; machine instructions are generated directly.
 Program execution is slower because the code is converted to machine code at runtime.
 Easy for debugging and finding errors.

Advantage of Interpreter
Some of the main advantages of interpreters are as follows:

 Because interpreted code is not machine-dependent, it can operate on any system and be
shared between platforms without incompatibility issues.
 The interpreter does not compile the code like a compiler, allowing you to publish the work
to a live environment more quickly.
 An interpreter does not create additional new files like a compiler, which saves memory
and space.

Disadvantage of Interpreter
Some of the main disadvantages of Interpreter are as follows:
 Interpreted code runs more slowly.
 Because interpreted code is distributed as human-readable source, the code (and any data
embedded in it) is less secure.
 To run the code, a client or anybody else who has access to the shared source code must
have an interpreter installed on their system.

Assembler
Assembler converts code written in assembly language into machine-level code. Assembly
language consists of mnemonics for machine opcodes, so an assembler translates each mnemonic
directly into its machine instruction in a 1:1 relation.
The computer understands only machine code, but programming in machine language is
difficult for developers. So low-level assembly language (ASM), designed for a specific
processor family, represents the instructions in symbolic form.
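The 1:1 mnemonic-to-opcode translation can be sketched with a simple lookup table; the mnemonics and opcode values below are made up for illustration and do not belong to any real instruction set:

```python
# Hypothetical instruction set: each mnemonic maps to exactly one opcode byte.
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(source):
    """Translate each mnemonic line into one (opcode, operand) machine word."""
    machine_code = []
    for line in source.strip().splitlines():
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        machine_code.append((OPCODES[mnemonic], operand))  # 1:1 translation
    return machine_code

program = """LOAD 10
ADD 32
STORE 64
HALT"""
print(assemble(program))
# [(1, 10), (2, 32), (3, 64), (255, 0)]
```

Because each source line becomes exactly one machine instruction, assembly is fast and the output size is predictable, which is why the table below describes assemblers as the quickest of the three translators.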

Characteristics of Assembler
 As a 1:1 relationship exists between mnemonics and machine instructions, translating is
very fast.
 It requires less amount of memory and execution time.
 It does complex hardware-specific jobs in an easy way.
 It is ideal for a time-critical job.

Difference Between Compiler, Interpreter, and Assembler

Compiler: Converts programs written in a high-level language into machine language before
runtime. Used by: C, C++. Compiled code runs faster, but compiling may take a long time.

Interpreter: Converts programs written in a high-level language into machine language at
runtime. Used by: Python, PHP, Ruby, PostScript, LISP, VB. The interpreter translates the code
every time it is run, so execution is slower.

Assembler: Converts an assembly language program into machine-level language. Used by: GNU GAS.
Runs fast, as translating between the two low-level languages depends only on the processor's
instruction set.

Compiler vs Interpreter

Compiler:
 Translates a high-level language program into machine code before runtime.
 Needs a lot of time to analyze the whole source code.
 Overall program execution is relatively faster.
 Generates error messages only after scanning the whole program; all errors are shown at once.
 Debugging is relatively harder, since an error can be anywhere in the code.
 Generates intermediate code.
 Compiles the code before execution.
 Requires more memory, because the intermediate object code resides in memory.
 Error detection and removal are more difficult.
 Languages that use compilers: C, C++, Java, C#, Scala.
 Compiler software is larger in size.
 Focus: compile once, run anytime.
 The program does not run until all errors are fixed.

Interpreter:
 Translates a high-level language program into machine code at runtime.
 Takes less time to analyze the source code.
 Overall program execution is relatively slower.
 Shows one error at a time; once it is fixed, interpreting continues and the next error (if
any) is shown.
 Debugging is easier, since translation continues until an error is found and shown.
 Does not generate intermediate code.
 Compilation and execution take place simultaneously.
 Requires less memory, as no intermediate object code is created.
 Error detection and removal are easy.
 Languages that use interpreters: Python, Perl, Ruby, PHP.
 Interpreter software is generally smaller in size.
 Focus: translate every time the program runs.
 A program runs and stops only when an error is found.



Brief History of Computers
The naive understanding of computation had to be overcome before the true power of
computing could be realized. The inventors who worked tirelessly to bring the computer into the
world had to realize that what they were creating was more than just a number cruncher or a
calculator. They had to address all of the difficulties associated with inventing such a machine,
implementing the design, and actually building the thing. The history of the computer is the history of
these difficulties being solved.

19th Century
1801 – Joseph Marie Jacquard, a weaver and businessman from France, devised a loom that
employed punched wooden cards to automatically weave cloth designs.
1822 – Charles Babbage, a mathematician, invented the steam-powered calculating machine capable
of calculating number tables. The “Difference Engine” idea failed owing to a lack of technology at the
time.
1843 – The world’s first computer program was written by Ada Lovelace, an English mathematician.
Lovelace also included a step-by-step description of how to compute Bernoulli numbers using
Babbage’s machine.
1890 – Herman Hollerith, an inventor, creates the punch card technique used to calculate the 1880
U.S. census. He would go on to start the corporation that would become IBM.

Early 20th Century


1930 – Differential Analyzer was the first large-scale automatic general-purpose mechanical
analogue computer invented and built by Vannevar Bush.
1936 – Alan Turing had an idea for a universal machine, which he called the Turing machine, that
could compute anything that could be computed.
1939 – Hewlett-Packard was founded in a garage in Palo Alto, California, by Bill Hewlett and David
Packard.



1941 – Konrad Zuse, a German inventor and engineer, completed his Z3 machine, the world’s first
working programmable, fully automatic digital computer. However, the machine was destroyed during
a World War II bombing strike on Berlin.
1941 – J.V. Atanasoff and graduate student Clifford Berry devise a computer capable of solving 29
equations at the same time; it was the first computer able to store data in its primary memory.
1945 – University of Pennsylvania academics John Mauchly and J. Presper Eckert create the Electronic
Numerical Integrator and Computer (ENIAC). It was Turing-complete and capable of solving “a vast
class of numerical problems” by reprogramming, earning it the title of “Grandfather of computers.”
1946 – Eckert and Mauchly begin work on the UNIVAC I (Universal Automatic Computer), which in 1951
became the first general-purpose electronic digital computer produced in the United States for
corporate applications.
1949 – The Electronic Delay Storage Automatic Calculator (EDSAC), developed by a team at the
University of Cambridge, is the “first practical stored-program computer.”
1950 – The Standards Eastern Automatic Computer (SEAC) was built in Washington, DC, and it was
the first stored-program computer completed in the United States.

Late 20th Century


1953 – Grace Hopper, a computer scientist, develops one of the first compiler-based languages; her
work leads to COBOL (COmmon Business-Oriented Language, 1959), which allowed a computer user to
give the computer instructions in English-like words rather than numbers.
1954 – John Backus and a team of IBM programmers created the FORTRAN programming language,
an acronym for FORmula TRANslation. In addition, IBM developed the 650.
1958 – The integrated circuit, sometimes known as the computer chip, was created by Jack Kilby and
Robert Noyce.
1962 – Atlas, the computer, makes its appearance. It was the fastest computer in the world at the
time, and it pioneered the concept of “virtual memory.”
1964 – Douglas Engelbart proposes a modern computer prototype that combines a mouse and a
graphical user interface (GUI).
1969 – Bell Labs developers, led by Ken Thompson and Dennis Ritchie, revealed UNIX, an operating
system developed in the C programming language that addressed program compatibility difficulties.
1970 – The Intel 1103, the first Dynamic Random Access Memory (DRAM) chip, is unveiled by Intel.
1971 – The floppy disc was invented by Alan Shugart and a team of IBM engineers. In the same year,
Xerox developed the first laser printer, which not only produced billions of dollars but also heralded
the beginning of a new age in computer printing.
1973 – Robert Metcalfe, a member of Xerox’s research department, created Ethernet, which is used
to connect many computers and other gear.
1974 – Personal computers were introduced into the market. Among the first were the Altair, the
Scelbi, the Mark-8, the IBM 5100, and Radio Shack’s TRS-80.
1975 – Popular Electronics magazine touted the Altair 8800 as the world’s first minicomputer kit in
January. Paul Allen and Bill Gates offer to build software in the BASIC language for the Altair.



1976 – Apple Computers is founded by Steve Jobs and Steve Wozniak, who expose the world to the
Apple I, the first computer with a single-circuit board.
1977 – At the first West Coast Computer Faire, Jobs and Wozniak announce the Apple II. It has
colour graphics and a cassette interface for storage.
1978 – The first computerized spreadsheet program, VisiCalc, is introduced.
1979 – WordStar, a word processing tool from MicroPro International, is released.
1981 – IBM unveils its first personal computer, code-named “Acorn,” which has an Intel CPU, two
floppy drives, and a colour display, and runs Microsoft’s MS-DOS operating system.
1983 – The CD-ROM, which could carry 550 megabytes of pre-recorded data, hit the market. This
year also saw the release of the Gavilan SC, the first portable computer with a flip-form design and
the first to be offered as a “laptop.”
1984 – Apple launched the Macintosh, announced during a Super Bowl XVIII commercial. It was priced
at $2,500.
1985 – Microsoft introduces Windows, which enables multitasking via a graphical user interface. In
addition, the programming language C++ has been released.
1990 – Tim Berners-Lee, an English programmer and scientist, creates HyperText Markup Language,
widely known as HTML, and coins the term “WorldWideWeb.” His system includes the first browser, a
server, HTML, and URLs.
1993 – The Pentium CPU improves the usage of graphics and music on personal computers.
1995 – Microsoft’s Windows 95 operating system was released. A $300 million promotional
campaign was launched to get the news out. Sun Microsystems introduces Java 1.0, followed by
Netscape Communications’ JavaScript.
1996 – At Stanford University, Sergey Brin and Larry Page created the Google search engine.
1998 – Apple introduces the iMac, an all-in-one Macintosh desktop computer. These PCs cost $1,300
and came with a 4GB hard drive, 32MB RAM, a CD-ROM, and a 15-inch monitor.
1999 – Wi-Fi, an abbreviation for “wireless fidelity,” is created, originally covering a range of up to
300 feet.

21st Century
2000 – The USB flash drive is first introduced in 2000. They were speedier and had more storage
space than other storage media options when used for data storage.
2001 – Apple releases Mac OS X, later renamed OS X and eventually simply macOS, as the successor
to its conventional Mac Operating System.
2003 – Customers could purchase AMD’s Athlon 64, the first 64-bit CPU for consumer computers.
2004 – Facebook began as a social networking website.
2005 – Google acquires Android, a mobile phone OS based on Linux.
2006 – Apple’s MacBook Pro was available. The Pro was the company’s first dual-core, Intel-based
mobile computer.



Amazon Web Services was also launched in 2006, including Amazon Elastic Compute Cloud (EC2) and
Amazon Simple Storage Service (S3).
2007 – The first iPhone was produced by Apple, bringing many computer operations into the palm of
our hands. Amazon also released the Kindle, one of the first electronic reading systems, in 2007.
2009 – Microsoft released Windows 7.
2011 – Google introduces the Chromebook, which runs Google Chrome OS.
2014 – The University of Michigan Micro Mote (M3), the world’s smallest computer, was constructed.
2015 – Apple introduces the Apple Watch. Windows 10 was also released by Microsoft.
2016 – The world’s first reprogrammable quantum computer is built.

Types of Computers :

1. Analog Computers – Analog computers are built from components such as gears and
levers, with no electronic parts. One advantage of analog computation is that
designing and building an analog computer to tackle a specific problem can be quite
straightforward.
2. Digital Computers – Information in digital computers is represented in discrete form,
typically as sequences of 0s and 1s (binary digits, or bits). A digital computer is a system or
gadget that can process any type of information in a matter of seconds. Digital computers are
categorized into many different types. They are as follows:
a. Mainframe computers – It is a computer that is generally utilized by large enterprises for
mission-critical activities such as massive data processing. Mainframe computers were
distinguished by massive storage capacities, quick components, and powerful
computational capabilities. Because they were complicated systems, they were managed
by a team of systems programmers who had sole access to the computer. These machines
are now referred to as servers rather than mainframes.
b. Supercomputers – The most powerful computers to date are commonly referred to as
supercomputers. Supercomputers are enormous systems that are purpose-built to solve
complicated scientific and industrial problems. Quantum mechanics, weather forecasting,
oil and gas exploration, molecular modelling, physical simulations, aerodynamics, nuclear
fusion research, and cryptoanalysis are all done on supercomputers.
c. Minicomputers – A minicomputer is a type of computer that has many of the same
features and capabilities as a larger computer but is smaller in size. Minicomputers, which
were relatively small and affordable, were often employed in a single department of an
organization and were often dedicated to a specific task or shared by a small group.
d. Microcomputers – A microcomputer is a small computer that is based on a
microprocessor integrated circuit, often known as a chip. A microcomputer is a system
that incorporates at a minimum a microprocessor, program memory, data memory, and
input-output system (I/O). A microcomputer is now commonly referred to as a personal
computer (PC).
e. Embedded processors – These are miniature computers that control electrical and
mechanical processes with basic microprocessors. Embedded processors are often simple
in design, have limited processing capability and I/O capabilities, and need little power.
Ordinary microprocessors and microcontrollers are the two primary types of embedded
processors. Embedded processors are employed in systems that do not require the
computing capability of traditional devices such as desktop computers, laptop computers,
or workstations.

Computer Memory :
A computer is a device that is electronic and that accepts data, processes that data, and
gives the desired output. It performs programmed computation with great accuracy & higher
speed. Or in other words, the computer takes data as input and stores the data/instructions in the
memory (use them when required). It then processes the data and converts it into useful
information. Finally, it gives the output.

What is Memory :
Computer memory is just like the human brain. It is used to store data/information and
instructions. It is a data storage unit or device where the data to be processed and the
instructions required for processing are stored. Both the input and the output can be stored
here.

Characteristics of Main Memory:


 It is faster computer memory as compared to secondary memory.
 It is a semiconductor memory.
 It is usually a volatile memory.
 It is the main memory of the computer.
 A computer system cannot run without primary memory.

In general, memory is of three types:

 Primary memory
 Secondary memory
 Cache memory

1. Primary Memory: It is also known as the main memory of the computer system. It is used
to store data and programs or instructions during computer operations. It uses
semiconductor technology and hence is commonly called semiconductor memory. Primary
memory is of two types:

(i) RAM (Random Access Memory): It is a volatile memory. Volatile memory retains information
only while power is supplied: if the power supply fails or is interrupted, all the data and
information in this memory is lost. RAM is used while booting up, or starting, the computer. It
temporarily stores the programs/data that are to be executed by the processor. RAM is of two
types:

 S RAM (Static RAM): It uses transistors, and the circuits of this memory are capable of
retaining their state as long as power is applied. This memory consists of a number of flip-flops,
with each flip-flop storing 1 bit. It has a lower access time and hence is faster.
 D RAM (Dynamic RAM): It uses capacitors and transistors and stores the data as a charge on
the capacitors. They contain thousands of memory cells. The charge on the capacitors must be
refreshed every few milliseconds. This memory is slower than S RAM.

(ii) ROM (Read Only Memory): It is a non-volatile memory. Non-volatile memory retains
information even when the power supply fails or is interrupted. ROM is used to store information
that is used to operate the system. As the name read-only memory suggests, we can only read the
programs and data stored on it. It contains electronic fuses that can be programmed for a piece
of specific information. The information in ROM is stored in binary format. It is also known as
permanent memory. ROM is of four types:

 MROM (Masked ROM): The first ROMs were hard-wired devices with a pre-programmed
collection of data or instructions. Masked ROMs are a type of low-cost ROM that works in this
way.
 PROM (Programmable Read Only Memory): This read-only memory can be modified exactly
once by the user. The user purchases a blank PROM and uses a PROM programmer to put the
required contents into it. Its contents cannot be erased once written.

 EPROM (Erasable Programmable Read Only Memory): It is an extension of PROM in which the
contents of the ROM can be erased by exposing it to ultraviolet rays for nearly 40 minutes.

 EEPROM (Electrically Erasable Programmable Read Only Memory): Here the written
contents can be erased electrically. You can erase and reprogram an EEPROM up to 10,000
times. Erasing and programming take very little time, nearly 4-10 ms (milliseconds). Any
area in an EEPROM can be wiped and programmed selectively.

2. Secondary Memory: It is also known as auxiliary memory or backup memory. It is a
non-volatile memory used to store large amounts of data or information. The data or information
stored in secondary memory is permanent, and it is slower than primary memory. The CPU cannot
access secondary memory directly. The data/information from auxiliary memory is first
transferred to main memory, and then the CPU can access it.

Characteristics of Secondary Memory :

 It is a slow memory but reusable.
 It is a reliable and non-volatile memory.
 It is cheaper than primary memory.
 The storage capacity of secondary memory is large.
 A computer system can run without secondary memory.
 In secondary memory, data is stored permanently even when the power is off.

Types of secondary memory :

(i) Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin, magnetic
coating on it that is used for magnetic recording. Bits are recorded on tape as magnetic patches
called RECORDS that run along many tracks. Typically, 7 or 9 bits are recorded concurrently. Each
track has one read/write head, which allows data to be recorded and read as a sequence of
characters. It can be stopped, started moving forward or backward, or rewound.

(ii) Magnetic Disks: A magnetic disc is a circular metal or plastic plate coated with magnetic
material. Both sides of the disc are used. Bits are stored on the magnetized surfaces in locations
called tracks, which run in concentric rings. Tracks are typically broken into pieces called
sectors.
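The track-and-sector geometry above determines a disk's raw capacity. A minimal sketch of the arithmetic (the figures below are invented for illustration, not taken from any particular drive):

```python
# Rough capacity of a magnetic disk from its geometry.
# Capacity = surfaces x tracks/surface x sectors/track x bytes/sector.

def disk_capacity_bytes(surfaces, tracks_per_surface, sectors_per_track, bytes_per_sector):
    return surfaces * tracks_per_surface * sectors_per_track * bytes_per_sector

# Illustrative geometry (not a real drive's specification):
cap = disk_capacity_bytes(surfaces=4, tracks_per_surface=1024,
                          sectors_per_track=63, bytes_per_sector=512)
print(cap, "bytes =", round(cap / 2**30, 3), "GiB")
```

The same formula explains why increasing track density (tracks per inch) directly multiplies total capacity.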

Hard discs are discs that are permanently attached and cannot be removed by an ordinary user.

(iii) Optical Disks: An optical disk is a laser-based storage medium that can be written to and
read. It is reasonably priced and has a long lifespan. The optical disc can be removed from the
computer by casual users. Types of Optical Disks:

(a) CD-ROM:
 It stands for Compact Disc Read-Only Memory.
 Information is written to the disc by using a controlled laser beam to burn pits into the disc
surface.
 It has a highly reflective surface, which is usually aluminium.
 The diameter of the disc is 5.25 inches.
 The track density is 16,000 tracks per inch.
 The capacity of a CD-ROM is 600 MB, with each sector storing 2048 bytes of data.
 The data transfer rate is about 4800 KB/sec, and the access time is around 80 milliseconds.
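The quoted capacity and transfer rate make it easy to estimate how long reading a full disc would take; a small sketch using the figures from the list above:

```python
# Time to read a full CD-ROM at the quoted sustained transfer rate.
capacity_mb = 600        # CD-ROM capacity from the text
rate_kb_per_s = 4800     # data transfer rate from the text

# Convert MB to KB (1 MB = 1024 KB here), then divide by the rate.
seconds = (capacity_mb * 1024) / rate_kb_per_s
print(f"Full read takes about {seconds:.0f} s (~{seconds / 60:.1f} min)")
```

This ignores the 80 ms access time, which matters only when many small, scattered reads are made rather than one sequential read.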

(b) WORM (Write Once, Read Many):

 A user can only write data once.
 The information is written on the disc using a laser beam.
 It is possible to read the written data as many times as desired.
 They keep lasting records of information but access time is high.
 It is possible to rewrite updated or new data to another part of the disc.
 Data that has already been written cannot be changed.
 Usual size – 5.25 inch or 3.5 inch diameter.
 The usual capacity of a 5.25 inch disk is 650 MB, 5.2 GB, etc.

(c) DVDs:
 The term “DVD” stands for “Digital Versatile/Video Disc,” and there are two sorts of DVDs:
(i) DVD-R (recordable) and (ii) DVD-RW (re-writable)

 DVD-ROMs (Digital Versatile Discs): These are read-only discs that can be used in a variety
of ways. When compared to CD-ROMs, they can store a lot more data. A DVD has a thick
polycarbonate plastic layer that serves as a foundation for the other layers. It is an optical
memory from which data can be read.

 DVD-R: It is a writable optical disc that can be used just once; it is a recordable DVD, a lot
like WORM. DVD-ROMs have capacities ranging from 4.7 to 17 GB. The capacity of a 3.5 inch disk
is 1.3 GB.

3. Cache Memory: It is a type of high-speed semiconductor memory that can help the CPU run
faster. Between the CPU and the main memory, it serves as a buffer. It is used to store the data and
programs that the CPU uses the most frequently.

Advantages of cache memory:


 It is faster than the main memory.
 When compared to the main memory, it takes less time to access it.
 It keeps the programs that can be run in a short amount of time.
 It stores data for temporary use.

Disadvantages of cache memory:


 Because of the semiconductors used, it is very expensive.
 The size of the cache (amount of data it can store) is usually small.
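The buffering role of cache described above can be illustrated with a toy direct-mapped cache; this is a simplified model in Python for teaching purposes, not how a hardware cache is actually built:

```python
# Toy direct-mapped cache: each address maps to exactly one cache line.
class DirectMappedCache:
    def __init__(self, num_lines, backing_memory):
        self.lines = [None] * num_lines   # each entry holds (tag, value)
        self.memory = backing_memory      # the slower "main memory" (a dict here)
        self.hits = self.misses = 0

    def read(self, address):
        index = address % len(self.lines)   # which cache line this address uses
        tag = address // len(self.lines)    # distinguishes addresses sharing a line
        entry = self.lines[index]
        if entry is not None and entry[0] == tag:
            self.hits += 1                  # fast path: served from the cache
            return entry[1]
        self.misses += 1                    # slow path: fetch from main memory
        value = self.memory[address]
        self.lines[index] = (tag, value)    # store for future fast access
        return value

memory = {addr: addr * 10 for addr in range(64)}
cache = DirectMappedCache(num_lines=8, backing_memory=memory)
cache.read(5)    # miss: first access
cache.read(5)    # hit: still cached
cache.read(13)   # miss: 13 % 8 == 5, so it evicts address 5's line
print(cache.hits, cache.misses)
```

The small size noted in the disadvantages shows up here directly: with only 8 lines, addresses 5 and 13 compete for the same line and evict each other.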

Units of Computer Memory :

IEC UNITS :


COMPUTER SOFTWARE : TYPES OF SOFTWARE :

COMPUTER HARDWARE :

Introduction :
Computer Hardware is the physical part of a computer, as distinguished from the computer
software that executes or runs on the hardware. The hardware of a computer is infrequently
changed, while software and data are modified frequently.

Motherboard

The motherboard is the body or mainframe of the computer, through which all other
components interface. It is the central circuit board making up a complex electronic system. A
motherboard provides the electrical connections by which the other components of the system
communicate. The motherboard includes many components, such as the central processing unit (CPU),
random access memory (RAM), firmware, and internal and external buses.

Central Processing Unit :

The Central Processing Unit (CPU; sometimes just called processor) is a machine that can
execute computer programs. It is sometimes referred to as the brain of the computer.

There are four steps that nearly all CPUs use in their operation: fetch, decode, execute,
and write back. The first step, fetch, involves retrieving an instruction from program memory. In the
decode step, the instruction is broken up into parts that have significance to other portions of the
CPU. During the execute step, various portions of the CPU, such as the arithmetic logic unit (ALU)
and the floating point unit (FPU), are connected so they can perform the desired operation. The
final step,
write back, simply writes back the results of the execute step to some form of memory.
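The four-step cycle described above can be sketched as a toy interpreter loop; the instruction set here is invented purely for illustration:

```python
# Toy model of the fetch-decode-execute-write-back cycle.
# The instruction set (LOAD, ADD, HALT) is invented for this example.

program = [
    ("LOAD", 0, 7),     # write back the constant 7 into register 0
    ("LOAD", 1, 5),     # write back the constant 5 into register 1
    ("ADD",  2, 0, 1),  # execute: ALU adds reg[0] + reg[1]; write back to reg[2]
    ("HALT",),
]

registers = [0] * 4
pc = 0                                  # program counter
while True:
    instruction = program[pc]           # 1. fetch from "program memory"
    opcode, *operands = instruction     # 2. decode into opcode + operands
    pc += 1
    if opcode == "HALT":
        break
    elif opcode == "LOAD":              # 3. execute / 4. write back
        dest, value = operands
        registers[dest] = value
    elif opcode == "ADD":
        dest, a, b = operands
        registers[dest] = registers[a] + registers[b]

print(registers)
```

A real CPU pipelines these stages so that one instruction is being fetched while another is decoded and a third executed, but the logical order of the four steps is the same.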

Random Access Memory :


Random access memory (RAM) is fast-access memory that is cleared when the computer is powered
down. RAM attaches directly to the motherboard and is used to store programs that are currently
running. RAM is a set of integrated circuits that allow the stored data to be accessed in any
order (which is why it is called random). There are many different types of RAM. Distinctions
between these types include: writable vs. read-only, static vs. dynamic, volatile vs.
non-volatile, etc.

Firmware :
Firmware is loaded from read-only memory (ROM) and run from the Basic Input-Output
System (BIOS). It is a computer program that is embedded in a hardware device, for example a
microcontroller. As its name suggests, firmware sits somewhere between hardware and software.
Like software, it is a computer program which is executed by a microprocessor or a
microcontroller. But it is also tightly linked to a piece of hardware, and has little meaning
outside of it. Most devices attached to modern systems are special-purpose computers in their
own right, running their own software.

Power Supply :
The power supply as its name might suggest is the device that supplies power to all the
components in the computer. Its case holds a transformer, voltage control, and (usually) a cooling
fan. The power supply converts about 100-120 volts of AC power to low-voltage DC power for the
internal components to use. The most common computer power supplies are built to conform with
the ATX form factor. This enables different power supplies to be interchangeable with different
components inside the computer. ATX power supplies also are designed to turn on and off using a
signal from the motherboard, and provide support for modern functions such as standby mode.

Removable Media Devices :


If you put something into your computer and then take it out, it is most likely a form of
removable media. There are many different removable media devices. The most popular are
probably CD and DVD drives, which almost every computer these days has at least one of. There are
some newer disc drives, such as Blu-ray, which can hold a much larger amount of information than
normal CDs or DVDs.

CD :
CDs are the most common type of removable media. They are inexpensive but also have a
short life-span. There are a few different kinds of CDs. CD-ROMs, which stands for Compact Disc
Read-Only Memory, are popularly used to distribute computer software, although any type of data
can be stored on them. CD-R is another variation, which can only be written to once but can be
read many times. CD-RW (rewritable) discs can be written to more than once as well as read more
than once. Some
other types of CDs which are not as popular include Super Audio CD (SACD), Video Compact Discs
(VCD), Super Video Compact Discs (SVCD), PhotoCD, PictureCD, CD-i, and Enhanced CD.

CD-ROM Drive :
There are two types of devices in a computer that use CDs: a CD-ROM drive and a CD writer.
The CD-ROM drive is used for reading a CD. The CD writer drive can read and write a CD. CD
writers are much more common on new computers than CD-ROM drives. Both kinds of CD drives are
called optical disc drives because they use laser light or electromagnetic waves to read or
write data to or from a CD.

DVD :
DVDs (Digital Versatile Discs) are another popular optical disc storage media format. The
main uses for DVDs are video and data storage. Most DVDs are of the same dimensions as compact
discs. Just like CDs, there are many different variations. DVD-ROM has data which can only be
read and not written. DVD-R and DVD+R can be written once and then function as a DVD-ROM.
DVD-RAM, DVD-RW, and DVD+RW hold data that can be erased and re-written multiple times.
DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured video and
audio content. The devices that use DVDs are very similar to the devices that use CDs. There is a
DVD-ROM drive as well as a DVD writer that work the same way as a CD-ROM drive and CD writer.
There is also a DVD-RAM drive that reads and writes to the DVD-RAM variation of DVD.

Blu-ray
Blu-ray is a newer optical disc storage media format. Its main uses are high-definition video
and data storage. The disc has the same dimensions as a CD or DVD. The term “Blu-ray” comes from
the blue laser used to read and write to the disc. Blu-ray discs can store much more data than
CDs or DVDs. A dual layer Blu-ray disc can store up to 50 GB, almost six times the capacity of a
dual layer DVD. Blu-ray discs have similar devices used to read and write them as CDs have. A
BD-ROM drive can only read a Blu-ray disc, while a BD writer can read and write a Blu-ray disc.
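The "almost six times" comparison follows from the usual dual-layer DVD capacity of about 8.5 GB (a figure assumed here, as the text does not state it):

```python
# Checking the capacity comparison from the text.
dual_layer_bluray_gb = 50.0   # dual-layer Blu-ray capacity, from the text
dual_layer_dvd_gb = 8.5       # typical dual-layer DVD capacity (assumed)

ratio = dual_layer_bluray_gb / dual_layer_dvd_gb
print(f"A dual-layer Blu-ray holds about {ratio:.1f}x a dual-layer DVD")
```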

Floppy Disk
A floppy disk is a type of data storage composed of a disk of thin, flexible (“floppy”)
magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read
and written by a floppy disk drive. Floppy disks are dying out and being replaced by optical
discs and flash drives. Many new computers do not come with floppy drives anymore, but there are
a lot of older ones with floppy drives lying around. While floppy disks are very cheap, the
amount of storage on them, compared to the amount of storage for the price of flash drives,
makes floppy disks unreasonable to use.

Internal Storage :

Internal storage is hardware that keeps data inside the computer for later use and remains
persistent even when the computer has no power. There are a few different types of internal
storage. Hard disks are the most popular type of internal storage. Solid-state drives have
slowly grown in popularity. A disk array controller is popular when you need more storage than a
single hard disk can hold.

Hard Disk Drive :


A hard disk drive (HDD) is a non-volatile storage device which stores digitally encoded data
on rapidly rotating platters with magnetic surfaces. Just about every new computer comes with a
hard disk these days unless it comes with a new solid-state drive. Typical desktop hard disk
drives store between 120 and 400 GB, rotate at 7,200 rpm, and have a media transfer rate of
1 Gbit/s or higher. Hard disk drives are accessed over one of a number of bus types, including
parallel ATA (also called IDE), Serial ATA (SATA), SCSI, Serial Attached SCSI, and Fibre Channel.

Switched Mode Power Supply [SMPS] :
The disadvantages of a linear power supply (LPS), such as lower efficiency, the need for
large-value capacitors to reduce ripple, and heavy, costly transformers, are overcome by
switched mode power supplies.
The working of an SMPS is easily understood by noting that the transistor in an LPS is
used to control the voltage drop, while the transistor in an SMPS is used as a controlled switch.
The working of SMPS :

Input Stage :
The 50 Hz AC input supply signal is given directly to the rectifier and filter circuit
combination, without using any transformer. This output will have many variations, and the
capacitance value of the capacitor should be high enough to handle the input fluctuations. This
unregulated DC is given to the central switching section of the SMPS.

Switching Section
A fast switching device such as a power transistor or a MOSFET is employed in this section,
which switches ON and OFF according to the variations, and this output is given to the primary of
the transformer present in this section. The transformers used here are much smaller and lighter
than the ones used for a 60 Hz supply. They are much more efficient, and hence the power
conversion ratio is higher.

Output Stage :
The output signal from the switching section is again rectified and filtered, to get the required DC
voltage. This is a regulated output voltage which is then given to the control circuit, which is a feedback
circuit. The final output is obtained after considering the feedback signal.

Control Unit
This unit is the feedback circuit, which has many sections. Let us get a clear
understanding of it from the following figure.

The output sensor senses the output signal and feeds it to the control unit.
The signal is isolated from the other
section so that any sudden spikes do not affect the circuitry. A reference voltage is given as
one input, along with the signal, to the error amplifier, which is a comparator that compares
the signal with the required signal level.
The final voltage level is maintained by controlling the chopping frequency. This is
done by comparing the inputs given to the error amplifier, whose output helps to decide
whether to increase or decrease the chopping frequency. The PWM oscillator produces a standard
PWM wave of fixed frequency.

Functioning of SMPS :
The SMPS is mostly used where the switching of voltages is not a problem and where the
efficiency of the system really matters. There are a few points to be noted regarding SMPS.
 The SMPS circuit is operated by switching, and hence the voltages vary continuously.
 The switching device is operated in saturation or cut-off mode.
 The output voltage is controlled by the switching time of the feedback circuitry.
 Switching time is adjusted by adjusting the duty cycle.
 The efficiency of SMPS is high because, instead of dissipating excess power as heat, it
continuously switches its input to control the output.
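The duty-cycle control in the points above can be illustrated with the idealized textbook relation for a buck-type converter, Vout ≈ D × Vin, where D is the duty cycle. This is a standard lossless approximation, not a formula taken from this text:

```python
# Idealized buck-type SMPS stage: Vout ~= duty_cycle * Vin (losses ignored).
# The feedback loop in a real SMPS adjusts the duty cycle to hold Vout steady.

def buck_output(v_in, duty_cycle):
    """Ideal average output voltage for a buck converter."""
    assert 0.0 <= duty_cycle <= 1.0, "duty cycle is a fraction of the period"
    return v_in * duty_cycle

v_in = 12.0            # unregulated input voltage (illustrative value)
target = 5.0           # desired regulated output
duty = target / v_in   # duty cycle the control loop would settle at
print(f"duty cycle {duty:.2%} -> {buck_output(v_in, duty):.1f} V out")
```

This makes the efficiency point concrete: instead of burning the 7 V difference as heat (as a linear supply would), the switch simply spends ~42% of each period ON.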

Disadvantages :
There are a few disadvantages of SMPS, such as:
 The noise is present due to high frequency switching.
 The circuit is complex.
 It produces electromagnetic interference.

Advantages :
The advantages of SMPS include,
 The efficiency is as high as 80 to 90%
 Less heat generation; less power wastage.
 Reduced harmonic feedback into the supply mains.
 The device is compact and small in size.
 The manufacturing cost is reduced.
 It can provide the required number of output voltages.

Applications :
There are many applications of SMPS. They are used in the motherboard of computers,
mobile phone chargers, HVDC measurements, battery chargers, central power distribution, motor
vehicles, consumer electronics, laptops, security systems, space stations, etc.

Types of SMPS :
An SMPS is a switched mode power supply circuit designed to obtain a regulated DC output
voltage from an unregulated DC or AC voltage. There are four main types of SMPS:

 DC to DC Converter
 AC to DC Converter
 Flyback Converter
 Forward Converter
The AC to DC conversion part in the input section makes the difference between an AC to DC
converter and a DC to DC converter. The flyback converter is used for low-power applications.
There are also buck converters and boost converters among the SMPS types, which respectively
decrease or increase the output voltage depending upon the requirements. Other types of SMPS
include the self-oscillating flyback converter, buck-boost converter, Ćuk, SEPIC, etc.

Types of Monitors :

 Cathode Ray Tube (CRT) Monitors
 Flat Panel Monitors
 Touch Screen Monitors
 LED Monitors
 OLED Monitors
 DLP Monitors
 TFT Monitors
 Plasma Screen Monitors

Different Types of Mouse for Your Computer :

There are many different types of computer mice. The most common types are described below.

 Wired Mouse
 Wireless Mouse
 Bluetooth Mouse
 Trackball Mouse
 Laser Mouse
 Magic Mouse
 USB Mouse
 Vertical Mouse
 Gaming Mouse

Wired Mouse :
A wired mouse connects directly to your desktop or laptop, usually through a USB port, and
transmits information via the cord. The cord connection provides several key advantages. For
starters, wired mice provide fast response times, as the data is transmitted directly through
the cable. They also tend to be more accurate than other designs.

Wireless Mouse
Wireless mice transmit radio signals to a receiver connected to your computer. The
computer accepts the signal and decodes how the cursor was moved or what buttons were clicked.
While the freedom and range of wireless models are convenient, there are some drawbacks, such as
the slight delay introduced by the decoding process.

Bluetooth Mouse
Wireless mouse designs and Bluetooth mouse designs tend to look very similar, as neither
need a wired connection to operate. Most wireless mice models use a dongle that connects to your
PC, and the mouse communicates back and forth in that manner. A Bluetooth mouse, however,
utilizes an internal Bluetooth connection on your PC, allowing you to connect the mouse to
multiple devices at a time.

Monitor :
A monitor is an electronic output device that is also known as
a video display terminal (VDT) or a video display unit (VDU). It is
used to display images, text, video, and graphics information
generated by a connected computer via a computer's video card.
Although it is almost like a TV, its resolution is much higher than a
TV's. The first computer monitor was introduced on 1 March 1973 as
part of the Xerox Alto computer system.

Older monitors were built using a fluorescent screen and a Cathode Ray Tube (CRT), which made
them heavy and large, causing them to cover more space on the desk. Nowadays, monitors are made
using flat-panel display technology, commonly backlit with LEDs. These modern monitors take up
less space on the desk compared to older CRT displays.

History of Monitors
o In 1964, the Uniscope 300 machine included a built-in CRT display, which was not a true
computer monitor.
o A. Johnson invented the touch screen technology in 1965.
o On 1 March 1973, Xerox Alto computer was introduced, which had the first computer
monitor. This monitor included a monochrome display and used CRT technology.
o In 1975, George Samuel Hurst introduced the first resistive touch screen display, although
it was used only before 1982.
o In 1976, the Apple I and Sol-20 computer systems were introduced. These systems had a
built-in video port that allowed them to drive a video screen on a computer monitor.
o In 1977, James P. Mitchell invented LED display technology. But even 30 years later, these
monitors were not easily available to buy in the market.
o In June 1977, the Apple II was released, allowing for color display on a CRT monitor.
o In 1987, IBM released the IBM 8513, the first VGA monitor.
o In 1989, VESA defined the SVGA standard for the display of computers.

o In the late 1980s, color CRT monitors were able to support a 1024 x 768 resolution
display.
o Eizo Nanao manufactured the Eizo L66, the first LCD monitor for desktop computers, and
released it in the mid-1990s.
o In 1997, IBM, ViewSonic, and Apple started developing color LCD monitors that provided
better quality and resolution than CRT monitors.
o In 1998, color LCD monitors for desktop computers were manufactured by Apple.
o In 2003, LCD monitors outsold CRT monitors for the first time. By 2007, LCD monitors
consistently outsold CRTs and became the more popular type of computer monitor.
o In 2006, Jeff Han released the first interface-free, touch-based monitor at TED.
o In 2009, the LED monitor MultiSync EA222WMe was released by NEC. It was the first
LED-backlit monitor released by NEC.
o AMD and Intel announced the end of support for VGA in December 2010.
o In 2017, touch screen LCD monitors became more affordable for customers as prices
started to decrease.

Types of Monitors : There are several types of monitors; some are as follows:

1. Cathode Ray Tube (CRT) Monitors

It is a technology used in early monitors. It uses a beam of electrons to create an image on
the screen. It comprises guns that fire a beam of electrons inside the screen. The electron
beams repeatedly hit the surface of the screen. These guns are responsible for generating the
RGB (Red, Green, Blue) colors, and many other colors can be generated by combining these three.
Today's flat-panel monitors have replaced CRT monitors.

2. Flat Panel Monitors

These types of monitors are lightweight and take up less space. They consume less power as
compared to CRT monitors. These monitors are also safer, as they do not emit harmful radiation.
They are more expensive than CRTs. Flat-panel monitors are used in PDAs, notebook computers, and
cellular phones. These monitors are available in various sizes such as 15", 17", 18", 19", and
more. The display of a flat-panel monitor is made with the help of two plates of glass. These
plates contain a substance, which is activated in many ways.

Flat-panel monitor screens use two types of technologies, which are given below:
o Liquid Crystal Display: An LCD (liquid crystal display) screen contains a substance known
as liquid crystal. The particles of this substance are aligned so that the light from the
backlight behind the screen is either passed through or blocked, generating the image. A liquid
crystal display offers a clearer picture than a CRT display and emits less radiation.
Furthermore, it consumes less power and takes up less space than a CRT display.
o Gas Plasma Display: This display uses gas plasma technology, in which a layer of gas sits
between two plates of glass. When voltage is applied, the gas releases ultraviolet light. This
ultraviolet light makes the pixels on the screen glow and form an image. These displays are
available in sizes of up to 150 inches. Although it offers better colors than the LCD monitor,
it is more expensive, which is why it is used less.

3. Touch Screen Monitors :

These monitors are also known as input devices. They enable users to interact with the
computer by using a finger or stylus instead of a mouse or keyboard. When a user touches the
screen with a finger, an event occurs and is forwarded to the controller for processing. These
types of screens include pictures or words that help users interact with the computer. They
take input from users touching menus or icons presented on the screen.

There are different types of touch screen monitors; three common types are given below:
o Resistive Touch Screen: Generally, this screen includes a thin, electrically conductive
and resistive layer of metal. When the screen is pressed, a change in the electrical current
occurs and is sent to the controller. Nowadays, these screens are widely in use. These monitors
are reliable, as they are not affected by liquids or dust.
o Surface Wave Touch Screens: These monitors process input through ultrasonic waves. When
a user touches the screen, the wave is absorbed and processed by the computer. It is less
reliable, as these screens can be damaged by water or dust.
o Capacitive Touch Screen: This screen includes a cover with an electrically-charged
material. This material continuously passes current over the screen. It is mainly used with the
finger rather than a stylus. These monitors offer better clarity and are not damaged by dust.
Nowadays, capacitive touch screens are mostly used in smartphones.

4. LED Monitors
It is a flat-screen computer monitor; LED stands for light-emitting diode display. It is
lightweight and has a shallow depth. As the source of light, it uses a panel of LEDs.
Nowadays, a wide range of electronic devices, both large and small, such as laptop
screens, mobile phones, TVs, computer monitors, and tablets, use LED displays.
It is believed that James P. Mitchell invented the
first LED display. On 18 March 1978, the first prototype of an LED
display was demonstrated at the SEF (Science and
Engineering Fair) in Iowa. On 8 May 1978, it was shown again at the
SEF in Anaheim, California. This prototype received awards
from NASA and General Motors.

Advantages of LED Monitor:


o It includes a broader dimming range.
o It is a more reliable monitor.
o It is often less expensive.
o It consumes less power (around 20 watts) and runs at a lower temperature.
o It has a more dynamic contrast ratio.

Comparison between LCD and LED monitors:


Feature                  LCD Monitors      LED Monitors

Resolution               1920 x 1080       1920 x 1080

Brightness               250 cd/m2         250 cd/m2

Energy Star Certified    No                Yes

Weight                   2.4 kg            2.4 kg

Contrast Ratio           12,000,000:1      100,000,000:1

5. OLED Monitors
It is a new flat, light-emitting display technology that is more
efficient, brighter, and thinner than LCD, with better refresh rates and
contrast. It is made by placing a series of organic thin films between
two conductors. These displays do not need a backlight, as they are
emissive displays. Furthermore, they provide excellent image quality
and are used in tablets and high-end smartphones.

Nowadays, OLED is widely used in laptops, TVs, mobile phones, digital cameras, tablets, and VR
headsets. Driven by demand from mobile phone vendors, more than 500 million AMOLED screens
were produced in 2018. Samsung Display is the main producer of AMOLED screens. For example,
Apple used an AMOLED panel made by SDC in its 2018 iPhone XS (a 5.8" 1125 x 2436 display).
Additionally, the iPhone X uses the same kind of AMOLED display.

6. DLP Monitors
DLP stands for Digital Light Processing, a technology developed
by Texas Instruments and used for presentations by projecting
images from a monitor onto a big screen. Before DLP was
developed, most computer projection systems produced faded and
blurry images because they were based on LCD technology. DLP
technology utilizes a digital micromirror device: an array of tiny
mirrors housed on a special kind of microchip.

7. TFT Monitors
It is a type of LCD flat-panel display; TFT stands for thin-film
transistor. In TFT monitors, each pixel is controlled by one to four
transistors, and high-quality flat-panel LCDs use them. Although TFT-based
monitors provide the best resolution of all the flat-panel techniques, they
are also highly expensive. LCDs that use thin-film transistor (TFT)
technology are known as active-matrix displays, which offer higher quality
than older passive-matrix displays.

8. Plasma Screen Monitors


A plasma screen is a thin flat panel capable of hanging
on a wall like LCD and LED televisions. It is brighter than an
LCD display and thinner than a CRT display. It can display
either digital computer input or analog video signals, and it is
sometimes marketed as a 'thin-panel' display. Plasma displays
have wide viewing angles, high contrast ratios, and high refresh
rates, which reduce video blur. Additionally, they provide
better-quality pictures, supporting resolutions of up to
1920 x 1080.
Plasma screens also have some disadvantages, such as the
risk of screen burn-in, higher power consumption, loss of
brightness over time, and heavier weight.

Types of monitor connectors :


Computer monitors require one of the following kinds of connectors to connect with a computer.
o VGA
o Thunderbolt
o HDMI
o USB-C
o DVI
o DisplayPort

VGA: It is a popular display standard; the name stands for Video
Graphics Array or Video Graphics Adapter. It was introduced in 1987
after being developed by IBM. It is used to connect a computer to a
projector, monitor, or TV. It offers a 640 x 480 color display with 16
colors at a refresh rate of 60 Hz. At resolutions below 320 x 200, it can
display 256 colors. Because it uses analog signals, it can only drive
lower-quality, lower-resolution displays. VGA connectors and cables
are rarely found on today's projectors, monitors, computers, and TVs,
having been replaced by HDMI and DVI cables and connectors.

Thunderbolt: It is a hardware interface, marketed under the name
Light Peak and developed by Intel in collaboration with Apple. On
24 February 2011, it was first sold as part of a consumer product. It
is used for connecting peripheral devices such as a mouse, keyboard,
printer, scanner, and more to a computer. It can carry DC power and
transfer data over long distances using relatively inexpensive cables.
The first two versions of Thunderbolt can transfer data at rates of up
to 20 Gb per second. The third iteration uses a USB Type-C connector
and can transfer data at up to 40 Gb per second.
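To put these link rates in perspective, here is a rough back-of-the-envelope sketch (illustrative only; it ignores protocol overhead and the speed of the storage devices at either end, so real transfers take longer):

```python
def transfer_time_seconds(size_gigabytes, link_gigabits_per_sec):
    """Ideal transfer time over a link, ignoring protocol overhead
    and any storage-device bottlenecks at either end."""
    size_gigabits = size_gigabytes * 8  # 1 byte = 8 bits
    return size_gigabits / link_gigabits_per_sec

# Moving a 100 GB backup over Thunderbolt 3 (40 Gb/s) vs. the
# earlier 20 Gb/s versions:
print(transfer_time_seconds(100, 40))  # 20.0 seconds, best case
print(transfer_time_seconds(100, 20))  # 40.0 seconds, best case
```

Note the unit distinction the sketch relies on: link rates are quoted in giga*bits* per second, while file sizes are in giga*bytes*, hence the factor of 8.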

Materials are used to make a Thunderbolt cable :


Two types of Thunderbolt cables are available: one uses optical wiring, and the
other uses copper wiring. Although Thunderbolt cables were originally designed as fiber-optic
cables, those versions were released in small numbers. Copper wiring allows the cable to supply
power and is less expensive, which is why it became widely used. Intel has also explored
combining optical and copper wiring, pairing the power delivery of copper with the higher
bandwidth of optics.

HDMI: It is a cable and connector developed by several companies,


including Toshiba, Sony, Hitachi, and Philips. It stands for High
Definition Multimedia Interface. It has the ability to transmit the
high-bandwidth and high-quality streams of audio and video
between devices. It is used with Projector, HDTV, Blu-ray player, or
DVD player.

A single HDMI cable provides an easier way to connect two
devices for transmitting audio and video signals, replacing the
three-cable composite audio/video connection. Furthermore, it
can transmit up to 8 channels of digital audio along with standard,
enhanced, and high-definition video signals. HDMI cables are
available in lengths of up to 50 feet, although cables longer than 25 feet are not recommended
because signal loss or degradation may occur.

USB-C: It is a plug-and-play interface; USB stands for Universal Serial Bus. It allows the computer
to communicate with peripheral and other devices, and it can also send power to certain devices
such as tablets and smartphones, including charging their batteries. The first version of the
Universal Serial Bus was released in January 1996, and the technology was backed by Compaq,
Intel, Microsoft, and other companies.

Nowadays, many USB devices can be connected to a computer, such as a digital camera, keyboard,
microphone, mouse, printer, and scanner. USB connectors are available in different shapes and
sizes. The maximum length of a USB cable is 16 feet 5 inches for high-speed devices and
9 feet 10 inches for low-speed devices.

DVI: It is a video display interface; the name stands for Digital Visual
Interface. It is used to connect a video source to display devices at
resolutions as high as 2560 x 1600. Computer monitors and projectors
are the most common devices that use a DVI connection. It can also be
used by some TVs; however, HDMI is more common there because only
some DVI cables can transmit audio signals.

A DVI connector carries one of three designations based on the
signals it supports: DVI-D (digital only), DVI-A (analog only), or DVI-I
(both analog and digital). If your GPU and monitor support both VGA and
DVI, it is suggested to use a DVI cable: DVI always provides picture quality at least equal to
VGA, and better where possible.
DisplayPort: It is a digital audio and video interface, created by
VESA, that connects a computer to a projector, monitor, or TV.
There are two types of DisplayPort connections: the standard
DisplayPort and the Mini DisplayPort. They differ in size, but both
connection types transmit identical signals. Nowadays, VGA, HDMI,
and DVI remain the most common types of display connectors.

Difference between LCD and LED

The below table contains several differences between LCD and


LED:

LCD                                              LED

It stands for Liquid Crystal Display.            It stands for Light-Emitting Diode.

LCD monitors are not a subset of LED             LED monitors are a subset of LCD
monitors.                                        monitors.

It primarily uses fluorescent lights.            It mainly uses light-emitting diodes.

Fluorescent lights are usually located           Light-emitting diodes are located around
at the backside of the screen.                   the edges or backside of the screen.

LCDs are less energy-efficient than              LEDs are more energy-efficient and much
LEDs and are thicker.                            thinner than LCDs.

Its resolution is lower.                         Its resolution is higher.

Its contrast ratio is lower.                     Its contrast ratio is higher.

Direct current can reduce the lifespan           Direct current does not have any effect
of LCDs.                                         on LEDs.

The LCD display area is large.                   The LED display area is small.

The switching time of LCD is slow.               The switching time of LED is fast.

SMPS: Switched-Mode Power Supply / Switching Mode Power Supply

SMPS stands for Switched-Mode Power Supply. It is an
electronic power supply that uses a switching regulator to
convert electrical power efficiently. It is also known as a Switching Mode Power Supply. It is the
power supply unit (PSU) generally used in computers to convert mains voltage into the range the
computer accepts.

This device contains power-handling electronic components that convert electrical power
efficiently. A switched-mode power supply uses an effective power conversion technique to
reduce overall power loss.

SMPS working :
The SMPS uses switching regulators that switch the load current on and off to
regulate and stabilize the output voltage. The average of the voltage between the on and off
states produces the appropriate power for a device. Unlike the pass transistor in a linear power
supply, the switching element of an SMPS alternates between low-dissipation full-on and full-off
modes and spends very little time in the high-dissipation transitions, which minimizes wasted
energy.
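The idea that the average of the on and off voltage produces the desired output can be sketched numerically. The function below is a simplified model of an ideal buck-type (step-down) switching stage, not a full SMPS design; a real supply adjusts the duty cycle continuously through feedback rather than using a fixed value:

```python
def average_output_voltage(v_in, duty_cycle):
    """Ideal buck-type switching: the load sees v_in while the switch
    is on and 0 V while it is off, so the filtered average output is
    V_out = D * V_in, where D is the on-time fraction (0..1)."""
    if not 0.0 <= duty_cycle <= 1.0:
        raise ValueError("duty cycle must be between 0 and 1")
    return v_in * duty_cycle

# Deriving 12 V from a 48 V input requires a 25% duty cycle:
print(average_output_voltage(48, 0.25))  # 12.0
```

This also shows why so little energy is wasted: the switch is either fully on (little voltage across it) or fully off (no current through it), so the product of voltage and current in the switch stays small.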

Switched-Mode Power Supply (SMPS) – Definition


• Switched-Mode Power Supply (SMPS) is an electronic circuit that converts power using
switching devices that are turned on and off at high frequencies, and storage components such as
inductors or capacitors to supply power while the switching device is in its non-conducting state.

• The switched-mode power supply is also called a switch-mode power supply or switching-mode
power supply. Its efficiency is high, which is why it is used in a wide variety of electronic
equipment that requires a stable and efficient power supply.
• We can classify switched-mode power supply by the type of the input and output voltages.
The four major categories are as follows:
• AC to DC
• DC to DC
• DC to AC
• AC to AC
Working of switched-mode power supply

• Input rectifier stage: This stage converts AC into DC; a circuit with a DC input does not
require it. The rectifier produces unregulated DC, which is then passed through a filter.
• Inverter stage: This stage converts DC into AC by running it through a power oscillator. The DC
supply can come either directly from the input or from the rectifier stage described above. The
output transformer of the power oscillator is small, with few windings, and operates at a
frequency of tens of kHz or more.

• Output transformer: If the output must be isolated from the input, the inverted AC drives
the primary winding of a high-frequency transformer, which converts the voltage up or down to
the required output level on its secondary winding.

48 | G. Hemasundara Rao, M.C.A.,


• Output rectifier: If a DC output is wanted, the AC output from the transformer is
rectified.

• Regulation: A feedback circuit monitors the output voltage and compares it with a
reference voltage.
Classification of the switched-mode power supply

We can classify switched-mode power supplies by circuit topology, into two types:
isolated and non-isolated topologies.
• Isolated topologies: This type of topology includes a transformer, so it can produce an
output voltage higher or lower than the input by adjusting the turns ratio. For some topologies,
multiple windings can be placed on the transformer to produce several output voltages.
Some converters use the transformer itself for energy storage, while others use a separate
inductor.
Various isolated topologies are as follows:
• Fly-back converter
• Forward converter
• Push-pull converter
• Half-bridge converter
• Full-bridge converter

• Non-isolated topologies: Non-isolated converters are the simplest; the three basic types use a
single inductor for energy storage. Here we assume the input voltage is greater than zero; if it is
negative, the output voltage is negated.
Various non-isolated topologies are as follows:
• Buck topology
• Boost topology
• Buck-Boost topology
• Split-pi topology
• SEPIC topology
• Cuk topology
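The first three topologies in the list above have simple ideal transfer functions, sketched below. These are the textbook continuous-conduction-mode formulas assuming lossless components; real converters deviate from them, and the remaining topologies (Split-pi, SEPIC, Cuk) are not modeled here:

```python
def buck(v_in, d):
    """Buck (step-down): V_out = D * V_in."""
    return v_in * d

def boost(v_in, d):
    """Boost (step-up): V_out = V_in / (1 - D)."""
    return v_in / (1 - d)

def buck_boost(v_in, d):
    """Buck-boost (inverting, up or down): V_out = -V_in * D / (1 - D)."""
    return -v_in * d / (1 - d)

# All three topologies from a 12 V input at a 50% duty cycle:
print(buck(12, 0.5))        # 6.0
print(boost(12, 0.5))       # 24.0
print(buck_boost(12, 0.5))  # -12.0
```

Note how the boost formula grows without bound as the duty cycle D approaches 1, which is why practical boost converters limit their maximum duty cycle.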
Advantages of the switched-mode power supply

• Efficiency: Little energy is dissipated as heat because of the switching action, so efficiency
is high, typically from 68% to 90%.
• Compact: A switched-mode power supply is small, so it can be made more compact.

• Flexible technology: The technology provides high-efficiency voltage conversion in both
step-up ("Boost") and step-down ("Buck") applications.



• Its power density is high.
• It provides regulated and reliable outputs despite variations in the input supply voltage.
• It weighs less than a comparable linear power supply.
• It has a wide AC input voltage range.
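The efficiency figures above translate directly into waste heat that the chassis must shed. A minimal sketch (illustrative load and efficiency numbers; a real supply's efficiency also varies with load) compares a 90%-efficient SMPS with a hypothetical 50%-efficient supply:

```python
def dissipated_watts(output_watts, efficiency):
    """Heat dissipated by a supply of the given efficiency (0..1)
    while delivering output_watts to the load."""
    input_watts = output_watts / efficiency
    return input_watts - output_watts

# A 450 W load on a 90%-efficient SMPS vs. a 50%-efficient supply:
print(round(dissipated_watts(450, 0.90)))  # 50 W of waste heat
print(round(dissipated_watts(450, 0.50)))  # 450 W of waste heat
```

The nine-fold difference in waste heat is a big part of why SMPS units can be smaller and lighter: less heat means smaller heatsinks and less cooling.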
Disadvantages of the switched-mode power supply

• Noise: The biggest problem of the switched-mode power supply is the transient spikes that
occur during switching. These spikes can cause electromagnetic interference, which can affect
other electronic equipment nearby.

• External components: Although a switch-mode regulator can be designed around a single
integrated circuit, external components are typically required. In some designs the series switch
element may be incorporated within the integrated circuit, but where significant current is
drawn, the series switch will be an external component. These external components require
space and add to the cost.

• Expert design needed: Expert design work is required for the supply to meet the necessary
specifications.
• Price: The cost of a switched-mode power supply must be considered carefully before
designing the system. If additional filtering is required, it adds to the cost of the system.
Applications of the switched-mode power supply

• It is used in machine-tool industries.


• It is used for security systems.
• It is used in personal computers.
• It is used in closed circuit cameras.
• It is used in mobile phone chargers.
• It is used to support supplies with PLC’s.

COMPONENTS AND CIRCUITS INSIDE THE SMPS UNIT :



Design & Working
The working & design of SMPS is divided into various sections and stages.

1: Input Stage
The AC input supply, with a frequency of 50-60 Hz, is fed directly to the rectifier and
filter circuit. The rectifier output contains many variations, so the capacitance of the filter
capacitor must be high enough to handle the input fluctuations. Finally, the unregulated DC is
given to the central switching section of the SMPS to be regulated. This stage does not contain a
transformer to step down the input supply voltage.

2: Switching Section
It consists of fast switching devices such as a power transistor or a MOSFET, which switch
ON and OFF according to variations in the voltage. The output is given to the primary of the
transformer in this section.

The transformer used here is much smaller, lighter, and more effective than a mains-frequency
step-down transformer, and far more efficient than other step-down methods. Hence, the power
conversion ratio is higher.

3: Output Stage
The output derived from the switching section is rectified and filtered again, using a
rectification and filter circuit to get the desired DC voltage. A sample of the regulated output
voltage is then given to the control circuit.

4: Control Unit
This unit handles feedback and contains several sub-sections, briefly described below.

The inner control unit consists of an oscillator, amplifier, sensor, and so on. The sensor
senses the output signal and feeds it back to the control unit. All the signals are isolated from
each other so that sudden spikes do not affect the circuitry. The reference voltage is given as one
input, along with the sensed signal, to the error amplifier. The amplifier is a comparator that
compares the signal with the required signal level.

The next stage controls the chopping frequency. The final voltage level is set by
comparing the inputs given to the error amplifier, whose output helps decide whether to
increase or decrease the chopping frequency. The oscillator produces a standard PWM wave
with a fixed frequency.
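The comparator-plus-oscillator arrangement can be sketched as a toy model: the error amplifier's control voltage is compared against a fixed-frequency sawtooth ramp, and the switch output is high while the ramp is below the control level. This is an illustrative simulation of the comparison step only, not a circuit design:

```python
def pwm_sample(t, period, control, ramp_peak):
    """One comparator decision: output is high while the fixed-frequency
    sawtooth ramp is still below the control (error-amplifier) voltage."""
    ramp = (t % period) / period * ramp_peak  # sawtooth from 0 to ramp_peak
    return 1 if ramp < control else 0

# With the control voltage at 30% of the ramp peak, 30% of the
# samples are high, i.e. a 0.3 duty cycle:
samples = [pwm_sample(t, period=100, control=0.3, ramp_peak=1.0)
           for t in range(1000)]
print(sum(samples) / len(samples))  # 0.3
```

Raising the control voltage widens the pulses (higher duty cycle) and lowering it narrows them, which is exactly how the feedback loop nudges the output voltage up or down.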

The SMPS is mostly used where the switching of voltages is not a problem but the
efficiency of the system really matters. The design and working of every SMPS are based on
this same concept.

TYPES OF U.P.S [Uninterruptible Power Supply Systems] :

UPS systems can generally be classified as being one of these five types:

 Standby UPS
 Line-interactive UPS
 Standby-ferro UPS
 Double conversion online UPS
 Delta conversion online UPS

Note that these types are based on a demand for an AC power backup for the load.

Standby UPS
A standby UPS is a configuration in which a battery backup is charged from the line voltage
and feeds an inverter connected to a transfer switch. When the prime power is lost, the transfer
switch brings the standby power path online (represented in Figure 1 below as the lower path
with the dashed line). The inverter is generally not active until there is a power failure, hence
the term 'standby' for this type of UPS.

52 | G. Hemasundara Rao, M.C.A.,


The laptop example that was presented earlier might be considered a simplified type of
standby UPS where the desired output is DC instead of AC and no transfer switch is needed.

Line-interactive UPS
One of the most commonly used designs for an
uninterruptable power supply is the line-interactive UPS,
presented in Figure 2 below. With the line interactive
design, prime power is fed through a transfer switch to
an inverter and then out to the load. The inverter in this
design is always active and when prime power is on, it
operates in reverse to convert incoming AC power to DC
which keeps the backup battery charged. If the
line power goes out, the transfer switch opens and the
inverter works in the normal direction, taking DC
power from the battery and converting it to AC to supply the load.
Depending on the inverter design, this configuration can provide two independent power
paths for the load and eliminates the inverter as a single point of failure. So even if the inverter
were to fail, AC power can still flow to the output. This type of UPS offers low cost, high reliability,
and high efficiency, and can support low or high voltage applications.

Standby-ferro UPS
The standby-ferro UPS uses a three-winding transformer to couple the load to the power
source, as shown below in Figure 3. Prime power flows through a normally closed transfer
switch to the primary winding of the transformer, where it couples to the secondary winding and
then supplies power to the output load. The backup power path takes line voltage to a battery
charger that maintains the backup battery, which in turn connects to an inverter joined to the
third winding of the transformer.

When the prime power fails, the transfer switch opens and the inverter supplies power to
the load from the backup battery. In this design configuration, the inverter is in standby and
becomes active when the prime power fails, and the transfer switch is opened.

The transformer, while isolating the load from line-voltage transients, can
create output voltage distortion and transients of its own, possibly worse than those from a poor
AC connection. Additionally, the inefficiency of the ferro transformer can generate a significant
amount of heat, and these transformers are quite large and heavy, making standby-ferro UPS
systems bulky as a result.

Double-conversion online UPS

For applications above 10 kVA, the double
conversion online UPS is often the configuration of choice.
Diagrammed in Figure 4 below, the double conversion
online UPS is similar to the standby UPS except that the
inverter output is the primary power path, whereas in the
standby UPS it was the secondary or backup path. The AC
prime power feeds a rectifier (AC-to-DC converter), and the
resulting DC is fed straight back to an inverter that
regenerates AC power. A backup battery ties into the DC line and
is charged by the rectifier.

A static bypass switch is available but is not activated in the event of an AC prime power
failure; the battery seamlessly feeds the inverter if the AC mains fail, so the design has no
transfer time on power loss. Because the inverter and rectifier are continuously active in this
design, the electrical components see reduced reliability compared with other designs. From the
perspective of the electrical power delivered, however, this type of UPS offers ideal output
performance.

Delta conversion online UPS

The delta conversion online UPS is a relatively new design that was introduced to address
some of the drawbacks associated with the double conversion online UPS discussed previously.
As with the double-conversion design, the delta conversion online UPS has the inverter
supplying the output power to the load and hence is always operating.

A delta transformer couples the AC mains to the delta converter, which generates a DC
power output. As with the double-conversion design, the DC output serves to maintain the
charge on a backup battery and also feeds the inverter, which then produces an AC output that is
transmitted to the load. The prime power also has a feed that meets the inverter output.

Working Principle and Types – Offline and Online UPS Systems

UPS stands for Uninterruptible Power Supply. As the name implies, it is used to provide a
continuous power supply to the load through an automatic switching method, preventing damage
to devices and equipment or stopping a plant from going into shutdown. Many devices require a
safe shutdown to operate properly; otherwise, sudden power loss can damage the equipment.



A UPS takes the AC supply, stores energy in batteries, and feeds that power back to the load
device in case of a mains power failure. This is the basic working of a UPS.

Every UPS has a semiconductor static switch, which transfers the load between the
main AC supply and the batteries. Failure of this switch renders the UPS useless, because the
load can no longer be transferred.

The basic components of a UPS are: a rectifier (converting AC to DC to feed the batteries),
an inverter (converting DC to AC to feed the load), a battery (providing DC power to the
inverter), and a semiconductor switch for transferring the load between the mains AC supply
and the inverter supply.
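The rectifier-battery-inverter chain also makes it easy to estimate how long a UPS can carry a load. The sketch below uses a simple watt-hour budget with assumed figures (battery size and inverter efficiency are illustrative); real runtimes are shorter because of battery aging, discharge-rate effects, and low-voltage cutoffs:

```python
def backup_minutes(battery_volts, amp_hours, load_watts,
                   inverter_efficiency=0.85):
    """Rough UPS runtime estimate: battery energy (Wh) times inverter
    efficiency, divided by the load power, converted to minutes."""
    usable_wh = battery_volts * amp_hours * inverter_efficiency
    return usable_wh / load_watts * 60

# A single 12 V, 7 Ah battery (a size commonly found in small UPS
# units) feeding a 100 W load:
print(round(backup_minutes(12, 7, 100)))  # roughly 43 minutes
```

Doubling the load roughly halves the runtime, which is why UPS vendors publish runtime-versus-load curves rather than a single backup-time figure.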

Offline UPS Working Principle


The offline UPS is the simplest of all types. The load is normally
supplied directly from the mains AC supply, which is also used to
charge the battery bank. The battery feeds DC power to the inverter,
which converts it to an AC supply.

When the mains supply fails, the switch automatically cuts off
the mains and supplies the load from the inverter circuit.
The switching time is usually around 25 milliseconds.
This type of UPS is the least expensive; its main drawback is the
large switching time. The offline UPS is also called a standby UPS.

Online UPS Working Principle


The online UPS is a more complex type of UPS. The load is normally supplied
from the inverter. The AC supply charges the battery bank through a rectifier and also
supplies DC power to the inverter.

When the mains supply fails, the battery automatically supplies power to the inverter,
and the rectifier is bypassed.

An added feature is the static bypass switch. When the UPS fails (meaning failure of
the inverter, battery, or rectifier), this switch turns on and supplies AC mains power directly
to the load.

The switching time between the inverter circuit and the static bypass switch is usually
very low (around 4-5 milliseconds). Apart from this, the online UPS is very fast in operation
because the battery immediately supplies power to the inverter if the mains AC fails.



UPS Maintenance :
1. Put safety first. Life and limb should trump dollars every time. When you're dealing with
electrical power, you're always one small blunder away from serious injury or death. So when
dealing with a UPS (or any electrical system in the data center), make sure that safety is a top
priority, which includes observing manufacturer recommendations.

2. Schedule maintenance—and stick with it. Preventive maintenance shouldn’t be


something that you’ll just “get around to,” particularly given the potential costs of downtime. For
your UPS—as with other data center systems—you should schedule regular maintenance
activities (annual, semiannual or whatever the time frame) and stick with that schedule.

3. Keep detailed records. In addition to scheduling maintenance, you should also keep
records of the kinds of maintenance performed (for instance, cleaning, repair or replacement of
certain components) and the condition of the equipment during inspection. Keeping track of costs
can also be beneficial when you need to show the C-suite that a few dollars in maintenance costs
beats thousands or millions in downtime costs every time. A checklist of tasks, such as inspecting
batteries for corrosion, looking for excessive torque on connecting leads and so on, helps maintain
an orderly approach.

4. Perform regular inspections. Much of the above can apply to almost any part of the data
center: enforcing safety, scheduling maintenance and keeping good records are all excellent
practices regardless of the data center context. A few important UPS maintenance tasks include
the following:

o Visually inspect the area around the UPS and battery (or other energy-storage)
equipment for obstructions and proper cooling.
o Ensure no operating abnormalities or warnings have registered on the UPS panel, such
as an overload or a battery near discharge.
o Look over batteries for signs of corrosion or other defects.

5. Recognize that UPS components will fail. This may seem obvious: anything with a finite
probability of failure will fail eventually. Eaton notes that “critical [UPS] components such as
batteries and capacitors wear out from normal use,” so even if your utility provides perfect power,
your UPS room is perfectly clean and consistently at the proper temperature, and everything is
running ideally, components will still fail. Your (yes, your) UPS system requires maintenance.

6. Know whom to call when you need service or unscheduled maintenance. During daily
or weekly inspections, problems can arise that may not be able to wait until the next scheduled
maintenance. In these cases, knowing whom to call can save a lot of stress. That means you must
identify solid service providers that will be available when you need them (i.e., at odd hours).

7. Assign tasks. “Weren’t you supposed to check that last week?” “No, I thought you were.”
Avoid this mess: ensure that the appropriate personnel know their responsibilities when it comes
to UPS maintenance. Who checks the equipment weekly? Who calls to schedule (or perhaps adjust
the schedule for) annual maintenance with the service provider?

TROUBLESHOOTING UPS :



Problem: The UPS will not turn on, or there is no output.
o Cause: The unit has not been turned on.
  Solution: Press the ON button once to turn on the UPS. Note that the LCD screen may be lit
  even though the UPS is OFF.
o Cause: The UPS is not connected to utility power.
  Solution: Be sure that the power cable is securely connected to the unit and to the utility
  power supply.
o Cause: The input circuit breaker has tripped.
  Solution: Reduce the load on the UPS, disconnect nonessential equipment, and reset the
  circuit breaker.
o Cause: The unit shows very low or no input utility voltage.
  Solution: Check the utility power supply to the UPS by plugging in a table lamp. If the light
  is very dim, check the utility voltage.
o Cause: The battery connector plug is not securely connected.
  Solution: Be sure that all battery connections are secure.
o Cause: There is an internal UPS fault.
  Solution: Do not attempt to use the UPS. Unplug the UPS and have it serviced immediately.

Problem: The UPS is operating on battery while connected to utility power.
o Cause: The input circuit breaker has tripped.
  Solution: Reduce the load on the UPS, disconnect nonessential equipment, and reset the
  circuit breaker.
o Cause: There is very high, very low, or distorted input line voltage.
  Solution: Move the UPS to a different outlet on a different circuit. Test the input voltage
  with the utility voltage display. If it is acceptable to the connected equipment, reduce the
  UPS sensitivity.

Problem: The UPS is beeping.
o Cause: The UPS is in normal operation.
  Solution: The UPS display will indicate the operating mode that is causing the beeping
  (on battery, replace battery, etc.).

Problem: The UPS does not provide the expected backup time.
o Cause: The UPS battery is weak due to a recent outage or is near the end of its service life.
  Solution: Charge the battery. Batteries require recharging after extended outages and wear
  out faster when put into service often or when operated at elevated temperatures.
o Cause: The UPS is overloaded.
  Solution: Check the UPS load display. Unplug unnecessary equipment, such as printers.

Problem: Display interface LEDs flash sequentially.
o Cause: The UPS has been shut down remotely through software or an optional accessory card.
  Solution: None. The UPS will restart automatically when utility power is restored.

Problem: The Fault LED illuminates.
o Cause: Possible internal fault.
  Solution: Contact APC Support for further assistance. Take care to note the exact fault
  message on the LCD display.

Problem: All LEDs are illuminated and the UPS is plugged into a wall outlet.
o Cause: The UPS has shut down and the battery has discharged from an extended outage.
  Solution: None. The UPS will return to normal operation when the power is restored and
  the battery has a sufficient charge.

Problem: The Replace Battery LED is illuminated.
o Cause: The battery has a weak charge.
  Solution: Allow the battery to recharge for at least four hours, then perform a self-test.
  If the problem persists after recharging, replace the battery.
o Cause: The replacement battery is not properly connected.
  Solution: Be sure that the battery connector is securely connected.

Problem: The UPS displays a site wiring fault message.
o Cause: Detected wiring faults include a missing ground, hot-neutral polarity reversal, and
  an overloaded neutral circuit.
  Solution: Have a qualified electrician inspect the building wiring. (Applicable for 120 V
  units only.)
