Pc-Hw-Net - Unit-1
Text Books:
1. "Introduction to Data Communications and Networking", B. Forouzan, Tata McGraw-Hill
2. "Computer Networks", Tanenbaum, PHI
3. "PC and Clones: Hardware, Troubleshooting and Maintenance", B. Govindarajalu, Tata McGraw-Hill Publication
Information Technology : The term information technology refers to the subjects related to creating, managing, processing and exchanging information. It describes an industry that uses computers, networking, software programming, and other equipment and processes to store, process, retrieve, transmit and protect information.
Computers are used in almost all walks of life. Computers are widely used in several
fields, such as education, communication, entertainment, banking, medicine, weather forecasting
and scientific research.
Introduction to computers :
The mechanical calculating device ABACUS was principally used to add and subtract. It was probably used by the Babylonians as early as 2200 B.C.
In 1645, the French mathematician Blaise Pascal produced the Pascaline, recognized as the first mechanical calculator, but it was only capable of performing additions and subtractions.
In 1820, Charles Babbage, an English mathematician, gave much thought to the design of a device based on the method of "differences". In 1822 Babbage constructed a working model to illustrate the principle of the "Difference Engine". Babbage later proposed a design for another device, the "Analytical Engine". Charles Babbage is known as the "Father of the modern digital computer".
In 1890, Herman Hollerith had perfected his tabulating system and developed a machine called the census tabulator.
The first electronic computers appeared in the early 1940s.
Computer Definition: A computer is an electronic device which is used to input data, process the raw data as per the instructions, and display the result or output with high speed and accuracy.
(Or)
A computer is an electronic device which processes raw data according to the specified instructions and produces information as output with high speed and accuracy.
(Or)
A computer can be defined as an electronic device that accepts data, processes it at high speed according to a set of instructions provided to it, and produces the desired result.
Functions of a Computer: A computer accepts data (input), stores data (storage), processes the data, and produces output.
Characteristics of Computers:
A) Speed : A computer performs operations with high speed. It can process millions of instructions in a fraction of a second. Generally the speed of a computer is measured in terms of microseconds (10⁻⁶ s), nanoseconds (10⁻⁹ s) and even picoseconds (10⁻¹² s).
B) Accuracy : A computer always produces accurate results, given valid data and instructions. The computer performs calculations 100% error-free; computers themselves do not make mistakes.
D) Versatile : A computer is a versatile machine. Computers can do a variety of jobs depending upon the instructions given to them, and can perform multiple tasks simultaneously. The same machine works in different fields with different applications to perform various tasks. Examples are playing a game, printing a document, and sending e-mails.
E) Storage : A computer can store large volumes of data in small devices. We can store any kind of data in computer storage: text, pictures, sound, video, programs, etc.
F) Reliability : Computerized storage of data is much more reliable than manual storage. Computers have built-in diagnostic capabilities, which help in continuous monitoring of the system.
Computer Limitations:
f) A computer cannot make the judgments that a human being makes in day-to-day life.
Applications of Computers (or) Uses of Computers : Computers are widely used in several fields, such as business, education, communication, entertainment, banking, medicine, weather forecasting and scientific research.
BLOCK DIAGRAM OF COMPUTER (OR) COMPUTER ORGANIZATION: A computer can vary in size, speed and capacity depending on circuit or hardware design, but it has the same functional organization. All types of computers follow the same basic logical structure for converting raw data into information.
1. Input Unit
2. Central Processing Unit (CPU)
3. Output Unit
1. Input Unit: This unit contains devices with the help of which we enter data and instructions into the computer. An input device converts the data and instructions into binary (machine) form for acceptance by the computer. The most commonly used input device is the keyboard.
2. Central Processing Unit (C.P.U): The CPU is considered the "brain of the computer". It is also called the microprocessor. All major calculations and comparisons are made inside the CPU.
i) Arithmetic and Logical Unit (A.L.U): It is the unit where the actual execution of instructions takes place. All arithmetic calculations such as addition, subtraction, multiplication and division, as well as all logical (decision-making) operations, i.e. comparisons, are done in the ALU.
ii) Control Unit (C.U): The control unit controls all the activities of the computer; it controls each and every part of the computer. The control unit acts as a monitor that tells the other components what to do, when to do it, and how to do it.
iii) Memory Unit (M.U): This unit can also be called the storage unit; it is used to store and retrieve instructions and data. There are two types of memory.
3. Output Unit: The output unit receives the stored results from the memory unit and converts them into a form the user can understand. This unit supplies the converted results to the outside world through output devices. Some generally used output devices are monitors and printers.
CLASSIFICATION OF COMPUTERS: The classification may depend on size, technology, area of application, type of data processed, etc.
I. According to Purpose: According to purpose, computers are either general purpose or special purpose.
General Purpose Computers: These computers can be used for all general needs of all
environments & users. These are the versatile computers that can perform a variety of jobs for a
variety of Environments. Some general works are calculating accounts, writing letters, playing
games, watching movies and accessing Internet etc. Ex: personal computers.
Special Purpose Computers: These computers are specially designed to perform a specific task in a specific environment. That's why these computers are not versatile. They are designed, made and used for only a single job. Ex: Super computers.
II. According to Logical Technology: According to the logic used by the computer, it can be
classified into Analog, Digital, and Hybrid computers.
Analog Computers: Analog computers are used to measure the physical quantities like pressure,
temperature, speed etc. These computers accept input data in the form of signals and convert
them to numeric values. For example: A thermometer does not perform any calculations but
measures the temperature of the body.
Digital Computers: The computers which accept the data in the form of binary digits (bits)
representing high (1) or low (0) signals are called digital computers. These computers basically
work by counting and adding the binary digits.
Hybrid Computers: These computers have the features of both digital and analog computers.
These computers are useful in those environments, where both digital and analog signals are used
in processing.
Classification of Digital Computer Systems: According to size, application areas and capabilities, computers can be classified as Micro, Mini, Mainframe and Super computers.
1. Super computer
2. Mainframe computer
3. Mini computer
4. Micro computer
a) Desktop
b) Laptop
c) Hand-Held Models
Super Computers:
i. These computers are characterized as being the fastest, with very high processing speed,
very large size, most powerful and most expensive (millions of dollars).
ii. These computers contain multiple processors that work together to solve a single problem.
iii. These computers have huge main memories and secondary storage.
These computers are used in application areas such as:
a. Weather forecasting
b. Defense
d. Genetic Engineering
e. Geological Data
Mainframe Computers :
i. A mainframe computer is a very large computer and requires proper air conditioning.
ii. These computers are used in large organizations like government agencies, banks, etc.
iii. There are basically two types of terminals that can be used with mainframe systems that
are
a. Dumb terminals: Dumb terminals consist of only a monitor and a keyboard or mouse. They do not have their own CPU and memory; they use the mainframe system's CPU and memory.
b. Intelligent terminals: Intelligent terminals have their own processor and it can
perform some processing operations. These are used as servers on the World Wide Web.
Mini Computers :
ii. These computers are widely used in business, education, hospitals, etc. They are also used as servers in Local Area Networks (LANs).
iii. A large number of computers can be connected to a network with a Mini computer acting as the server.
Micro Computers :
i. The term 'Micro' suggests only the size, not the capability; microcomputers are capable of all Input-Output operations.
ii. In Micro computers, microprocessor performs the function of ALU and CU and it is connected
with primary, secondary memory and I/O devices.
iii. A Microcomputer is a computer designed for individual use. These include Desktop PC, Laptop
and handheld models Etc.
First Generation – Vacuum Tubes : The first generation computers used vacuum tubes as their basic electronic components.
1. These computers were physically large in size and required large rooms for installation.
2. Magnetic Drums were used for memory; Input was based on punched cards and paper
tape, the output was generated on printouts.
3. They also consumed large amounts of electricity and generated a lot of heat. These computers required continuous maintenance and large air-conditioners.
4. They lacked versatility and speed.
6. These computers could be programmed using machine language, which is the lowest-level
of programming language. (Since machine language was used, these computers were
difficult to program and use).
8. The UNIVAC and ENIAC Computers are examples of first generation computers.
Second Generation (1960s) – Transistor : The vacuum tubes of the first generation were replaced by transistors to arrive at the second generation.
1. Since Transistor is a small device, the physical size of computer was greatly reduced.
2. Computers became smaller, faster, cheaper, energy–efficient and more reliable than their
predecessors.
3. These were more portable and generated less heat.
4. Magnetic core was used as primary memory and magnetic disks as secondary storage devices. However, input was still based on punched cards and paper tape, and output was generated on printouts.
7. Assembly language was used to program computers. Hence, programming became more time-efficient and less cumbersome.
8. High-level programming languages such as COBOL and FORTRAN were developed.
Third Generation – Integrated Circuits (ICs) : Transistors were replaced by integrated circuits in the third generation.
1. They were smaller, cheaper, and more reliable than their predecessors.
3. Instead of punched cards and printouts, users interacted with third generation computers
through keyboards and monitors.
4. They had a higher processing speed than the second generation.
5. Since hardware rarely failed, the maintenance cost was quite low.
Fourth Generation – Microprocessors : The fourth generation is based on microprocessors.
1. The fourth generation computers became more powerful, compact, reliable and affordable.
2. These machines consume less power and generate negligible amount of heat, hence they
do not require air conditioning.
3. Hardware failure is negligible so minimum maintenance is required.
Fifth Generation : Computers based on Artificial Intelligence (AI) are still in development. Speed is extremely high in the fifth generation. SLSI, AI and parallel processing are being developed in this generation.
Mega Chips: Fifth generation computers will use Super Large Scale Integrated (SLSI) chips, which
will result in the Production of Microprocessors having millions of electronic components on a
single chip.
Parallel Processing: A computer using parallel processing accesses several instructions at once
and works on them at the same time through multiple central processing units.
TYPES OF LANGUAGES :
A computer programming language is any of various languages for expressing a set of detailed instructions for a digital computer. Such instructions can be executed directly when they are in the computer manufacturer-specific numerical form known as machine language, after a simple substitution process when expressed in a corresponding assembly language, or after translation from some "higher-level" language. Although there are many computer languages, relatively few are widely used.
Machine and assembly languages are “low-level,” requiring a programmer to manage
explicitly all of a computer’s features of data storage and operation. In contrast, high-level
languages shield a programmer from worrying about such considerations and provide a notation
that is more easily written and read by programmers.
Language types :
Machine and assembly languages :
A machine language consists of the numeric codes for the operations that a particular
computer can execute directly. The codes are strings of 0s and 1s, or binary digits (“bits”), which
are frequently converted both from and to hexadecimal (base 16) for human viewing and
modification. Machine language instructions typically use some bits to represent operations, such
as addition, and some to represent operands, or perhaps the location of the next instruction.
Machine language is difficult to read and write, since it does not resemble conventional
mathematical notation or human language, and its codes vary from computer to computer.
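To make this concrete, here is a small C sketch that unpacks a hypothetical 8-bit instruction word into an operation code and two operand fields. The bit layout (a 4-bit opcode and two 2-bit register operands) is invented for the example and does not belong to any real machine.

#include <stdio.h>

/* Toy illustration: how a machine instruction packs an operation and
   operands into bits. The layout here is hypothetical. */
int main(void) {
    unsigned int instr  = 0xB2;                 /* 1011 0010 in binary */
    unsigned int opcode = (instr >> 4) & 0xF;   /* top 4 bits: operation */
    unsigned int src    = (instr >> 2) & 0x3;   /* next 2 bits: source register */
    unsigned int dst    = instr & 0x3;          /* low 2 bits: destination register */

    printf("opcode=%u src=%u dst=%u\n", opcode, src, dst);
    return 0;
}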
Assembly language is one level above machine language. It uses short mnemonic codes for
instructions and allows the programmer to introduce names for blocks of memory that hold data.
One might thus write “add pay, total” instead of “0110101100101000” for an instruction that adds
two numbers.
FORTRAN :
The first important algorithmic language was FORTRAN (formula translation), designed in
1957 by an IBM team led by John Backus. It was intended for scientific computations with real
numbers and collections of them organized as one- or multidimensional arrays. Its control
structures included conditional IF statements, repetitive loops (so-called DO loops), and a GOTO
statement that allowed non-sequential execution of program code. FORTRAN made it convenient
to have subprograms for common mathematical operations, and built libraries of them.
FORTRAN was also designed to translate into efficient machine language. It was immediately
successful and continues to evolve.
ALGOL
ALGOL (algorithmic language) was designed by a committee of American and European
computer scientists during 1958–60 for publishing algorithms, as well as for doing computations.
Like LISP (described in the next section), ALGOL had recursive subprograms—procedures that
could invoke themselves to solve a problem by reducing it to a smaller problem of the same kind.
ALGOL introduced block structure, in which a program is composed of blocks that might contain
both data and instructions and have the same structure as an entire program. Block structure
became a powerful tool for building large programs out of small components.
ALGOL contributed a notation for describing the structure of a programming language, Backus–
Naur Form, which in some variation became the standard tool for stating the syntax (grammar) of
programming languages. ALGOL was widely used in Europe, and for many years it remained the
language in which computer algorithms were published. Many important languages, such
as Pascal and Ada (both described later), are its descendants.
C
The C programming language was developed in 1972 by Dennis Ritchie and Brian
Kernighan at the AT&T Corporation for programming computer operating systems. Its capacity to
structure data and programs through the composition of smaller units is comparable to that of
ALGOL. It uses a compact notation and provides the programmer with the ability to operate with
the addresses of data as well as with their values. This ability is important in systems
programming, and C shares with assembly language the power to exploit all the features of a
computer’s internal architecture. C, along with its descendant C++, remains one of the most
common languages.
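As a brief illustration of the point about operating on addresses as well as values, the following minimal C sketch manipulates a value both directly and through a pointer holding its address; the variable names are arbitrary.

#include <stdio.h>

/* Demonstrates operating on the address of data as well as its value. */
int main(void) {
    int pay = 100;
    int *p = &pay;            /* p holds the address of pay */

    *p = *p + 50;             /* modify pay through its address */
    printf("pay = %d (stored at %p)\n", pay, (void *)p);
    return 0;
}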
Business-oriented languages
COBOL
COBOL (common business oriented language) has been heavily used by businesses since its
inception in 1959. A committee of computer manufacturers and users and U.S. government
organizations established CODASYL (Committee on Data Systems and Languages) to develop and
oversee the language standard in order to ensure its portability across diverse systems.
COBOL uses an English-like notation—novel when introduced. Business computations organize
and manipulate large quantities of data, and COBOL introduced the record data structure for such
tasks. A record clusters heterogeneous data—such as a name, an ID number, an age, and an
address—into a single unit. This contrasts with scientific languages, in which homogeneous arrays
of numbers are common. Records are an important example of “chunking” data into a single
object, and they appear in nearly all modern languages.
SQL
SQL (Structured Query Language) is a language for specifying the organization
of databases (collections of records). Databases organized with SQL are called relational, because
SQL provides the ability to query a database for information that falls in a given relation. For
example, a query might be “find all records with both last name Smith and city New York.”
Commercial database programs commonly use an SQL-like language for their queries.
Education-oriented languages
BASIC
BASIC (beginner’s all-purpose symbolic instruction code) was designed at Dartmouth
College in the mid-1960s by John Kemeny and Thomas Kurtz. It was intended to be easy to learn
by novices, particularly non-computer science majors, and to run well on a time-sharing
computer with many users. It had simple data structures and notation and it was interpreted: a
BASIC program was translated line-by-line and executed as it was translated, which made it easy
to locate programming errors.
Its small size and simplicity also made BASIC a popular language for early personal computers. Its
recent forms have adopted many of the data and control structures of other contemporary
languages, which makes it more powerful but less convenient for beginners.
Pascal
About 1970 Niklaus Wirth of Switzerland designed Pascal to teach structured
programming, which emphasized the orderly use of conditional and loop control structures
without GOTO statements. Although Pascal resembled ALGOL in notation, it provided the ability to
define data types with which to organize complex information, a feature beyond the capabilities of
ALGOL as well as FORTRAN and COBOL. User-defined data types allowed the programmer to
introduce names for complex data, which the language translator could then check for correct
usage before running a program.
During the late 1970s and ’80s, Pascal was one of the most widely used languages for
programming instruction. It was available on nearly all computers, and, because of its familiarity,
clarity, and security, it was used for production software as well as for education.
Logo
Logo originated in the late 1960s as a simplified LISP dialect for education; Seymour
Papert and others used it at MIT to teach mathematical thinking to schoolchildren. It had a more
conventional syntax than LISP and featured “turtle graphics,” a simple method for
generating computer graphics. (The name came from an early project to program a turtlelike
robot.) Turtle graphics used body-centred instructions, in which an object was moved around a
screen by commands, such as “left 90” and “forward,” that specified actions relative to the current
position and orientation of the object rather than in terms of a fixed framework. Together with
recursive routines, this technique made it easy to program intricate and attractive patterns.
Hypertalk
Hypertalk was designed as “programming for the rest of us” by Bill Atkinson for Apple’s
Macintosh. Using a simple English-like syntax, Hypertalk enabled anyone to combine text,
graphics, and audio quickly into “linked stacks” that could be navigated by clicking with
a mouse on standard buttons supplied by the program. Hypertalk was particularly popular among
educators in the 1980s and early ’90s for classroom multimedia presentations. Although
Hypertalk had many features of object-oriented languages (described in the next section), Apple
did not develop it for other computer platforms and let it languish; as Apple’s market share
declined in the 1990s, a new cross-platform way of displaying multimedia left Hypertalk all but
obsolete (see the section World Wide Web display languages).
Object-oriented languages :
Object-oriented languages help to manage complexity in large programs. Objects package
data and the operations on them so that only the operations are publicly accessible and internal
details of the data structures are hidden. This information hiding made large-scale programming
easier by allowing a programmer to think about each part of the program in isolation. In addition,
objects may be derived from more general ones, “inheriting” their capabilities. Object-oriented
programming began with the Simula language (1967), which added information hiding to ALGOL.
C++ :
The C++ language, developed by Bjarne Stroustrup at AT&T in the mid-1980s,
extended C by adding objects to it while preserving the efficiency of C programs. It has been one of
the most important languages for both education and industrial programming. Large parts of
many operating systems were written in C++. C++, along with Java, has become popular for
developing commercial software packages that incorporate multiple interrelated applications. C++
is considered one of the fastest languages and is very close to low-level languages, thus allowing
complete control over memory allocation and management. This very feature and its many other
capabilities also make it one of the most difficult languages to learn and handle on a large scale.
C#
C# (pronounced C sharp like the musical note) was developed by Anders Hejlsberg at
Microsoft in 2000. C# has a syntax similar to that of C and C++ and is often used for developing
games and applications for the Microsoft Windows operating system.
Ada
Ada was named for Augusta Ada King, countess of Lovelace, who was an assistant to the
19th-century English inventor Charles Babbage, and is sometimes called the first computer
programmer. Ada, the language, was developed in the early 1980s for the U.S. Department of
Defense for large-scale programming. It combined Pascal-like notation with the ability to package
operations and data into independent modules. Its first form, Ada 83, was not fully object-
oriented, but the subsequent Ada 95 provided objects and the ability to construct hierarchies of
them. While no longer mandated for use in work for the Department of Defense, Ada remains an
effective language for engineering large programs.
Java
In the early 1990s Java was designed by Sun Microsystems, Inc., as a programming
language for the World Wide Web (WWW). Although it resembled C++ in appearance, it was
object-oriented. In particular, Java dispensed with lower-level features, including the ability to
manipulate data addresses, a capability that is neither desirable nor useful in programs for
distributed systems. In order to be portable, Java programs are translated by a Java Virtual
Machine specific to each computer platform, which then executes the Java program. In addition to
adding interactive capabilities to the Internet through Web “applets,” Java has been widely used
for programming small and portable devices, such as mobile telephones.
Visual Basic
Visual Basic was developed by Microsoft to extend the capabilities of BASIC by adding
objects and “event-driven” programming: buttons, menus, and other elements of graphical user
interfaces (GUIs). Visual Basic can also be used within other Microsoft software to program small
routines. Visual Basic was succeeded in 2002 by Visual Basic .NET, a vastly different language
based on C#, a language with similarities to C++.
Python
The open-source language Python was developed by Dutch programmer Guido van Rossum
in 1991. It was designed as an easy-to-use language, with features such as using indentation
instead of brackets to group statements. Python is also a very compact language, designed so that
complex jobs can be executed with only a few statements. In the 2010s, Python became one of the
most popular programming languages, along with Java and JavaScript.
Declarative languages
Declarative languages, also called nonprocedural or very high level, are programming
languages in which (ideally) a program specifies what is to be done rather than how to do it. In
such languages there is less difference between the specification of a program and its
implementation than in the procedural languages described so far. The two common kinds of
declarative languages are logic and functional languages.
Logic programming languages, of which PROLOG (programming in logic) is the best known,
state a program as a set of logical relations (e.g., a grandparent is the parent of a parent of
someone). Such languages are similar to the SQL database language. A program is executed by an
“inference engine” that answers a query by searching these relations systematically to
make inferences that will answer a query. PROLOG has been used extensively in natural language
processing and other AI programs.
Scripting languages
Scripting languages are sometimes called little languages. They are intended to solve
relatively small programming problems that do not require the overhead of data declarations and
other features needed to make large programs manageable. Scripting languages are used for
writing operating system utilities, for special-purpose file-manipulation programs, and, because
they are easy to learn, sometimes for considerably larger programs.
Perl was developed in the late 1980s, originally for use with the UNIX operating system. It
was intended to have all the capabilities of earlier scripting languages. Perl provided many ways
to state common operations and thereby allowed a programmer to adopt any convenient style. In
the 1990s it became popular as a system-programming tool, both for small utility programs and
for prototypes of larger ones. Together with other languages discussed below, it also became
popular for programming computer Web “servers.”
Document formatting languages
Document formatting languages specify the organization of printed text and graphics. They
fall into several classes: text formatting notation that can serve the same functions as a word
processing program, page description languages that are interpreted by a printing device, and,
most generally, markup languages that describe the intended function of portions of a document.
TeX
TeX was developed during 1977–86 as a text formatting language by Donald Knuth,
a Stanford University professor, to improve the quality of mathematical notation in his books. Text
formatting systems, unlike WYSIWYG (“What You See Is What You Get”) word
processors, embed plain text formatting commands in a document, which are then interpreted by
the language processor to produce a formatted document for display or printing. TeX marks italic
text, for example, as {\it this is italicized}, which is then displayed as this is italicized.
PostScript
PostScript is a page-description language developed in the early 1980s by Adobe Systems
Incorporated on the basis of work at Xerox PARC (Palo Alto Research Center). Such languages
describe documents in terms that can be interpreted by a personal computer to display the
document on its screen or by a microprocessor in a printer or a typesetting device.
SGML
SGML (standard generalized markup language) is an international standard for the
definition of markup languages; that is, it is a meta language. Markup consists of notations called
tags that specify the function of a piece of text or how it is to be displayed.
SGML is used to specify DTDs (document type definitions). A DTD defines a kind of
document, such as a report, by specifying what elements must appear in the document—e.g.,
<Title>—and giving rules for the use of document elements, such as that a paragraph may appear
within a table entry but a table may not appear within a paragraph.
Web scripting :
Web pages marked up with HTML or XML are largely static documents. Web scripting can
add information to a page as a reader uses it or let the reader enter information that may, for
example, be passed on to the order department of an online business. CGI (common gateway
interface) provides one mechanism; it transmits requests and responses between the reader’s
Web browser and the Web server that provides the page. The CGI component on the server
contains small programs called scripts that take information from the browser system or provide
it for display. A simple script might ask the reader’s name, determine the Internet address of the
system that the reader uses, and print a greeting. Scripts may be written in any programming
language, but, because they are generally simple text-processing routines, scripting languages like
PERL are particularly appropriate.
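As a rough sketch of such a script (written here in C rather than a scripting language), the program below follows the CGI convention of printing a header line, a blank line, and a body, and reads the visitor's Internet address from the REMOTE_ADDR environment variable that CGI-capable web servers conventionally set; run outside a server, it simply reports the address as unknown.

#include <stdio.h>
#include <stdlib.h>

/* Minimal CGI-style greeting script: the web server sets REMOTE_ADDR. */
int main(void) {
    const char *addr = getenv("REMOTE_ADDR");

    printf("Content-Type: text/plain\r\n\r\n");   /* CGI response header */
    printf("Hello! You are visiting from %s\n",
           addr ? addr : "(unknown)");
    return 0;
}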
Elements of programming :
Despite notational differences, contemporary computer languages provide many of the
same programming structures. These include basic control structures and data structures. The
former provide the means to express algorithms, and the latter provide ways to organize
information.
Control structures :
Programs written in procedural languages, the most common kind, are like recipes, having
lists of ingredients and step-by-step instructions for using them. The three basic control structures
in virtually every procedural language are:
1. Sequence—combine the liquid ingredients, and next add the dry ones.
2. Conditional—if the tomatoes are fresh then simmer them, but if canned, skip this step.
3. Iterative—beat the egg whites until they form soft peaks.
Sequence is the default control structure; instructions are executed one after another. They
might, for example, carry out a series of arithmetic operations, assigning results to variables, to
find the roots of a quadratic equation ax² + bx + c = 0. The conditional IF-THEN or IF-THEN-ELSE
control structure allows a program to follow alternative paths of execution. Iteration, or looping,
gives computers much of their power. They can repeat a sequence of steps as often as necessary,
and appropriate repetitions of quite simple steps can solve complex problems.
These control structures can be combined. A sequence may contain several loops; a loop may
contain a loop nested within it, or the two branches of a conditional may each contain sequences
with loops and more conditionals. In the “pseudocode” used in this article, “*” indicates
multiplication and “←” is used to assign values to variables. The following programming fragment
employs the IF-THEN structure for finding one root of the quadratic equation, using the quadratic
formula:
The quadratic formula gives a root as ROOT = (−b + √(b² − 4ac)) / (2a). It assumes that a is nonzero and that the discriminant (the portion within the square root sign) is not negative (in order to obtain a real number root). Conditionals check those assumptions:
IF a = 0 THEN
ROOT ← −c/b
ELSE
DISCRIMINANT ← b*b − 4*a*c
IF DISCRIMINANT ≥ 0 THEN
ROOT ← (−b + SQUARE_ROOT(DISCRIMINANT))/(2*a)
ENDIF
ENDIF
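The same logic can be rendered in C. The sketch below uses sample coefficients chosen only for illustration and must be compiled with the math library (e.g. with -lm).

#include <math.h>
#include <stdio.h>

/* One real root of a*x*x + b*x + c = 0, following the pseudocode above. */
int main(void) {
    double a = 1.0, b = -3.0, c = 2.0;    /* sample coefficients */

    if (a == 0.0) {
        printf("root = %f\n", -c / b);    /* equation is linear */
    } else {
        double discriminant = b * b - 4.0 * a * c;
        if (discriminant >= 0.0)
            printf("root = %f\n", (-b + sqrt(discriminant)) / (2.0 * a));
        else
            printf("no real root\n");
    }
    return 0;
}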
Data structures
Whereas control structures organize algorithms, data structures organize information. In
particular, data structures specify types of data, and thus which operations can be performed on
them, while eliminating the need for a programmer to keep track of memory addresses. Simple
data structures include integers, real numbers, Booleans (true/false), and characters or character
strings. Compound data structures are formed by combining one or more data types.
Record components, or fields, are selected by name; for example, E.SALARY might
represent the salary field of record E. An array element is selected by its position or index; A[10] is
the element at position 10 in array A. A FOR loop (definite iteration) can thus run through an array
with index limits (FIRST TO LAST in the following example) in order to sum its elements:
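The example referred to here does not survive in this text; the following C sketch reconstructs the idea, showing a record (a struct whose fields are selected by name), an array whose elements are selected by index, and a definite FOR loop that sums the elements between the limits FIRST and LAST. All names and values are illustrative.

#include <stdio.h>

#define FIRST 0
#define LAST  4

/* A record clusters heterogeneous data into one unit. */
struct Employee {
    char   name[20];
    double salary;            /* selected by name, e.g. e.salary */
};

int main(void) {
    int a[] = {10, 20, 30, 40, 50};   /* elements selected by index: a[1] is 20 */
    int sum = 0;

    /* Definite iteration over the index limits FIRST..LAST. */
    for (int i = FIRST; i <= LAST; i++)
        sum += a[i];
    printf("sum = %d\n", sum);

    struct Employee e = {"Smith", 45000.0};
    printf("%s earns %.2f\n", e.name, e.salary);
    return 0;
}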
Arrays and records have fixed sizes. Structures that can grow are built
with dynamic allocation, which provides new storage as required. These data structures have
components, each containing data and references to further components (in machine terms, their
addresses). Such self-referential structures have recursive definitions.
Abstract data types (ADTs) are important for large-scale programming. They package
data structures and operations on them, hiding internal details. For example, an ADT table
provides insertion and lookup operations to users while keeping the underlying structure,
whether an array, list, or binary tree, invisible. In object-oriented languages, classes are ADTs and
objects are instances of them.
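A minimal sketch of such a table ADT in C might look as follows: users call only the insert and lookup operations, while the underlying representation (a fixed-size array here) stays hidden behind them. The names, sizes, and array representation are assumptions for the example, and bounds checking is omitted for brevity.

#include <stdio.h>
#include <string.h>

#define MAX 16
static char keys[MAX][32];    /* hidden internal representation */
static int  values[MAX];
static int  count = 0;

void table_insert(const char *key, int value) {
    strncpy(keys[count], key, 31);   /* no bounds checking, for brevity */
    values[count++] = value;
}

int table_lookup(const char *key, int *out) {
    for (int i = 0; i < count; i++)
        if (strcmp(keys[i], key) == 0) { *out = values[i]; return 1; }
    return 0;                        /* not found */
}

int main(void) {
    int v;
    table_insert("pay", 100);
    if (table_lookup("pay", &v))
        printf("pay = %d\n", v);
    return 0;
}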
The computer understands only machine code; it cannot directly execute assembly or high-level language. There must be a program to convert the source code into object code so that the computer can understand it. This is the job of the language translator: the programmer creates source code, and the translator converts it to machine-readable form (object code).
Compiler
The compiler is a language translator program that converts code written in a human-readable language, such as a high-level language, to a low-level computer language such as assembly language, machine code, or object code, and then produces an executable program.
In the process of compiling, the code is first sent to a lexer, which scans the source code, splits it into tokens kept in computer memory, and sends them to the parser, where patterns are recognized and converted into an AST (abstract syntax tree) that describes the data structure of the program. The optimizer (if required) then optimizes away unused variables, unreachable code, and so on; the code generator converts the AST into machine instruction code specific to the target platform; and the linker puts all the code together into an executable program. There is also an error handler in all the phases, which handles and reports errors.
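The toy C program below sketches only the first of these phases: a lexer that scans a line of source text and splits it into identifier, number, and operator tokens. The later phases (parser, AST, optimizer, code generator, linker) are omitted, and the input string is invented for the example.

#include <ctype.h>
#include <stdio.h>

/* A toy lexer: scans source text and prints one token per line. */
int main(void) {
    const char *p = "total = pay + 42;";

    while (*p != '\0') {
        if (isspace((unsigned char)*p)) { p++; continue; }
        if (isalpha((unsigned char)*p)) {            /* identifier token */
            printf("IDENT: ");
            while (isalnum((unsigned char)*p)) putchar(*p++);
            putchar('\n');
        } else if (isdigit((unsigned char)*p)) {     /* number token */
            printf("NUMBER: ");
            while (isdigit((unsigned char)*p)) putchar(*p++);
            putchar('\n');
        } else {                                     /* single-character operator */
            printf("OP: %c\n", *p++);
        }
    }
    return 0;
}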
Examples of compilers include:
Borland Turbo C
Javac
GNU compiler
Xcode
Roslyn
Visual C#
CLISP
Oracle Fortran
Characteristics of Compiler
Source code is converted to machine code before runtime, so code execution at runtime is faster.
It takes a lot of time to analyze and process the program; the compiling process is complicated, but program execution is fast.
It cannot create an executable program when there is a compile-time error in the program.
Advantages of Compiler
The whole program is compiled, and it seems to be more secure than interpreted code: once code is compiled, viewing the compiled code does not reveal the source.
Compiled code is faster because compiled code is near to machine code.
The program can run directly from object code and doesn't need the source code.
The compiler translates commands into machine-language binaries, so no other program or application needs to be installed to execute the executable file. The only requirement is that the software has to be compiled for a particular operating system; if an application is compiled for a particular OS architecture, the user simply needs an OS that operates on the same architecture.
Disadvantages of Compiler
For the executable file to be created, the source code must be error-free.
For a large application, it may take a longer time to compile the code as compared to small programs.
When you compile an application, it creates a new compiled file which takes additional memory and space.
Interpreter
The interpreter converts high-level language to machine-level language, as the compiler does, but in a different way. With an interpreter, the source code is transformed into machine code at run time; the compiler, by contrast, converts the code to machine code, i.e. an executable file, before the program starts.
The interpreter executes the source code directly, line by line. It takes one line of source code, translates it and runs it on the processor, then moves to the next line, translates and runs it, and repeats until the program is finished.
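A toy sketch of this loop in C: each line of a hypothetical one-instruction language (lines of the form "ADD x y") is read, translated, and executed before the next line is taken up.

#include <stdio.h>
#include <string.h>

/* Toy line-by-line interpreter for a hypothetical "ADD x y" language. */
int main(void) {
    char line[128], op[16];
    int x, y;

    while (fgets(line, sizeof line, stdin) != NULL) {      /* take one line */
        if (sscanf(line, "%15s %d %d", op, &x, &y) == 3    /* translate it  */
            && strcmp(op, "ADD") == 0)
            printf("%d\n", x + y);                         /* run it, repeat */
        else
            printf("cannot interpret: %s", line);
    }
    return 0;
}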
Some of the popular interpreted languages are PHP, Python, JavaScript and Ruby.
Characteristics of Interpreter
Advantage of Interpreter
Some of the main advantages of interpreters are as follows:
Disadvantage of Interpreter
Some of the main disadvantages of Interpreter are as follows:
Interpreter is slower
As interpreted code can easily be read by humans, we can say data and code are insecure.
To run the code, a client or anybody else who has access to the shared source code must
have an interpreter installed on their system.
Assembler
An assembler converts code written in assembly language into machine-level code. Assembly language contains machine opcode mnemonics, so assemblers translate from mnemonics to direct instructions in a 1:1 relation.
The computer understands machine code only, but programming in machine language is difficult for developers. So low-level assembly language (ASM), designed for a specific processor family, represents the instructions as symbolic codes.
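The 1:1 mnemonic-to-opcode mapping can be pictured with a small C lookup table; the mnemonics and opcode values below are hypothetical and do not come from any real processor family.

#include <stdio.h>

/* Each mnemonic translates directly to exactly one machine opcode. */
struct MapEntry { const char *mnemonic; unsigned char opcode; };

static const struct MapEntry table[] = {
    {"LOAD",  0x01},
    {"ADD",   0x02},
    {"STORE", 0x03},
};

int main(void) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        printf("%-6s -> 0x%02X\n", table[i].mnemonic, table[i].opcode);
    return 0;
}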
Characteristics of Assembler
As a 1:1 relationship exists between mnemonics and direct instructions, translating is very fast.
It requires less amount of memory and execution time.
It does complex hardware-specific jobs in an easy way.
It is ideal for a time-critical job.
Compiler vs Interpreter
1. A compiler translates a high-level language program into machine code before runtime; an interpreter translates it into machine code at runtime.
2. The compiler needs a lot of time to analyze the whole source code; the interpreter takes less time to analyze the source code.
3. With a compiler, overall program execution time is relatively faster; with an interpreter it is relatively slower.
4. The compiler generates error messages only after scanning the whole program, and all the errors are shown at the same time; the interpreter shows one error at a time and, once it is solved, interpreting the code again shows the next error if one exists.
5. With a compiler, debugging is relatively more difficult, since an error can be anywhere in the code; with an interpreter it is easier to debug, since translation continues until an error is found and only one error is shown at a time.
6. The compiler generates intermediate code; the interpreter does not generate intermediate code.
7. The compiler compiles the code before execution; with an interpreter, translation and execution take place simultaneously.
8. The compiler's memory requirements are greater, because intermediate object code is created and resides in memory; the interpreter requires less memory, as it does not create intermediate object code.
9. With a compiler, error detection and removal are difficult; with an interpreter they are easy.
10. Programming languages that use compilers: C, C++, Java, C#, Scala. Programming languages that use interpreters: Python, Perl, Ruby, PHP.
11. Compiler software is larger in size; interpreter software is generally smaller in size.
12. The compiler's focus is compile once, run anytime; the interpreter's focus is interpreting every time.
13. With a compiler, the program doesn't run until all the errors are fixed; an interpreter runs the code and stops only when an error is found.
19th Century
1801 – Joseph Marie Jacquard, a weaver and businessman from France, devised a loom that
employed punched wooden cards to automatically weave cloth designs.
1822 – Charles Babbage, a mathematician, invented the steam-powered calculating machine capable
of calculating number tables. The “Difference Engine” idea failed owing to a lack of technology at the
time.
1843 – The world's first computer program was written by Ada Lovelace, an English mathematician. Lovelace also included a step-by-step description of how to compute Bernoulli numbers using Babbage's machine.
1890 – Herman Hollerith, an inventor, creates the punch card technique used to tabulate the 1890 U.S. census. He would go on to start the corporation that would become IBM.
21st Century
2000 – The USB flash drive is first introduced in 2000. They were speedier and had more storage
space than other storage media options when used for data storage.
2001 – Apple releases Mac OS X, later renamed OS X and eventually simply macOS, as the successor
to its conventional Mac Operating System.
2003 – Customers could purchase AMD’s Athlon 64, the first 64-bit CPU for consumer computers.
2004 – Facebook began as a social networking website.
2005 – Google acquires Android, a mobile phone OS based on Linux.
2006 – Apple’s MacBook Pro was available. The Pro was the company’s first dual-core, Intel-based
mobile computer.
Types of Computers :
1. Analog Computers – Analog computers are built with various components such as gears and levers, with no electrical components. One advantage of analog computation is that designing and building an analog computer to tackle a specific problem can be quite straightforward.
2. Digital Computers – Information in digital computers is represented in discrete form,
typically as sequences of 0s and 1s (binary digits, or bits). A digital computer is a system or
gadget that can process any type of information in a matter of seconds. Digital computers are
categorized into many different types. They are as follows:
a. Mainframe computers – It is a computer that is generally utilized by large enterprises for
mission-critical activities such as massive data processing. Mainframe computers were
distinguished by massive storage capacities, quick components, and powerful
computational capabilities. Because they were complicated systems, they were managed
by a team of systems programmers who had sole access to the computer. These machines
are now referred to as servers rather than mainframes.
b. Supercomputers – The most powerful computers to date are commonly referred to as
supercomputers. Supercomputers are enormous systems that are purpose-built to solve
complicated scientific and industrial problems. Quantum mechanics, weather forecasting,
oil and gas exploration, molecular modelling, physical simulations, aerodynamics, nuclear
fusion research, and cryptanalysis are all done on supercomputers.
c. Minicomputers – A minicomputer is a type of computer that has many of the same
features and capabilities as a larger computer but is smaller in size. Minicomputers, which
were relatively small and affordable, were often employed in a single department of an
organization and were often dedicated to a specific task or shared by a small group.
d. Microcomputers – A microcomputer is a small computer that is based on a
microprocessor integrated circuit, often known as a chip. A microcomputer is a system
that incorporates at a minimum a microprocessor, program memory, data memory, and
input-output system (I/O). A microcomputer is now commonly referred to as a personal
computer (PC).
e. Embedded processors – These are miniature computers that control electrical and
mechanical processes with basic microprocessors. Embedded processors are often simple
in design, have limited processing capability and I/O capabilities, and need little power.
Ordinary microprocessors and microcontrollers are the two primary types of embedded
processors. Embedded processors are employed in systems that do not require the
computing capability of traditional devices such as desktop computers, laptop computers,
or workstations.
Computer Memory :
A computer is an electronic device that accepts data, processes that data, and gives the desired output. It performs programmed computation with great accuracy and high speed. In other words, the computer takes data as input, stores the data/instructions in memory (to use them when required), processes the data and converts it into useful information, and finally gives the output.
What is Memory :
Computer memory is just like the human brain. It is used to store data/information and instructions. It is a data storage unit or data storage device where the data to be processed and the instructions required for processing are stored. Both the input and the output can be stored here.
Computer memory is of three types:
Primary memory
Secondary memory
Cache memory
1. Primary Memory: It is also known as the main memory of the computer system. It is used
to store data and programs or instructions during computer operations. It uses
semiconductor technology and hence is commonly called semiconductor memory. Primary
memory is of two types:
(i) RAM (Random Access Memory): It is a volatile memory; it retains information only while the power supply is on. If the power supply fails or is interrupted, all the data and information in this memory are lost. RAM is used for booting up or starting the computer, and it temporarily stores the programs and data to be executed by the processor. RAM is of two types:
S RAM (Static RAM): It uses transistors and the circuits of this memory are capable of
retaining their state as long as the power is applied. This memory consists of the number of flip
flops with each flip flop storing 1 bit. It has less access time and hence, it is faster.
D RAM (Dynamic RAM): It uses capacitors and transistors and stores the data as a charge on
the capacitors. They contain thousands of memory cells. It needs refreshing of charge on
capacitor after a few milliseconds. This memory is slower than S RAM.
EEPROM (Electrically Erasable Programmable Read-Only Memory): Here the written contents can be erased electrically. You can erase and reprogram an EEPROM up to about 10,000 times. Erasing and programming take very little time, nearly 4–10 ms (milliseconds). Any area in an EEPROM can be wiped and programmed selectively.
2. Secondary Memory: It is also known as auxiliary memory and backup memory. It is a non-
volatile memory and used to store a large amount of data or information. The data or information
stored in secondary memory is permanent, and it is slower than primary memory. A CPU cannot
access secondary memory directly. The data/information from the auxiliary memory is first
transferred to the main memory, and then the CPU can access it.
(i) Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin, magnetic
coating on it that is used for magnetic recording. Bits are recorded on tape as magnetic patches
called RECORDS that run along many tracks. Typically, 7 or 9 bits are recorded concurrently. Each
track has one read/write head, which allows data to be recorded and read as a sequence of
characters. It can be stopped, started moving forward or backward, or rewound.
(ii) Magnetic Disks: A magnetic disc is a circular metal or plastic plate coated with magnetic material. The disc is used on both sides. Bits are stored as magnetized spots on the disc surface. Hard discs are discs that are permanently attached and cannot be removed by a single user.
(iii) Optical Disks: It’s a laser-based storage medium that can be written to and read. It is
reasonably priced and has a long lifespan. The optical disc can be taken out of the computer by
occasional users. Types of Optical Disks :
(a) CD-ROM:
CD-ROM stands for Compact Disc – Read-Only Memory.
Information is written to the disc by using a controlled laser beam to burn pits on the disc surface.
It has a highly reflecting surface, which is usually aluminium.
The diameter of the disc is 12 cm (about 4.7 inches).
The track density is 16,000 tracks per inch.
The capacity of a CD-ROM is about 600 MB, with each sector storing 2048 bytes of data.
The data transfer rate is about 4800 KB/sec, and the access time is around 80 milliseconds.
(c) DVDs:
The term "DVD" stands for "Digital Versatile/Video Disc," and there are two sorts of DVDs: (i) DVD-R (writable) and (ii) DVD-RW (re-writable).
DVD-ROMs (Digital Versatile Discs): These are read-only memory (ROM) discs that can be used in a variety of ways. When compared to CD-ROMs, they can store a lot more data.
DVD-R: It is a writable optical disc that can be written to just once; it is a recordable DVD, a lot like WORM. DVD-ROMs have capacities ranging from 4.7 to 17 GB. The capacity of a 3.5-inch disc is 1.3 GB.
3. Cache Memory: It is a type of high-speed semiconductor memory that can help the CPU run
faster. Between the CPU and the main memory, it serves as a buffer. It is used to store the data and
programs that the CPU uses the most frequently.
IEC UNITS : Memory capacities are measured in binary units standardized by the IEC: 1 KiB (kibibyte) = 2^10 bytes, 1 MiB (mebibyte) = 2^20 bytes, 1 GiB (gibibyte) = 2^30 bytes, and 1 TiB (tebibyte) = 2^40 bytes.
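A small C program can make these powers of two concrete by computing the byte counts with shifts; the output simply restates the units above.

#include <stdio.h>

/* IEC binary units expressed as powers of two. */
int main(void) {
    unsigned long long kib = 1ULL << 10;   /* 1 KiB = 2^10 bytes */
    unsigned long long mib = 1ULL << 20;   /* 1 MiB = 2^20 bytes */
    unsigned long long gib = 1ULL << 30;   /* 1 GiB = 2^30 bytes */

    printf("1 KiB = %llu bytes\n", kib);
    printf("1 MiB = %llu bytes\n", mib);
    printf("1 GiB = %llu bytes\n", gib);
    return 0;
}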
COMPUTER HARDWARE :
Introduction :
Computer Hardware is the physical part of a computer, as distinguished from the computer
software that executes or runs on the hardware. The hardware of a computer is infrequently
changed, while software and data are modified frequently.
Motherboard
The motherboard is the body or mainframe of the computer, through which all other
components interface. It is the central circuit board making up a complex electronic system. A
motherboard provides the electrical connections by which the other components of the system
communicate. The motherboard includes many components such as the central processing unit (CPU), random access memory (RAM), firmware, and internal and external buses.
The Central Processing Unit (CPU; sometimes just called processor) is a machine that can
execute computer programs. It is sometimes referred to as the brain of the computer.
There are four steps that nearly all CPUs use in their operation: fetch, decode, execute, and write back. The first step, fetch, involves retrieving an instruction from program memory. In the decode step, the instruction is broken up into parts that have significance to other portions of the CPU. During the execute step, various portions of the CPU, such as the arithmetic logic unit (ALU), carry out the operation; in the final step, write back, the results are written to memory or a register.
Firmware :
Firmware is loaded from read-only memory (ROM) and run from the Basic Input-Output System (BIOS). It is a computer program that is embedded in a hardware device, for example a microcontroller. As its name suggests, firmware is somewhere between hardware and software. Like software, it is a computer program which is executed by a microprocessor or a microcontroller. But it is also tightly linked to a piece of hardware, and has little meaning outside of it. Most devices attached to modern systems are special-purpose computers in their own right, running their own software.
Power Supply :
The power supply as its name might suggest is the device that supplies power to all the
components in the computer. Its case holds a transformer, voltage control, and (usually) a cooling
fan. The power supply converts about 100-120 volts of AC power to low-voltage DC power for the
internal components to use. The most common computer power supplies are built to conform with
the ATX form factor. This enables different power supplies to be interchangeable with different
components inside the computer. ATX power supplies also are designed to turn on and off using a
signal from the motherboard, and provide support for modern functions such as standby mode.
CD :
CDs are the most common type of removable media. They are inexpensive but also have a short life-span. There are a few different kinds of CDs. CD-ROM, which stands for Compact Disc Read-Only Memory, is popularly used to distribute computer software, although any type of data can be stored on it. CD-R is another variation which can only be written to once but can be read many times. CD-RW (rewritable) can be written to more than once as well as read more than once. Some
other types of CDs which are not as popular include Super Audio CD (SACD), Video Compact Discs
(VCD), Super Video Compact Discs (SVCD), PhotoCD, PictureCD, CD-i, and Enhanced CD.
CD-ROM Drive :
There are two types of devices in a computer that use CDs: a CD-ROM drive and a CD writer. The CD-ROM drive is used for reading a CD. The CD writer drive can read and write a CD. CD writers are much more common on new computers than plain CD-ROM drives. Both kinds of CD drives are called optical disc drives because they use a laser to read or write data to or from a CD.
DVD :
DVDs (Digital Versatile Discs) are another popular optical
disc storage media format. The main uses for DVDs are video and
data storage. Most DVDs are of the same dimensions as compact
discs. Just like CDs there are many different variations. DVD-ROM
has data which can only be read and not written. DVD-R and
DVD+R can be written once and then function as a DVD-ROM. DVD-
RAM, DVD-RW, or DVD+RW hold data that can be erased and re-
written multiple times. DVD-Video and DVD-Audio discs
respectively refer to properly formatted and structured video and
audio content. The devices that use DVDs are very similar to the
devices that use CDs. There is a DVD-ROM drive as well as a DVD
writer that work the same way as a CD-ROM drive and CD writer.
There is also a DVD-RAM drive that reads and writes to the DVD-RAM variation of DVD.
Blu-ray
Blu-ray is a newer optical disc storage media format. Its main uses are high-definition video
and data storage. The disc has the same dimensions as a CD or DVD. The term “Blu-ray” comes from
the blue laser used to read and write to the disc. Blu-ray discs can store much more data than CDs or DVDs. A dual layer Blu-ray disc can store up to 50 GB, almost six times the capacity of a dual layer DVD. Blu-ray discs have similar devices used to read them and write to them as CDs
have. A BD-ROM drive can only read a Blu-ray disc and a BD writer can read and write a Blu-ray disc.
Floppy Disk
A floppy disk is a type of data storage that is composed of a disk of thin, flexible ("floppy") magnetic storage medium encased in a square or rectangular plastic shell. Floppy disks are read and written by a floppy disk drive. Floppy disks are dying out, being replaced by optical and flash drives. Many new computers do not come with floppy drives anymore, although there are a lot of older machines with floppy drives lying around. While floppy disks are very cheap, the small amount of storage on them compared with the storage-per-price of flash drives makes floppy disks unreasonable to use.
Internal Storage :
Internal storage is hardware that keeps data inside the computer for later use; the data remains persistent even when the computer has no power. There are a few different types of internal storage. Hard disks are the most popular type. Solid-state drives have slowly grown in popularity. A disk array controller is popular when you need more storage than a single hard disk can hold.
SMPS (Switched Mode Power Supply) – Working :
Input Stage :
The 50 Hz AC input supply signal is given directly to the rectifier and filter circuit combination, without using any transformer. This output will have many variations, and the capacitance value of the capacitor should be high enough to handle the input fluctuations. This unregulated DC is given to the central switching section of the SMPS.
Switching Section :
A fast switching device such as a power transistor or a MOSFET is employed in this section; it switches ON and OFF according to the variations, and this output is given to the primary of the transformer present in this section. The transformers used here are much smaller and lighter than the ones used for a 50/60 Hz supply, and the conversion is much more efficient, so the power conversion ratio is higher.
Output Stage :
The output signal from the switching section is again rectified and filtered, to get the required DC
voltage. This is a regulated output voltage which is then given to the control circuit, which is a feedback
circuit. The final output is obtained after considering the feedback signal.
Control Unit :
This unit is the feedback circuit, which has many sections. It monitors the output voltage and controls the switching section accordingly.
Functioning of SMPS :
The SMPS is mostly used where switching of voltages is not at all a problem and where the efficiency of the system really matters. There are a few points to be noted regarding SMPS:
The SMPS circuit is operated by switching, and hence the voltages vary continuously.
The switching device is operated in saturation or cut-off mode.
The output voltage is controlled by the switching time of the feedback circuitry.
Switching time is adjusted by adjusting the duty cycle.
The efficiency of SMPS is high because, instead of dissipating excess power as heat, it continuously switches its input to control the output.
Disadvantages :
There are a few disadvantages of SMPS, such as:
The noise is present due to high frequency switching.
The circuit is complex.
It produces electromagnetic interference.
Advantages :
The advantages of SMPS include:
The efficiency is as high as 80 to 90%.
Less heat generation; less power wastage.
Reduced harmonic feedback into the supply mains.
The device is compact and small in size.
The manufacturing cost is reduced.
Provision for providing the required number of output voltages.
Applications :
There are many applications of SMPS. They are used in the motherboard of computers,
mobile phone chargers, HVDC measurements, battery chargers, central power distribution, motor
vehicles, consumer electronics, laptops, security systems, space stations, etc.
Types of SMPS :
DC to DC converter
AC to DC converter
Flyback converter
Forward converter
The AC-to-DC conversion part in the input section is what distinguishes an AC-to-DC converter from a DC-to-DC converter. The flyback converter is used for low-power applications. There are also buck converters and boost converters among the SMPS types, which decrease or increase the output voltage depending on the requirement. Other types of SMPS include the self-oscillating flyback converter, buck-boost converter, Cuk, SEPIC, etc.
Types of Monitors :
Cathode Ray Tube (CRT) monitors: a technology used in early monitors.
Flat panel monitors: these are lightweight and take less space.
Touch screen monitors: these also serve as an input device.
LED monitors
OLED monitors
DLP monitors
TFT monitors
Plasma screen monitors
There are many different types of computer mice, each suited to different needs. Common types include:
Wired mouse
Wireless mouse
Bluetooth mouse
Trackball mouse
Optical mouse
Laser mouse
Magic Mouse
USB mouse
Vertical mouse
Gaming mouse
Wired Mouse :
A wired mouse connects directly to your desktop or laptop, usually through a USB port, and transmits information via the cord. The cord connection provides several key advantages.
Wireless Mouse :
Wireless mice transmit radio signals to a receiver connected to your computer. The computer accepts the signal and decodes how the cursor was moved or which buttons were clicked. While the freedom and range of wireless models are convenient, there are some drawbacks, such as the slight delay added by the decoding process.
Bluetooth Mouse :
Wireless mouse designs and Bluetooth mouse designs tend to look very similar, as neither needs a wired connection to operate. Most wireless mice use a dongle that connects to your PC, and the mouse communicates back and forth through it. A Bluetooth mouse, however, uses the internal Bluetooth connection on your PC, allowing you to connect the mouse to multiple devices.
Monitor :
A monitor is an electronic output device that is also known as
a video display terminal (VDT) or a video display unit (VDU). It is
used to display images, text, video, and graphics information
generated by a connected computer via a computer's video card.
Although it looks much like a TV, its resolution is typically much higher than that of a TV. The first computer monitor was introduced on 1 March 1973 as part of the Xerox Alto computer system.
Older monitors were built using a fluorescent screen and a Cathode Ray Tube (CRT), which made them heavy and large, causing them to take up more space on the desk. Nowadays, monitors are made using flat-panel display technology, commonly backlit with LEDs. These modern monitors take up less desk space than the older CRT displays.
History of Monitors
o In 1964, the Uniscope 300 machine included a built-in CRT display, which was not a true
computer monitor.
o A. Johnson invented the touch screen technology in 1965.
o On 1 March 1973, Xerox Alto computer was introduced, which had the first computer
monitor. This monitor included a monochrome display and used CRT technology.
o In 1975, George Samuel Hurst introduced the first resistive touch screen display, although
it was used only before 1982.
o In 1976, the Apple I and Sol-20 computer systems were introduced. These systems had a built-in video port that allowed them to drive a video screen on a computer monitor.
o In 1977, James P. Mitchell invented LED display technology. But even 30 years later, these
monitors were not easily available to buy in the market.
o In June 1977, the Apple II was released, allowing for color display on a CRT monitor.
o In 1987, IBM released the IBM 8513, the first VGA monitor.
o In 1989, VESA defined the SVGA standard for the display of computers.
Flat-panel monitor screens use two types of technologies, which are given below:
o Liquid Crystal Display: An LCD (liquid crystal display) screen contains a substance known as liquid crystal. The particles of this substance are aligned so that the backlight located behind the screen is either blocked or allowed through, which is how the image is generated.
4. LED Monitors
It is a flat-screen computer monitor; LED stands for light-emitting diode display. It is lightweight and has a shallow depth. As the source of light, it uses a panel of LEDs. Nowadays, a wide range of electronic devices, both large and small, such as laptop screens, mobile phones, TVs, computer monitors, and tablets, use LED displays.
It is believed that James P. Mitchell invented the first LED display. On 18 March 1978, the first prototype of an LED display was presented at the Science and Engineering Fair (SEF) in Iowa, and on 8 May 1978 it was shown again at the SEF in Anaheim, California. This prototype received awards from NASA and General Motors.
5. OLED Monitors
It is a newer flat, light-emitting display technology that is more efficient, brighter, and thinner than LCD, with better refresh rates and contrast. It is made by placing a series of organic thin films between two conductors. These displays do not need a backlight, as they are emissive displays. Furthermore, they provide better image quality and are used in tablets and high-end smartphones.
Nowadays, OLED is widely used in laptops, TVs, mobile phones, digital cameras, tablets, and VR headsets. Driven by demand from mobile phone vendors, more than 500 million AMOLED screens were produced in 2018. Samsung Display is the main producer of AMOLED screens. For example, Apple used an AMOLED panel made by SDC in its 2018 iPhone XS (a 5.8" 1125x2436 display), and the iPhone X uses the same type of AMOLED display.
6. DLP Monitors
DLP stands for Digital Light Processing, a technology developed by Texas Instruments that is used for presentations by projecting images from a monitor onto a big screen. Before DLP, most computer projection systems produced faded and blurry images because they were based on LCD technology. DLP technology utilizes a digital micromirror device, a tiny mirror housed on a special kind of microchip.
7. TFT Monitors
It is a type of LCD flat-panel display; TFT stands for thin-film transistor. In TFT monitors, each pixel is controlled with the help of one to four transistors, which makes the TFT an active-matrix display.
USB : It is a plug-and-play interface; USB stands for Universal Serial Bus. It allows the computer to communicate with peripherals and other devices. It can also send power to certain devices, such as tablets and smartphones, including charging their batteries. The first version of the Universal Serial Bus was released in January 1996; the technology was developed by Compaq, Intel, Microsoft, and other companies.
Nowadays, there are several USB devices that can be connected to a computer such as Digital
Camera, Keyboard, Microphone, Mouse, Printer, Scanner, and more. Furthermore, USB connectors
are available in different shapes and sizes. The maximum length of a USB cable is 16 feet 5 inches (about 5 m) for high-speed devices, and 9 feet 10 inches (about 3 m) for low-speed devices.
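To make the speed differences between USB generations concrete, here is a small worked example (the signaling rates are the commonly quoted nominal figures, not from this text; real-world throughput is lower):

    # Time to move a 1 GB file at nominal USB signaling rates (ideal case).
    rates_mbps = {"USB 1.1": 12, "USB 2.0": 480, "USB 3.0": 5000}
    file_bits = 8 * 10**9             # 1 gigabyte expressed in bits
    for name, mbps in rates_mbps.items():
        seconds = file_bits / (mbps * 10**6)
        print(f"{name}: about {seconds:.0f} s for 1 GB")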
LCD vs. LED :
o LCD monitors are not a subset of LED monitors; LED monitors are a subset of LCD monitors.
o In LCDs, fluorescent lights are usually located at the backside of the screen; in LED monitors, light-emitting diodes are located around the edges or backside of the screen.
o LCDs are less energy efficient than LEDs and are thicker; LEDs are more energy-efficient and much thinner than LCDs.
o Direct current can reduce the life span of LCDs; direct current does not have any effect on LEDs.
o The switching time of LCD is slow; the switching time of LED is fast.
SMPS (Switched Mode Power Supply) :
This device has power-handling electronic components that convert electrical power efficiently. A switched-mode power supply uses a power-conversion technique that reduces overall power loss.
SMPS working :
The SMPS device uses switching regulators that switch the load current on and off to regulate and stabilize the output voltage. The average of the voltage between the on and off states produces the appropriate power for a device. Unlike a linear power supply, the pass transistor of an SMPS switches between low-dissipation full-on and full-off modes and spends very little time in the high-dissipation transitions, which minimizes wasted energy.
• The switched-mode power supply is also called a switch-mode power supply or switching-mode power supply. Its efficiency is high, which is why it is used in a variety of electronic equipment that requires a stable and efficient power supply.
• We can classify switched-mode power supply by the type of the input and output voltages.
The four major categories are as follows:
• AC to DC
• DC to DC
• DC to AC
• AC to AC
Working of switched-mode power supply
• Input rectifier stage: this stage converts AC into DC; a circuit with a DC input does not require this stage. The rectifier produces unregulated DC, which is then passed through the filter.
• Inverter stage: this stage converts DC into AC by running it through a power oscillator. The DC supply can come either directly from the input or from the rectifier stage explained above. The power oscillator's output transformer is tiny, with few windings, and operates at a frequency of tens to hundreds of kHz.
• Output transformer: if the output must be isolated from the input, the inverted AC is used to drive the primary winding of a high-frequency transformer, which converts the voltage up or down to the required output level on its secondary winding.
• Regulation: the output voltage is monitored by the feedback circuit and compared with a reference voltage.
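A minimal sketch of this regulation loop (the gain, voltages, and step count are made-up illustrative values, not from any real controller):

    # Toy feedback loop: the error between the reference and the measured
    # output nudges the duty cycle, which in turn sets the output voltage.
    v_in, v_ref = 24.0, 12.0          # example input and reference voltages
    duty, gain = 0.3, 0.02            # illustrative starting duty and gain
    for _ in range(30):
        v_out = duty * v_in           # ideal buck-style output
        error = v_ref - v_out         # what the error amplifier computes
        duty = min(max(duty + gain * error, 0.0), 1.0)
    print(f"settled duty {duty:.3f}, output {duty * v_in:.2f} V")

After a few iterations the duty cycle settles at the value that makes the output match the reference, which is exactly what the feedback circuit does continuously in hardware.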
Classification of the switched-mode power supply
We can classify switched-mode power supplies by circuit topology, of which there are two broad kinds: isolated and non-isolated topologies.
• Isolated topologies: this type of topology includes a transformer, so it can produce an output of higher or lower voltage than the input by adjusting the turns ratio. For some topologies, multiple windings can be placed on the transformer to produce several output voltages. Some converters use the transformer itself for energy storage, while others use a separate inductor.
The various isolated topologies are as follows:
• Fly-back converter
• Forward converter
• Push-pull converter
• Half-bridge converter
• Full-bridge converter
• Non-isolated topologies: these use non-isolated converters, which are the simplest. There are three basic types of non-isolated converters, each using a single inductor for energy storage. Here the input voltage is assumed to be positive; if it is negative, the output voltage is negated.
The various non-isolated topologies are as follows:
• Buck topology
• Boost topology
• Buck-Boost topology
• Split-pi topology
• SEPIC topology
• Cuk topology
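The ideal steady-state input-output relations of the three basic converters follow directly from the duty cycle d; here is a short sketch (lossless, continuous-conduction assumptions, example voltages):

    # Ideal transfer functions of the basic non-isolated converters,
    # with input voltage v_in and duty cycle 0 < d < 1.
    def buck(v_in, d):        # steps the voltage down
        return d * v_in

    def boost(v_in, d):       # steps the voltage up
        return v_in / (1 - d)

    def buck_boost(v_in, d):  # steps up or down; output polarity inverted
        return -v_in * d / (1 - d)

    for d in (0.25, 0.5, 0.75):
        print(f"d={d}: buck {buck(12, d):.1f} V, "
              f"boost {boost(12, d):.1f} V, "
              f"buck-boost {buck_boost(12, d):.1f} V")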
Advantages of the switched-mode power supply
• Efficiency: little energy is dissipated as heat because of the switching action, so the efficiency is high, from about 68% to 90% (see the worked example after this list).
• Compact: a switched-mode power supply is small in size, so designs can be made more compact.
• Flexible technology: this technology can provide high-efficiency voltage conversion in voltage step-up or "boost" applications as well as step-down or "buck" applications.
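To make the efficiency figure concrete, here is a worked example with an invented 100 W load: at efficiency eta, the input power is P_out / eta and the difference appears as heat.

    # Heat dissipated by a supply delivering 100 W at several efficiencies.
    p_out = 100.0                     # watts delivered to the load (example)
    for eta in (0.68, 0.80, 0.90):    # efficiency range quoted above
        p_in = p_out / eta
        print(f"eta={eta:.0%}: input {p_in:.0f} W, heat {p_in - p_out:.0f} W")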
Disadvantages of the switched-mode power supply
• Noise: the biggest problem of the switched-mode power supply is the transient spikes that occur during the switching action. These spikes can cause electromagnetic interference, which can affect other electronic equipment nearby.
• External components: although a switch-mode regulator can be designed around a single integrated circuit, external components are typically required. In some designs the series switch element may be incorporated within the integrated circuit, but where any significant current is consumed, the series switch will be an external component. These external components require space and add to the cost.
• Expert design needed: expert design effort is required for the supply to meet the necessary specifications.
• Price: the cost of a switched-mode power supply must be considered carefully before designing the system; if additional filtering is required, it adds to the cost of the system.
Working stages of the switched-mode power supply
1: Input Stage
The AC input supply, with a frequency of 50-60 Hz, is fed directly to the rectifier and filter circuit. The rectified output contains many variations, so the capacitance of the filter capacitor must be high enough to handle the input fluctuations. The unregulated DC is then given to the central switching section of the SMPS to be regulated. This section does not contain any transformer to step down the input supply voltage.
2: Switching Section
It consists of fast switching devices such as a power transistor or a MOSFET, which switch ON and OFF according to the variations in the voltage. The output obtained is given to the primary of the transformer present in this section. The transformer used here is a much smaller, lighter, and highly effective one that steps the voltage down; it is much more efficient than other step-down methods, and hence the power conversion ratio is higher.
3: Output Stage
The output derived from the switching section is rectified and filtered again, using a rectifier and filter circuit, to get the desired DC voltage. The regulated output voltage so obtained is then given to the control circuit.
4: Control Unit
This unit is all about feedback and contains many sections. Let us look at it briefly.
The inner control unit consists of an oscillator, an amplifier, a sensor, etc. The sensor senses the output signal and feeds it back to the control unit. All the signals are isolated from each other so that sudden spikes do not affect the circuitry. A reference voltage is given as one input, along with the sensed signal, to the error amplifier, which is a comparator that compares the signal with the required signal level.
The next stage is controlling the chopping waveform. The final voltage level is controlled by comparing the inputs given to the error amplifier, whose output helps decide whether to increase or decrease the ON time (duty cycle) of the chopping waveform. The oscillator produces a standard PWM wave with a fixed frequency.
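Here is a minimal sketch of fixed-frequency PWM as described above: the oscillator's sawtooth is compared with the error-amplifier output, and the comparison yields the ON/OFF switching waveform (all numbers illustrative):

    # Fixed-frequency PWM: comparing a sawtooth carrier against a control
    # level yields a waveform whose duty cycle tracks the control level.
    samples_per_period = 100
    control_level = 0.35              # error-amplifier output, 0..1 (example)
    waveform = [1 if (i / samples_per_period) < control_level else 0
                for i in range(samples_per_period)]
    print("duty cycle =", sum(waveform) / samples_per_period)   # ~0.35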
The SMPS is mostly used where switching of voltages is not a problem but where the efficiency of the system really matters. The design and working of every SMPS are based on this same concept.
UPS systems can generally be classified as being one of these five types:
Standby UPS
Line-interactive UPS
Standby-ferro UPS
Double conversion online UPS
Delta conversion online UPS
Note that these types are based on a demand for an AC power backup for the load.
Standby UPS
A standby UPS is a configuration in which a battery backup is charged from the line voltage and is fed through an inverter to a transfer switch. When the prime power is lost, the transfer switch brings the standby power path online (represented in Figure 1 as the lower path with the dashed line). The inverter is generally not active until there is a power failure, hence the term "standby" for this type of UPS.
Line-interactive UPS
One of the most commonly used designs for an uninterruptible power supply is the line-interactive UPS, presented in Figure 2. With the line-interactive design, prime power is fed through a transfer switch to an inverter and then out to the load. The inverter in this design is always active: when prime power is on, it operates in reverse, converting incoming AC power to DC to keep the backup battery charged. If the line power goes out, the transfer switch opens and the inverter works in the normal direction, taking DC power from the battery and converting it to AC to supply the load.
Depending on the inverter design, this configuration can provide two independent power
paths for the load and eliminates the inverter as a single point of failure. So even if the inverter
were to fail, AC power can still flow to the output. This type of UPS offers low cost, high reliability,
and high efficiency, and can support low or high voltage applications.
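To see the two operating directions of the inverter, here is a toy model (the function, numbers, and state names are invented purely for illustration and are not any UPS vendor's logic):

    # Toy line-interactive UPS: with mains present the inverter runs in
    # reverse as a charger; when mains fails it powers the load instead.
    def ups_step(mains_ok, battery_charge):
        if mains_ok:
            battery_charge = min(battery_charge + 1, 100)   # charging
            return "mains", battery_charge
        battery_charge = max(battery_charge - 5, 0)         # discharging
        return "battery", battery_charge

    charge = 90
    for mains_ok in (True, True, False, False, True):
        source, charge = ups_step(mains_ok, charge)
        print(f"load fed from {source}, battery at {charge}%")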
Standby-ferro UPS
The standby-ferro UPS uses a three winding transformer to couple the load to the power
source, as shown in Figure 3. Prime power flows through a normally closed transfer switch to the primary coil of the transformer, where it couples to the secondary coil and then supplies power to the output load. The backup power path takes line voltage to a battery charger that maintains the backup battery, which connects to an inverter feeding the third coil of the transformer.
When the prime power fails, the transfer switch opens and the inverter supplies power to
the load from the backup battery. In this design configuration, the inverter is in standby and
becomes active when the prime power fails, and the transfer switch is opened.
The transformer, while providing isolation of the load from line-voltage transients, can create output voltage distortion and transients of its own, possibly worse than those from a poor AC connection. Additionally, the inefficiencies of the ferro transformer can generate a significant amount of heat; the transformers are also quite large and heavy, making standby-ferro UPS systems bulky as a result.
Double conversion online UPS
In the double conversion online UPS, a static bypass switch is available but is not activated in the event of a failure of the AC prime power: the battery seamlessly feeds the inverter should the AC mains fail, so this design has no transfer time in the event of a power loss. Because the inverter and rectifier are continuously active in this design, the electrical components are less reliable than in other designs. But from the perspective of the electrical power, this type of UPS delivers ideal output performance.
UPS stands for Uninterruptible Power Supply (also expanded as Uninterrupted Power Source). As the name implies, it is used to provide a continuous power supply to the load using an automatic switching method, to protect devices and equipment from damage and to prevent a plant from going into shutdown mode. Many devices require a safe shutdown for proper operation; otherwise, a sudden power loss can damage the equipment.
Every UPS has a semiconductor static switch, which switches the load between the main AC supply and the batteries. Failure of this switch can render the UPS worthless, because the UPS can then no longer perform its core function.
3. Keep detailed records. In addition to scheduling maintenance, you should also keep
records of the kinds of maintenance performed (for instance, cleaning, repair or replacement of
certain components) and the condition of the equipment during inspection. Keeping track of costs
can also be beneficial when you need to show the C-suite that a few dollars in maintenance costs
beats thousands or millions in downtime costs every time. A checklist of tasks, such as inspecting
batteries for corrosion, looking for excessive torque on connecting leads and so on, helps maintain
an orderly approach.
4. Perform regular inspections. Much of the above can apply to almost any part of the data
center: enforcing safety, scheduling maintenance and keeping good records are all excellent
practices regardless of the data center context. A few important UPS maintenance tasks include
the following:
o Visually inspect the area around the UPS and battery (or other energy-storage) equipment for obstructions and proper cooling.
o Ensure no operating abnormalities or warnings have registered on the UPS panel, such
as an overload or a battery near discharge.
o Look over batteries for signs of corrosion or other defects.
5. Recognize that UPS components will fail. This may seem obvious: anything with a finite
probability of failure will fail eventually. Eaton notes that “critical [UPS] components such as
batteries and capacitors wear out from normal use,” so even if your utility provides perfect power,
your UPS room is perfectly clean and consistently at the proper temperature, and everything is
running ideally, components will still fail. Your (yes, your) UPS system requires maintenance.
6. Know whom to call when you need service or unscheduled maintenance. During daily
or weekly inspections, problems can arise that may not be able to wait until the next scheduled
maintenance. In these cases, knowing whom to call can save a lot of stress. That means you must
identify solid service providers that will be available when you need them (i.e., at odd hours).
7. Assign tasks. “Weren’t you supposed to check that last week?” “No, I thought you were.”
Avoid this mess: ensure that the appropriate personnel know their responsibilities when it comes
to UPS maintenance. Who checks the equipment weekly? Who calls to schedule (or perhaps adjust
the schedule for) annual maintenance with the service provider?
TROUBLESHOOTING UPS :
Problem: The UPS will not turn on or there is no output.
o Possible cause: The unit has not been turned on. Solution: Press the ON button once to turn on the UPS. Note that the LCD screen may be lit even though the UPS is OFF.
o Possible cause: The UPS is not connected to utility power. Solution: Be sure that the power cable is securely connected to the unit and to the utility power supply.
o Possible cause: The input circuit breaker has tripped. Solution: Reduce the load on the UPS, disconnect nonessential equipment, and reset the circuit breaker.
o Possible cause: The unit shows very low or no input utility voltage. Solution: Check the utility power supply to the UPS by plugging in a table lamp. If the light is very dim, check the utility voltage.
o Possible cause: The battery connector plug is not securely connected. Solution: Be sure that all battery connections are secure.
o Possible cause: There is an internal UPS fault. Solution: Do not attempt to use the UPS. Unplug the UPS and have it serviced immediately.
Problem: The UPS is operating on battery while connected to utility power.
o Possible cause: The input circuit breaker has tripped. Solution: Reduce the load on the UPS, disconnect nonessential equipment, and reset the circuit breaker.
o Possible cause: There is very high, very low, or distorted input line voltage. Solution: Move the UPS to an outlet on a different circuit. Test the input voltage with the utility voltage display. If acceptable to the connected equipment, reduce the UPS sensitivity.
Problem: The UPS is beeping.
o Possible cause: The UPS is in normal operation. Solution: The UPS display will indicate the current operating mode that is causing the beeping (on battery, replace battery, etc.).
Problem: The UPS does not provide the expected backup time.
o Possible cause: The UPS battery is weak due to a recent outage or is near the end of its service life. Solution: Charge the battery. Batteries require recharging after extended outages and wear out faster when put into service often or when operated at elevated temperatures.
Problem: All LEDs are illuminated and the UPS is plugged into a wall outlet.
o Possible cause: The UPS has shut down and the battery has discharged from an extended outage. Solution: None. The UPS will return to normal operation when power is restored and the battery has a sufficient charge.
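Related to the "expected backup time" entry above, a rough runtime estimate can be made from the battery energy and the load; this is a common back-of-the-envelope formula, and all the numbers below are invented examples:

    # Rough UPS runtime: stored energy (Wh) divided by load power (W),
    # derated by an assumed inverter efficiency. Battery age and
    # temperature shorten real runtimes, as the table above notes.
    battery_wh = 12 * 9               # e.g. one 12 V, 9 Ah battery
    load_w = 150.0                    # power drawn by connected equipment
    inverter_eff = 0.85               # assumed inverter efficiency
    runtime_min = battery_wh * inverter_eff / load_w * 60
    print(f"estimated runtime: about {runtime_min:.0f} minutes")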