
MODULE - 1

FUNDAMENTALS OF COMPUTERS
EVOLUTION OF COMPUTERS
Introduction to Computer System:
A computer is an electronic device that can be programmed to accept data (input), process it and generate result
(output). A computer along with additional hardware and software together is called a computer system. A
computer system primarily comprises a central processing unit (CPU), memory, input/output devices and storage
devices. All these components function together as a single unit to deliver the desired output. A computer system
comes in various forms and sizes. It can vary from a high-end server to personal desktop, laptop, tablet computer,
or a smartphone. Figure 1.1 shows the block diagram of a computer system. The directed lines represent the flow
of data and signal between the components.
The Von Neumann architecture is shown in Figure 1.5. It consists of a Central Processing Unit (CPU) for processing
arithmetic and logical instructions, a memory to store data and programs, input and output devices and communication
channels to send or receive the output data. The Electronic Numerical Integrator and Computer (ENIAC) was among the first general-purpose programmable electronic computers, and the stored-program design that followed it became known as the von Neumann architecture. During the 1970s, Large Scale Integration (LSI) of electronic circuits allowed a complete CPU to be integrated on a single chip, called a microprocessor. Moore's Law predicted exponential growth in the number of transistors that could be assembled on a single microchip. In the 1980s, the processing power of computers increased exponentially by integrating around 3 million components on a small-sized chip, termed Very Large Scale Integration (VLSI). Further advancement in technology made it feasible to fabricate an even higher density of transistors and other components (approx. 10^6 components) on a single IC, called Super Large Scale Integration (SLSI), as shown in Figure 1.6. IBM introduced its first personal computer (PC) for the home user in 1981 and Apple introduced the Macintosh in 1984. The popularity of the PC surged with the introduction of Graphical User Interface (GUI) based operating systems by Microsoft and others, in place of computers with only a command-line interface, like UNIX or DOS. In the 1990s, the growth of the World Wide Web (WWW) further accelerated mass usage of computers, and thereafter computers have
become an
indispensable part of everyday life.
Data Size:
The difference between a bit and a byte: a bit is a single binary digit, 1 or 0, while a byte is a collection of 8 bits. Both are tiny units of data. Everything a computer stores (numbers, letters and symbols) is ultimately expressed in bytes. The larger units, many of which you have heard of (kilobyte, megabyte, gigabyte and so on), are ordered by size and simply describe how many bits or bytes are in each set of data.
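To make the units concrete, here is a minimal Python sketch (the sample string and the unit table are our own illustration, not part of the module):

# Each ASCII character occupies one byte, i.e. 8 bits.
text = "CCPS"                               # example string (assumed for illustration)
data = text.encode("ascii")
print(len(data), "bytes =", len(data) * 8, "bits")   # 4 bytes = 32 bits

# The common size units scale by powers of two.
units = {"KB": 2 ** 10, "MB": 2 ** 20, "GB": 2 ** 30, "TB": 2 ** 40}
for name, size in units.items():
    print("1", name, "=", size, "bytes")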
A microprocessor is a computer processor wherein the data processing logic and control is included on a single integrated
circuit, or a small number of integrated circuits. The microprocessor contains the arithmetic, logic, and control circuitry
required to perform the functions of a computer's central processing unit.
•Control Unit – A control unit (CU) handles all processor control signals. It directs all input and output flow, fetches the code for instructions and controls how data moves around the system.
•Arithmetic and Logic Unit (ALU) – The arithmetic logic unit is the part of the CPU that handles all the calculations the CPU may need, e.g. addition, subtraction and comparisons. It performs logical operations, bit-shifting operations and arithmetic operations (a simple sketch of how the ALU and registers cooperate appears after this list).
•Main Memory Unit (Registers) –
• Accumulator: Stores the results of calculations made by ALU. An accumulator is a type of register included in a 
CPU. It acts as a temporary storage location which holds an intermediate value in mathematical and logical
calculations.
• Program Counter (PC): Keeps track of the memory location of the next instruction to be dealt with. The PC then passes this address to the Memory Address Register (MAR).
• Memory Address Register (MAR): It stores the memory locations of instructions that need to be fetched from
memory or stored into memory.
• Memory Data Register (MDR): It stores instructions fetched from memory or any data that is to be transferred to,
and stored in, memory.
• Current Instruction Register (CIR): It stores the most recently fetched instruction while it is waiting to be decoded and executed.
• Instruction Buffer Register (IBR): The instruction that is not to be executed immediately is placed in the instruction
buffer register IBR.
•Input/Output Devices – Programs or data are read into main memory from an input device or secondary storage under the control of a CPU input instruction. Output devices are used to output information from the computer: results computed and stored in the computer are presented to the user with the help of output devices.
•Buses – Data is transmitted from one part of a computer to another, connecting all major internal components to the CPU and memory, by means of buses. Types:
• Data Bus: It carries data among the memory unit, the I/O devices, and the processor.
• Address Bus: It carries the address of data (not the actual data) between memory and processor.
• Control Bus: It carries control commands from the CPU (and status signals from other devices) in order to control
and coordinate all the activities within the computer.
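The following toy Python model (our own sketch; the tiny instruction set and the memory contents are invented purely for illustration) shows how the PC, MAR, MDR, CIR and accumulator listed above cooperate with the ALU during a fetch-decode-execute cycle over a shared memory:

# A toy fetch-decode-execute cycle using the registers described above.
# Memory holds (opcode, operand) pairs; the instruction set is invented for this sketch.
memory = [("LOAD", 7), ("ADD", 5), ("STORE", 10), ("HALT", 0)]
memory += [0] * 7                       # spare cells used as data storage

PC, ACC = 0, 0                          # program counter and accumulator
running = True
while running:
    MAR = PC                            # MAR holds the address to fetch from
    MDR = memory[MAR]                   # MDR receives the word read from memory
    CIR = MDR                           # CIR holds the instruction being decoded
    PC += 1                             # PC now points to the next instruction
    opcode, operand = CIR
    if opcode == "LOAD":                # the ALU/accumulator carry out the work
        ACC = operand
    elif opcode == "ADD":
        ACC += operand
    elif opcode == "STORE":
        memory[operand] = ACC
    elif opcode == "HALT":
        running = False

print("ACC =", ACC, "memory[10] =", memory[10])   # ACC = 12, memory[10] = 12

In a real CPU the address and data would travel over the address bus and data bus, with the control bus carrying the read and write signals.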
Further, with the introduction of laptops, personal computing was made portable to a great extent. This was followed by
smartphones, tablets and other personal digital assistants. These devices have leveraged the technological advancements in
processor miniaturization, faster memory, high speed data and connectivity mechanisms. The next wave of computing
devices includes the wearable gadgets, such as smart watch, lenses, headbands, headphones, etc. Further, smart appliances
are becoming a part of the Internet of Things (IoT), by leveraging the power of Artificial Intelligence (AI).

• Computers in the form of personal desktop computers, laptops and tablets have become such an important part of
everyday living that it can be difficult to remember a time when they did not exist. In reality, computers as they are
known and used today are still relatively new. Although computers have technically been in use since the abacus
approximately 5000 years ago, it is modern computers that have had the greatest and most profound effect on society. The
first full-sized digital computer in history was developed in 1944. Called the Mark I, this computer was used only for
calculations and weighed five tons. Despite its size and limited ability, it was the first of many that would start off
generations of computer development and growth.
First Generation Computers
First generation computers bore little resemblance to computers of today, either in appearance or performance. The first
generation of computers spanned 1940 to 1956, and these machines were extremely large in size. The inner workings of the
computers at that time were unsophisticated (simple). These early machines required magnetic drums for memory and
vacuum tubes that worked as switches and amplifiers. It was the vacuum tubes that were mainly responsible for the large
size of the machines and the massive amounts of heat that they released. These computers produced so much heat that they
regularly overheated despite large cooling units. First generation computers also used a very basic programming language
that is referred to as machine language.
Second Generation Computers
The second generation (from 1956 to 1963) of computers did away with vacuum tubes in favour of transistors. This
allowed them to use less electricity and generate less heat. Second generation computers were also significantly faster than
their predecessors. Another significant change was in the size of the computers, which were smaller. Transistor computers
also developed core memory which they used alongside magnetic storage.
(A vacuum tube is an electronic device used in many older-model radios, television sets and amplifiers to control the flow of electric current.)
Third Generation Computers
From 1964 to 1971 computers went through a significant change in terms of speed, courtesy of integrated circuits.
Integrated circuits, or semiconductor chips, were large numbers of miniature transistors packed on silicon chips. This not
only increased the speed of computers but also made them smaller, more powerful, and less expensive. In addition, instead
of the punch cards and the printouts of previous systems, keyboards and monitors were now allowing people to interact
with computing machines.
Fourth Generation Computers
The changes with the greatest impact occurred in the years from 1971 to 2010. During this time technology developed to a
point where manufacturers could place millions of transistors on a single circuit chip. This was called monolithic (single-chip)
integrated circuit technology. It also heralded the invention of the Intel 4004 chip which was the first microprocessor to
become commercially available in 1971. This invention led to the dawn of the personal computer industry.
(A transistor is a type of device that can act either as a conductor or as an insulator of electric current or voltage, which lets it work as a switch or an amplifier.)
By the late 70s and early 80s assembled personal computers for home use, such as the Commodore Pet, Apple II and the
first IBM computer, were making their way onto the market. Personal computers and their ability to create networks
eventually would lead to the Internet in the early 1990s. The fourth generation of computers also saw the creation of even
smaller computers including laptops and hand-held devices. Graphical user interface, or GUI, was also invented during
this time. Computer memory and storage also went through major improvements, with an increase in storage capacity and
speed.
GUI - A graphics-based operating system interface that uses icons, menus and a mouse (to click on the icon or pull down
the menus) to manage interaction with the system.
The Fifth Generation of Computers
Computer users can expect even faster and more advanced computer technology. Computers continue to develop into
advanced forms of technology. Fifth generation computing has yet to be truly defined, as there are numerous paths that
technology is taking toward the future of computer development. For instance, research is ongoing in the fields of
nanotechnology, artificial intelligence etc.
INPUTS/OUTPUTS

Central Processing Unit (CPU): It is the electronic circuitry of a computer that carries out the actual processing and is usually referred to as the brain of the computer. It is also commonly called the processor. Physically, a CPU can be placed on one or more microchips called integrated circuits (ICs). The ICs comprise semiconductor materials. The CPU is given instructions and data
through programs. The CPU then fetches the program and data from the memory and performs arithmetic and logic operations
as per the given instructions and stores the result back to memory. While processing, the CPU stores the data as well as
instructions in its local memory called registers. Registers are part of the CPU chip, and they are limited in size and number.
Different registers are used for storing data, instructions or intermediate results. Other than the registers, the CPU has two
main components — Arithmetic Logic Unit (ALU) and Control Unit (CU). ALU performs all the arithmetic and logic
operations that need to be done as per the instruction in a program.
CU controls sequential instruction execution, interprets instructions and guides data flow through the computer’s memory,
ALU and input or output devices. CPU is also popularly known as microprocessor. We will study more about it in section 1.5.
Input Devices
The devices through which control signals are sent to a computer are termed as input devices. These devices convert the input
data into a digital form that is acceptable by the computer system. Some examples of input devices include keyboard, mouse,
scanner, touch screen, etc., as shown in Figure 1.2. Specially designed braille keyboards are also available to help the visually
impaired for entering data into a computer. Besides, we can now enter data through voice; for example, we can use Google voice search to search the web by speaking the search string. Data entered through an input device is
temporarily stored in the main memory (also called RAM) of the computer system. For permanent storage and future use, the
data as well as instructions are stored permanently in additional storage locations called secondary memory.
Output Devices
The device that receives data from a computer system for display, physical production, etc., is called output device. It
converts digital information into human understandable form. For example, monitor, projector, headphone, speaker, printer,
etc. Some output devices are shown in Figure 1.3. A braille display monitor is useful for a visually challenged person to
understand the textual output generated by computers. A printer is the most commonly used device to get output in physical
(hardcopy) form. Three types of commonly used printers are inkjet, laser and dot matrix. Nowadays, there is a new type of printer called the 3D printer, which is used to build a physical replica of a digital 3D design. These printers are being used in
manufacturing industries to create prototypes of products. Their usage is also being explored in the medical field,
particularly for developing body organs.
Speakers
A hardware device connected to a computer’s sound card that outputs sounds generated by the card.

Printer
Printers are used for producing output on paper. There are a large variety of printing devices which can be classified
according to the print quality and the printing speed.
There are many types of printers, which are classified based on various criteria as illustrated in
Figure
Type of Printers:
 Impact printers: In these printers, printing takes place by striking the paper through a ribbon soaked in ink. The impression on the paper is created by pressing a rigid object against the ink-soaked ribbon, so impact printers involve mechanical moving components.
 Non-impact printers: In these printers, no object physically touches the paper and no mechanical striking component is used.
On the basis of how much is printed in a single operation, printers are classified as character, line and page printers:
Character printers: These printers print one character or letter at a time, much like a conventional typewriter.
Line printers: These printers print one line of characters in one go. Line printers are obviously much faster than character printers, but they are also more expensive.

 Page printers: As the name suggests, page printers print a whole page at a time. These printers are the fastest but also the costliest.
 Dot Matrix Printer (High Speed Printer): Dot matrix printers are impact printers. The printer head has an assembly
of a fixed number of pins and tiny hammers. These hammers strike at the pins, which in turn make impression of dots on
the paper loaded in the printer through an ink-soaked ribbon.
 Inkjet printer: This type of printer belongs to non-impact class. Characters are formed as a result of electrically charged
or heated ink being sprayed in fine jets onto the paper. The printer head consists of nozzles. Individual nozzles produce
high resolution (up to 400 dots per inch or 400 dpi) dot matrix characters. Magnetized plates in the ink’s path direct the
ink onto the paper in the desired shape. Inkjet printers are capable of producing high quality print approaching that
produced by laser printers.

 Laser printer: This printer also belongs to non-impact type. A drum is initially electrically charged and then a high
intensity laser beam is used to discharge selected areas on the drum where nothing is to be printed. These discharged areas
correspond to the white area of the printed document. Dry ink or toner is attracted to parts of the drum with a high charge.
The drum rotates and transfers the toner to the paper which has an even greater electrical charge. Finally a heater fixes the
toner onto the paper. After printing this page the drum is cleaned by de-electrifying it. The process continues for another
page. These printers are fast (6-8 pages per minute) but are costliest of all the printers.
Color Printers: Color printers have become the standard in digital color printing, balancing a low total cost of ownership with high levels of performance and productivity. Compared with earlier color laser printers, newer digital LED and single-pass color designs (such as OKI's C5000, C7000 and C9000 Series) deliver high-speed, high-resolution, high-quality print output and flexible media handling. Color printers enable users to create high-impact color reports, presentations, mailers, banners and many other types of printed business communications without the expense of using outside commercial printers.
ALTERNATIVE METHODS OF INPUT
Keyboard:
The keyboard is one of the most common input devices of computers. The layout of the keyboard is like that of the traditional
QWERTY typewriter. Most keyboards have three sections of keys:
the standard typing keys, cursor movement keys and the numeric keypad. To help users enter numbers quickly, the keyboard provides a numeric keypad that functions much like a 10-key calculator; the Num Lock key selects the numeric keypad's mode of operation. Most PCs come with a standard keyboard, which is flat in appearance and whose typing keys match those of a typewriter. The keyboard also includes special-purpose function keys, arrow and cursor control keys. The layout of a simple keyboard can be seen in the Figure.
Mouse
While working with a display, we mostly point to some area on the screen to select an option and then move across the screen to select subsequent options. A mouse is a handy device that can be moved on a smooth surface to simulate the desired movement of the cursor on the display screen. An optical mouse offers quiet and reliable operation, while a mechanical mouse is much cheaper.
Within Windows, users make extensive use of mouse operations, often referred to as "point and click" operations. To point the mouse, simply aim the mouse pointer that appears on the screen at the object of interest. To move the mouse pointer across the screen, simply move the mouse across the desk. To click, press and release the mouse-select button (normally the left mouse button). Many operations within Windows require a double-click, performed by pressing and releasing the mouse-select button twice in quick succession. For notebook PCs, a touch pad, trackball or other pointing device is available, assisted by additional keys for clicking. To reduce strain on the wrist, a pad is available that elevates the wrist while using the keyboard or mouse.
Trackball
A trackball looks like a mouse, as the roller is on the top with selection buttons on the side. It is also a pointing device
used to move the cursor and works like a mouse. For moving the cursor in a particular direction, the user spins the ball in
that direction. It is sometimes considered better than mouse, because it requires little arm movement and less desktop
space. It is generally used with portable computers.
Light Pen
Light pen is a pointing device, which is used to select a displayed menu item or draw pictures on the monitor screen. It’s a
pen-shaped device allowing natural movement on the screen. The pen contains a light receptor and is activated by pressing the pen against the display screen; the receptor detects the screen's scanning beam, which helps in locating the pen's position.
Suitable system software is provided to initiate necessary action when we locate an area on the display surface with the help
of the light pen. It consists of a photocell and an optical system placed in a small tube. When its tip is moved over the
monitor screen and the pen button is pressed, its photocell sensing element detects the screen location and sends the
corresponding signal to the CPU.
Touch Screen
Touch screen is sensitive to human fingers. Using this device, the user can point to a selection on the screen instead of
pressing keys.

Game Controller (joystick)


A game controller is a device used with games or entertainment systems to control a playable character or object, or otherwise provide input in a computer game. A controller is typically connected to a game console or computer by a wire or cord or, nowadays, by a wireless connection. Controllers that are classified as game controllers include
keyboards, mice, game pads, joysticks, etc.
Optical Bar Code Reader (OBR)
It can scan a set of vertical bars of different widths for specific data and is used to read tags on merchandise in stores,
medical records, library books, etc.

Image Scanner
Scanner is mainly used in Desktop Publishing applications. Scanner is used for digitizing images such as photographs, forms,
documents, etc., in computer memory. Some scanners can also read text by converting them to digital code. These scanners
are very useful for converting the typed pages into word-processing files.
Optical Character Reader (OCR)
It is also an optical scanner, which is capable of detecting alphanumeric characters typed or printed on paper using an OCR
font. OCR devices are used for large volume applications like reading of passenger tickets; computer printed bills of credit card
companies and reading of ZIP codes in postal services. The text, which is to be scanned, is illuminated by a low-frequency
light source. The dark areas on the text absorb the light while light areas reflect it. The photocells of OCR device receive this
reflected light and provide binary data corresponding to dark and light areas.
Microphone
Sometimes abbreviated as mic, a microphone is a hardware peripheral that allows computer users to input audio into their
computers.

Voice-input Devices
These devices can recognize the human voice. Voice recognition techniques, along with techniques to convert the voice signals into appropriate words and derive the correct meaning of those words, make up a comprehensive speech recognition system. Today, devices are available that recognize and interpret human voices within a limited scope of operation. They seem very useful but are not yet widely popular because of their limited vocabularies and the variations in the way different people pronounce words.
Digital Camera
A type of camera that stores the pictures or video it takes in electronic format instead of to film. There are several features
that make digital cameras a popular choice when compared to film cameras. First, the feature often enjoyed the most is the
LCD display on the digital camera. This display allows users to view photos or video after the picture or video has been
taken, which means if you take a picture and don’t like the results, you can delete it; or if you do like the picture, you can
easily show it to other people. Another nice feature with digital cameras is the ability to take dozens, sometimes hundreds of
different pictures.
ORGANIZATION OF MODERN DIGITAL COMPUTERS
Digital Computer
The digital computer is a digital system that performs various computational tasks. The word digital implies that the
information in the computer is represented by variables that take a limited number of discrete values. These values are
processed internally by components that can maintain a limited number of discrete states. The decimal digits 0, 1, 2…9, for
example, provide 10 discrete values. The first electronic digital computers, developed in the late 1940s, were used
primarily for numerical computations. In this case the discrete elements are the digits. From this application the term digital computer has emerged. In practice, digital computers function more reliably if only two states are used. Because of the physical restriction of components, and because human logic tends to be binary (i.e., true-or-false, yes-or-no statements), digital components that are constrained to take discrete values are further constrained to take only two values and are said to be binary.
Digital computers use the binary number system, which has two digits: 0 and 1. A binary digit is called a bit. Information is
represented in digital computers in groups of bits.
By using various coding techniques, groups of bits can be made to represent not only binary numbers but also other discrete
symbols, such as decimal digits or letters of the alphabet. By judicious use of binary arrangements and by using various coding
techniques, the groups of bits are used to develop complete sets of instructions for performing various types of computations.
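For example (a small Python illustration; the particular bit patterns and characters are chosen only for the demo), the same group of 8 bits can be read as a binary number or, under the ASCII coding scheme, as a letter:

# One byte (8 bits) interpreted two ways: as a number and as an ASCII character.
bits = "01000001"
value = int(bits, 2)          # read the bit group as a binary number -> 65
letter = chr(value)           # the same bit pattern under ASCII coding -> 'A'
print(bits, "=", value, "=", letter)

# Going the other way: encode a decimal digit and a letter as 8-bit groups.
for symbol in ("9", "Z"):
    print(symbol, "->", format(ord(symbol), "08b"))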
Computer Types
The different types of computers are
1. Personal computers: This is the most common type, found in homes, schools, business offices, etc. It is a desktop computer with processing and storage units along with various input and output devices.
2. Notebook computers: These are compact and portable versions of the PC.
3. Workstations: These have high-resolution input/output (I/O) graphics capability, but the same dimensions as a desktop computer. They are used for engineering applications and interactive design work.
4. Enterprise systems: These are used for business data processing in medium to large corporations that require much more computing power and storage capacity than workstations. The Internet, together with the servers associated with it, has become a dominant worldwide source of all types of information.
5. Supercomputers: These are used for the large-scale numerical calculations required in applications such as weather forecasting.
Digital Computers: Computer Architecture
Computer architecture is concerned with the structure and behaviour of the computer as seen by the user. It includes the information formats, the instruction set, and techniques for addressing memory. The architectural design of a computer system
is concerned with the specifications of the various functional modules, such as processors and memories, and structuring them
together into a computer system.
Two basic types of computer architecture are:
• von Neumann architecture
• Harvard architecture
1. von Neumann architecture
The von Neumann architecture describes a general framework, or structure, that a computer's hardware, programming, and
data should follow. Although other structures for computing have been devised and implemented, the vast majority of
computers in use today operate according to the von Neumann architecture.
von Neumann envisioned the structure of a computer system as being composed of the following components:
Basic Computer Model and Functional Units of a Computer
The model of a computer can be described by four basic units in high level abstraction. These basic units are:
• Central Processor Unit
• Input Unit
• Output Unit
• Memory Unit
Central processor unit consists of two basic blocks:
 The program control unit has a set of registers and control circuit to generate control signals.
 The execution unit or data processing unit contains a set of registers for storing data and an Arithmetic and Logic Unit
(ALU) for the execution of arithmetic and logical operations. Most computer operations, such as addition, subtraction, division and multiplication, are executed in the ALU of the processor. The operands are brought into the ALU from memory and stored in high-speed storage elements called registers; then, according to the instructions, the operations are performed in the required sequence. The control unit and the ALU are many times faster than other devices connected to a computer system. This enables a single processor to control a number of external devices such as keyboards, displays, magnetic and optical disks, sensors and other mechanical controllers.
Control unit: It effectively is the nerve center that sends signals to other units and senses their states. The actual timing
signals that govern the transfer of data between input unit, processor, memory and output unit are generated by the control
unit.
Input Unit: With the help of the input unit, data from outside can be supplied to the computer. Programs or data are read into main storage from an input device or secondary storage under the control of a CPU input instruction. Examples of input devices: keyboard, mouse, etc.
Output Unit: With the help of the output unit, computer results can be provided to the user or stored in a storage device permanently for future use. Output data from main storage go to the output device under the control of CPU output instructions. Examples of output devices: printer, monitor, plotter, hard disk, floppy disk, etc.
Memory Unit:
Memory unit is used to store the data and program. CPU can work with the information stored in memory unit. This memory
unit is termed as primary memory or main memory module. These are basically semiconductor memories. There are two
types of semiconductor memories:
● Volatile Memory: RAM (Random Access Memory)
● Non-Volatile Memory: ROM (Read Only Memory), PROM (Programmable ROM), EPROM (Erasable PROM), EEPROM (Electrically Erasable PROM)
Secondary Memory:
There is another kind of storage device, apart from primary or main memory, which is known as secondary memory.
Secondary memories are non-volatile and are used for permanent storage of data and programs.
Example of secondary memories:
Hard Disk, Floppy Disk, Magnetic Tape ------ These are magnetic devices,
CD-ROM ------ is optical device
Thumb drive (or pen drive) ------ is semiconductor memory.
Harvard architecture
The Harvard architecture uses physically separate storage and signal pathways for instructions and data. The term originated from the Harvard Mark I, which stored instructions and data separately (its data was held in relay latches, 23 digits wide).
In a computer with Harvard architecture, the CPU can read an instruction and data from memory at the same time, which can double the effective memory bandwidth.
Microcontroller(single-chip microcomputer)-based computer systems and DSP(Digital Signal Processor)-based computer
systems are examples of Harvard architecture.
OPERATING SYSTEM
The operating system tells the computer how to interact with the user and how to use the hardware devices attached to the
computer. In other words, an operating system tells the computer how to use its own components. Examples of operating systems include Windows, the Macintosh Operating System, and Linux. An operating system is essential for any computer because it acts as an interpreter between the hardware, application programs, and the user. When a program wants the hardware to do something, it
communicates through the operating system. Similarly, when you want the hardware to do something (such as copying or
printing a file), your request is handled by the operating system.
• A network operating system allows computers to communicate and share data across a network while controlling network
operations and overseeing the network’s security.
• A utility is a program that makes the computer system easier to use or performs highly specialized functions. Utilities are
used to manage disks, troubleshoot hardware problems, and perform other tasks that the operating system itself may not
be able to do. Modern operating systems support the running of many programs, or multitasking. The CPU may be asked to
perform tasks for more than one program.
To make this work, the OS and the CPU create threads.
A thread is a sequence of instructions from a program that can be scheduled independently. The CPU executes one thread from a program at a time, but because it switches between threads quickly, the user perceives each program as running at the same time. Newer processors support hyperthreading, which allows multiple threads to be executed at one time.
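As a rough illustration of threads (a minimal Python sketch, not a description of any particular OS scheduler; the task names and step counts are invented), the snippet below starts two threads and lets the operating system interleave their execution:

# Two "programs" run as threads; the OS scheduler interleaves their instructions.
import threading
import time

def task(name, steps):
    for i in range(steps):
        print(name, "step", i)
        time.sleep(0.01)              # give up the CPU so the other thread is scheduled

t1 = threading.Thread(target=task, args=("word processor", 3))
t2 = threading.Thread(target=task, args=("spreadsheet", 3))
t1.start(); t2.start()                # both threads appear to run at the same time
t1.join(); t2.join()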
How the Operating System Finds Data on a Disk
A computer’s operating system can locate data on a disk because each track and each sector are labelled, and the location
of all data is kept in a special log on the disk. The labelling of tracks and sectors is called logical formatting. Different operating systems can format disks in different ways. Each formatting method configures the disk's surface in a different manner, resulting in a different file system: a logical method for managing the storage of data on a disk's surface. A
commonly used logical format performed by Windows is called the FAT file system because it relies on a standardized file
allocation table (FAT) to keep track of file locations on the disk. When a diskette is formatted with the FAT file system,
four areas are created on the disk.
• The boot sector contains a program that runs when you first start the computer. This program determines whether the disk
has the basic components that are necessary to run the operating system successfully. If the program determines that the
required files are present and the disk has a valid format, it transfers control to one of the operating system programs that
continues the process of starting up. This process is called booting, because the boot program makes the computer “ pull
itself up by its own bootstraps.” The boot sector also contains information that describes other disk characteristics, such as
the number of bytes per sector and the number of sectors per track— information that the operating system needs to access
data on the disk.
• The file allocation table (FAT) is a log that records the location of each file and the status of each sector. When you write a
file to a disk, the operating system checks the FAT to find an open area, stores the file, and then logs the file’s identity
and its location in the FAT. When a program needs to locate data on the disk, the operating system checks the FAT to see
where that data is stored. During formatting, two copies of the FAT are created; both copies are always maintained to keep
their information current.
• The root folder is the “master folder" on any disk. A folder (also called a directory) is a tool for organizing files on a disk.
Folders can contain files or other folders, so it is possible to set up a hierarchical system of folders on your computer, just
as you can have folders within other folders in a file cabinet. The topmost folder is known as the root, but may also be
called the root folder or root directory. This is the folder that holds all the information about all the other folders on the
disk. When you use the operating system to view the contents of a folder, the operating system lists specific information
about each file in the folder such as the file's name, its size, the time and date that it was created or last modified, and so
on.
• The data area is the part of the disk that remains free after the boot sector, the FAT, and the root folder have been
created. This is where data and program files are actually stored on the disk. During logical formatting, the operating
system also groups sectors together, into storage units called clusters. A cluster, therefore, is simply a group of sectors that
the OS sees as a single unit. A cluster is the smallest space an OS will allocate to a single file, and a cluster may store an
entire file or just part of a file. Cluster sizes vary, depending on the size and type of the disk, but they can range from four
sectors for diskettes to 64 sectors for some hard disks. Cluster usage is tracked in the file allocation table.
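The cluster arithmetic can be sketched as follows (a hypothetical Python example; the sector size, sectors per cluster and file size are assumed values, not figures from the text). Because a file always occupies a whole number of clusters, the last cluster is usually only partly filled:

# How many clusters a file occupies, and how much space is wasted ("slack").
import math

bytes_per_sector = 512
sectors_per_cluster = 4                                  # e.g. a 4-sector cluster
cluster_size = bytes_per_sector * sectors_per_cluster    # 2048 bytes

file_size = 5000                                         # assumed file size in bytes
clusters_needed = math.ceil(file_size / cluster_size)    # 3 clusters
slack = clusters_needed * cluster_size - file_size       # 1144 bytes left unused

print(clusters_needed, "clusters,", slack, "bytes of slack space")

Here a 5000-byte file occupies three 2048-byte clusters, leaving 1144 bytes of slack space in the last cluster.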
Different operating systems use different file systems:
 File Allocation Table (FAT). This file system, also known as FAT16, was used in MS-DOS and was the basis for the early Windows operating systems. In fact, all versions of Windows support FAT, although it is no longer the preferred file system; newer file systems offer better security and greater flexibility in managing files.
 FAT32. Introduced in Windows 95, FAT32 is an extended edition of the original FAT file system, providing better performance than FAT. It continues to be supported in Windows 2000 and Windows XP.
 New Technology File System (NTFS). Introduced with Windows NT and the basis for later operating systems, NTFS was a leap forward from FAT, offering better security and overall performance. NTFS also allowed Windows computers to use long file names (file names longer than eight characters) for the first time.
 NTFS 5. This updated version of NTFS is used in Windows 2000 and XP.
 High-Performance File System (HPFS). This was designed for use with IBM's OS/2.
Other operating systems (such as UNIX), and even some network operating systems (such as Novell NetWare), use their own file systems. Although each file system has different features and capabilities, they all perform the same basic tasks and enable a computer's disks and operating system to store and manage data efficiently.
The Purpose of Operating Systems
An operating system (OS) is a software program, but it is different from word processing programs, spreadsheets, and all the
other software programs on your computer. The OS is an example of system software: software that controls the system's hardware and that interacts with the user and application software. The operating system is the computer's master control program. The OS provides you with the tools (commands) that enable you to interact with the PC. When you issue a command, the OS translates it into code that the machine can use. The OS ensures that the results of your actions are
displayed on screen, printed, and so on. The operating system also acts as the primary controlling mechanism for the
computer's hardware. The operating system performs the following functions:
 Displays the on-screen elements with which you interact
 Loads programs (such as word processing and spreadsheet programs) into the computer's memory so that you can use them
 Coordinates how programs work with the computer's hardware and other software
 Manages the way information is stored on and retrieved from disks
Command-Line Interfaces
Some older operating systems (such as MS-DOS) and some current versions of UNIX and Linux feature a command-line
interface, which uses typewritten commands— rather than graphical objects—to execute tasks. A command-line interface
displays in character mode—using only equal-sized alphanumeric and other simple symbols. Users interact with a command-
line interface by typing strings of characters at a prompt on the screen. In DOS, the prompt usually includes the identification
for the active disk drive (a letter followed by a colon), a backslash (\), and a greater-than symbol (>), as in C:\>. As much as
people prefer to work in a GUI, a command-line interface gives you a quick way to enter commands, and even now Windows
has an optional command-line interface, called the Command Prompt. This command prompt, however, is not DOS, and is
most often used by administrators to run non-GUI programs for managing and troubleshooting Windows. In fact, any
program that can be run in Windows can be launched from here, opening its own GUI window, if necessary.
MS-DOS commands: copy, dir, debug, cls, edit, erase.
Running Programs
The operating system provides a consistent interface between application programs and the user. It is also the interface
between those programs and other computer resources such as memory, a printer, or another program. Programmers write
computer programs with built-in instructions, called system calls, that request services from the operating system. They are known as "calls" because the program has to call on the operating system to provide some information or service. For example, when you want your word processing program to retrieve a file, you use the Open dialog box to list the files in the folder. To provide the list, the program calls on the operating system. The OS goes through the same process to build a list of files whether it receives its instructions from you (via the desktop) or from an application. The difference is that when the request comes from an application, the operating system sends the results of its work back to the application rather than to the desktop. Some other services that an operating system provides to programs, in addition to listing files, include the following (a small sketch follows this list):
≫ Saving the contents of files to a disk.
≫ Reading the contents of a file from disk into memory.
≫ Sending a document to the printer and activating the printer.
≫ Providing resources that let you copy or move data from one document to another, or from one program to another.
≫ Allocating RAM among the running programs.
≫ Recognizing keystrokes or mouse clicks and displaying characters or graphics on the screen.
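In practice, application code reaches these services through library routines that issue system calls underneath. A minimal Python sketch (the file names are placeholders invented for the example):

# Each of these library calls ends up as a request (system call) to the operating system.
import os
import shutil

print(os.listdir("."))                    # ask the OS to list the files in a folder

with open("example.txt", "w") as f:       # ask the OS to create/open a file...
    f.write("saved through the OS\n")     # ...and write its contents to disk

with open("example.txt") as f:            # read the file's contents back into memory
    print(f.read())

shutil.copy("example.txt", "example_copy.txt")   # copy data from one file to another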

Working with Device Drivers


In addition to using interrupts, the operating system often provides programs for working with special devices such as
printers. These programs are called drivers because they allow the operating system and other programs to activate and use (that is, "drive") the hardware device. Most new software you buy will work with your printer, monitor, and other equipment without requiring you to install any special drivers.
Backup Utilities
For safekeeping, a backup utility can help you copy large groups of files from your hard disk to another storage medium, such
as tape or a CD-R disc. Many newer operating systems feature built-in backup utilities, but feature-rich backup software is
available from other sources. These utilities not only help you transfer files to a backup medium, they help organize the files,
update backups, and restore backups to disk in case of data loss.
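A very small backup routine in this spirit can be sketched in Python (the folder names are placeholders; a real backup utility adds scheduling, incremental updates, compression and restore features):

# Copy a folder tree to a backup location stamped with today's date.
import shutil
from datetime import date

source = "my_documents"                        # placeholder source folder
target = "backup_" + date.today().isoformat()  # e.g. backup_2024-01-01

shutil.copytree(source, target)                # recursively copies files and subfolders
print("Backed up", source, "to", target)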
Antivirus
A virus is a parasitic program that can delete or scramble files or replicate itself until the host disk is full. Computer viruses can be transmitted in numerous ways, and users should be especially vigilant when downloading files over the Internet or reusing old diskettes that may be infected. An antivirus utility can examine the contents of a disk or RAM for hidden viruses and files that may act as hosts for virus code. Effective antivirus products not only detect and remove viruses; they also help you recover data that has been lost because of a virus.
Firewall
A firewall is a network security device that monitors incoming and outgoing network traffic and permits or blocks data packets based on a set of security rules. Your ISP and most corporations employ specialized computers on their Internet connections that are dedicated to examining and blocking traffic coming from and going to the Internet. Such a computer is called a firewall, and manufacturers such as Cisco, 3COM, and others offer these products at a very high price. These firewalls also require highly trained people to manage them. If you work in a corporation where a firewall is protecting the corporate network, leave the firewall function to the experts. At home, however, you will want to be sure to use either a smaller, less-expensive hardware firewall or install a software firewall utility on any computer directly connected to the
Internet. Windows XP comes with a simple firewall that you can optionally turn on through the Properties dialog of each
network connection.
Intrusion Detection
While a firewall offers protection from predictable intrusion, intrusion detection software reveals the types of attacks a firewall is blocking, creating logs of the attempts and (depending on how you configure it) notifying you of certain types of intrusion attempts. In the competitive utility software field, intrusion detection is often added as a feature to firewall or
bundled Internet security programs.
Screen Savers
Screen savers are popular utilities, although they serve little purpose other than to hide what would otherwise be displayed on
the screen. A screen saver automatically appears when a keyboard or pointing device has not been used.
Types of Operating Systems
Operating systems can be organized into four major types: real-time, single-user/single-tasking, single-user/multitasking, and multi-user/multitasking. The following sections describe each type of OS.
Real-Time Operating Systems
A real-time operating system is a very fast, relatively small OS. Real-time OSs are often also embedded OSs, meaning they are built into the circuitry of a device and are not loaded from a disk drive. A real-time operating system is needed to run real-time applications; it may support multiple simultaneous tasks, or it may only support single-tasking. A real-time application is an application that responds to certain inputs extremely quickly, within thousandths or millionths of a second (milliseconds or microseconds, respectively). Real-time applications are needed to run medical diagnostics equipment, life-
support systems, machinery, scientific instruments, and industrial systems.
Single-User/Single-Tasking Operating Systems
An operating system that allows a single user to perform just one task at a time is a single-user/single-tasking operating system. To a user, a "task" is a function such as printing a document, writing a file to disk, editing a file, or downloading a
file from a network server. To the operating system, a task is a process, and small and simple OSs can only manage a single
task at a time. MS-DOS is one example of a single-tasking OS, and the Palm OS, used on the Palm handheld computers, is
another. Although such operating systems are limited by this characteristic, there is still a use for them, because they take up
very little space on disk or in memory when they are running and do not require a powerful and expensive computer.
MULTI TASKING OPERATING SYSTEMS
Definition – A multitasking operating system provides the interface for executing multiple program tasks by a single user at the same time on one computer system. For example, an editing task can be performed while other programs are executing concurrently; a user can, for instance, have Gmail and PowerPoint open at the same time.
Single-User/Multitasking Operating Systems
A single-user/multitasking operating system is one that allows a single user to perform two or more functions at once. It takes
a special operating system to keep two or more tasks running at once. The most commonly used personal computers usually
run such OSs, including Microsoft Windows and the Macintosh Operating System (see Figure 6A.2). The multitasking features
of these OSs have greatly increased the productivity of people in a large variety of jobs because they can accomplish more in a
shorter period of time. For instance, to an office worker, it is important to be able to send a large document to a printer and be
able to do other work on his or her computer while it is being printed. It is also helpful for many types of workers to be able to
have two or more programs open, to share the data between the two programs, and to be able to instantly switch between the
two programs. A disadvantage of a single-user/multitasking operating system is the increased size and complexity it needs to
support multitasking, while keeping the related features users have come to expect, such as a graphical user interface, and the
ability to share data between two or more open programs.
Multi-User/Multitasking Operating Systems
A multi-user/multitasking operating system is an operating system that allows multiple users to use programs that are
simultaneously running on a single network server, called a terminal server. This is not at all the same as connecting to a network server for the sake of accessing files and printers: when a computer is connected to a server to access document files to edit, the client computer performs the processing work locally. Not so with a multi-user OS, which gives each user a complete environment, called a user session, on the server. Each user's applications run within their user session on the server, separate from all other user sessions. The software that makes this possible is called a terminal client. In a multi-user/multitasking operating system environment, all or most of the computing occurs at the server (see Figure 6A.3).
Examples of multi-user OSs include UNIX, VMS(Virtual Memory-based OS), and mainframe operating systems such as
MVS(Multiple Virtual Storage). The advantage of these operating systems is that they can be managed by simply making
changes to one server, rather than to many desktop computers. They also allow the user to work with applications that
require a more powerful computer than the user needs on the desktop to run the client. A disadvantage is that if the network
connection to the server is broken, the user cannot do any work in the applications on the server.
The operating system that runs more than one task at a time is known as a multitasking operating system (MOS). MOS can
be a desktop or mobile operating system (OS).
Different tasks that run in the OS include MS Word, MS Excel, an email application, a browser, a media player, OS services, etc. End users use all these programs at the same time, but the OS handles this a little differently.
As shown in the diagram above, three tasks are running on the computer. The CPU gives 10 nanoseconds to each task, so time is shared between task 1, task 2 and task 3. If task 1 is not completed in 10 nanoseconds, then task 1 has to wait until task 2 and task 3 have been given their time. Note that one processor (CPU) can only run one process (task) at a time: the CPU gives a small amount of time to each process and then switches to another process. Some of the advantages and disadvantages of a MOS are discussed below.
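The time-sharing behaviour described above can be sketched as a simple round-robin loop (a toy Python model of our own; the task lengths are invented and the 10-unit time slice simply mirrors the description, not any real scheduler):

# Round-robin time sharing: each task gets a fixed time slice in turn until it finishes.
from collections import deque

time_slice = 10                                   # the time slice from the description
tasks = deque([("task 1", 25), ("task 2", 10), ("task 3", 15)])   # (name, work remaining)

while tasks:
    name, remaining = tasks.popleft()
    work = min(time_slice, remaining)
    remaining -= work
    print(name, "runs for", work, "units,", remaining, "left")
    if remaining > 0:
        tasks.append((name, remaining))           # not finished: wait for the next turn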
Advantages of the multitasking operating system (MOS)
Timesharing:
Time-sharing is the main concept and benefit of MOS. All tasks are given a suitable amount of time and no waiting time
occurs for the CPU.
Handle multiple users:
Multiple users running multiple programs can be best handled by MOS. All programs run smoothly without a glitch in
performance. All users of OS are given a suitable amount of time.
Protected memory:
Memory is better managed in MOS. Unauthorized applications are not given permission to overuse memory.
Efficient virtual memory:
If any program has long waiting time due to any I/O interrupt, that program is shifted to virtual memory. After completion of
the I/O operation, the program comes back to RAM. The process of virtual memory management is nicely handled in MOS.
Programs can run in the background:
OS services and other types of background services are also given some time to run in the OS. These programs are not visible to a normal computer user, but they keep running so that other programs run smoothly. Examples of background programs are firewalls, antivirus protection programs, etc.
Increase reliability:
The OS runs smoothly with multitasking, and all types of computer users are satisfied. Whether a user runs a single program or multiple programs, they do not notice any faults while using the computer.
The user can use multiple programs:
Multiple programs like MS Word, MS Excel, Photoshop, browser, games, and calculator can run at the same time. This is a
big advantage to MOS.
Best use of computer resources:
Computer resources like RAM, Processor, I/O devices, hard drive, and Gaming consoles are better managed in MOS.
Disadvantages of the multitasking operating system (MOS)
There are some drawbacks of MOS also. Some of them are discussed below:
Limitation of memory:
When a computer user opens many programs at a time, the computer becomes slow. The reason is that many programs are loaded into main memory, the CPU cannot give enough time to each program, and the response time for completing each job becomes higher. Computers with low RAM usually face this problem. One solution is to increase the RAM capacity.
Limitation of the processor:
If the processor is slow, it processes programs slowly, and managing multiple programs takes a longer time. Some heavy programs cannot run smoothly on a slow processor because they need more processing power.
CPU heat up:
With multitasking, the processor is busy all the time and the CPU heats up. To solve this problem you may have to attach a cooling system to your CPU. This typically happens when you play heavy games on your PC.
Some examples of multitasking operating systems are:
Windows XP
Windows Vista
Windows 7
Windows 8
Windows 10
Windows 2000
IBM’s OS/390
Linux
UNIX
GRAPHICAL USER INTERFACES
Most current operating systems, including all versions of Windows, the Macintosh operating system, OS/2, and some versions
of UNIX and Linux, provide a graphical user interface. Graphical user interfaces are so called because you use a mouse (or
some other pointing device) to work with graphical objects such as windows, menus, icons, buttons, and other tools. These
graphical tools all represent different types of commands; the GUI enables you to issue commands to the computer by using
visual objects instead of typing commands. This is one of the key advantages of a graphical user interface; it frees you from
memorizing and typing text commands.
Graphical user interfaces would become the standard of user-centered design in software application programming, providing
users the capability to intuitively operate computers and other electronic devices through the direct manipulation of graphical
icons such as buttons, scroll bars, windows, tabs, menus, cursors, and the mouse pointing device. Many modern graphical
user interfaces feature touchscreen and voice-command interaction capabilities.
How Does a Graphical User Interface Work?
Graphical user interface design principles conform to the model–view–controller software pattern, which separates internal
representations of information from the manner in which information is presented to the user, resulting in a platform where
users are shown which functions are possible rather than requiring the input of command codes. Users interact with
information by manipulating visual widgets, which are designed to respond in accordance with the type of data they hold and
support the actions necessary to complete the user’s task.
The appearance, or “skin,” of an operating system or application software may be redesigned at will due to the nature of
graphical user interfaces being independent from application functions. Applications typically implement their own unique
graphical user interface display elements in addition to graphical user interface elements already present on the existing
operating system. A typical graphical user interface also includes standard formats for representing graphics and text, making
it possible to share data between applications running under common graphical user interface design software.
Graphical user interface testing refers to the systematic process of generating test cases in order to evaluate the functionality
of the system and its design elements.
Graphical user interface testing tools, which are either manual or automated and typically implemented by third-party
operators, are available under a variety of licenses and are supported by a variety of platforms. Popular examples include:
Tricentis Tosca, Squish GUI Tester, Unified Functional Testing (UFT), Maveryx, Appium, and eggPlant Functional.
Graphical User Interface Examples
Sketchpad, believed to be the first graphical computer-aided design program, was developed in 1962 by Ivan Sutherland
while he was at MIT, and consisted of a light pen that enabled users to create and manipulate objects in engineering
drawings in real time with coordinated graphics.
Modern operating systems and graphical user interfaces are incorporated into nearly every interactive application, such as
ATMs, self-service checkouts, airline self-ticketing and check-in, video games, smartphones, and desktops. Some
popular, modern graphical user interface examples include Microsoft Windows, macOS, Ubuntu Unity, and GNOME
Shell for desktop environments, and Android, Apple's iOS, BlackBerry OS, Windows 10 Mobile, Palm OS-WebOS, and
Firefox OS for smartphones.
Advantages of Graphical User Interfaces
The advantage of a graphical user interface is a stark improvement in usability for the average person. The features of a graphical user interface leverage familiar metaphors, such as drag-and-drop for transferring files, and use familiar icons, such as a trash bin for deleted files, creating an environment in which computer operations are intuitive and easily mastered without any prior practice or knowledge of computing machinery or languages. Graphical user interface applications are self-descriptive, feedback is typically immediate, and visual cues encourage and steer discoverability.
Best Programming Language for Graphical User Interfaces
While there are several different visual programming languages with their own unique advantages for the development of a
graphical user interface design, C# or Java may be considered preferable options due to their ability to run GUIs
simultaneously in a browser and as a desktop application. Other options include Python, HTML5/Javascript, and C/C++.
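As a small illustration in Python's built-in Tkinter toolkit (our own sketch, not an example from the text), the program below creates a window with a label and a button widget; clicking the button fires an event callback that updates what the user sees:

# A window with a button widget; clicking the button triggers a callback (event-driven GUI).
import tkinter as tk

def on_click():
    label.config(text="Button clicked!")    # the view updates in response to the event

root = tk.Tk()
root.title("GUI sketch")
label = tk.Label(root, text="Hello, GUI")
label.pack(padx=20, pady=10)
button = tk.Button(root, text="Click me", command=on_click)
button.pack(pady=10)
root.mainloop()                             # hand control to the GUI event loop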
Difference Between Character User Interface and Graphical User Interface
Character user interface, also known as command-line user interface or non-graphical user interface, refers to the use of text
commands, managed by a command-line interpreter, in order to communicate with a computer program. Typically software
developers and system administrators rely on command-line interfaces to configure machines, manage computer files, and
access program features that are otherwise unavailable on a graphical user interface.
Character user interfaces support automation and scripting and tend to provide greater granular control and a higher level of
functionality than graphical user interfaces. While the character user interface was the primary method of operating
computers through the 1980s, most modern electronic devices are equipped with intuitive graphical user interfaces and the
average user will rarely if ever have cause to access a computer terminal.
Difference Between Web User Interface and Graphical User Interface
A web user interface, or web-based graphical user interface, refers to the interaction between a user and software running
on a web server wherein the user interface is the web browser and the web page it downloaded and rendered. Technologies
such as Flash, Java, JavaScript, and Silverlight enable interactions such as drag-and-drop, playing audio, drawing on the
screen, and access to the keyboard and mouse.
Web graphical user interfaces are platform independent, require no installation or separate software development, are easy to update and monitor because they do not depend on the user to deploy updates, provide a vibrant UI experience, and are low cost, requiring only Ethernet or WiFi connectivity.
