
Platform Technologies Module 1

This document provides an overview of platform technologies and operating systems. It discusses platform technologies as a base for developing other applications and processes, with features like abstraction, evolution, bundling, and interoperability. It then defines operating systems as software that manages computer hardware resources and acts as an intermediary between users and the computer. Some key features of operating systems mentioned include computer system organization, I/O structures, memory management, process management, protection and security. The document is intended to help students understand the basic concepts and features of platform technologies and operating systems.


Republic of the Philippines

Mountain Province State Polytechnic College


Bontoc, Mountain Province

OVERVIEW OF PLATFORM TECHNOLOGY AND


OPERATING SYSTEM

Module 1 of 4

PLATFORM TECHNOLOGIES

Brueckner B. Aswigue

Information Technology Department

1st Semester, SY 2020-2021


CP #: 0947-3621-495
[email protected]
INTRODUCTION

Information Technology may be at the cutting edge of the platform revolution, but as information makes its way out into the physical world through the Internet of Things, all physical technologies will increasingly be recognized as platforms and will be designed and operated as such. The smart grid will be a platform, the smart airport will be a platform, the smart city, car, and house will be platforms, and even the smart door handle will be a platform. The most familiar platform technology is the operating system installed on server, desktop, and mobile computers.

This module will present the different features of platform technologies, such as abstraction, evolution, bundling, and interoperability. A platform technology is an environment for building and running applications, systems, and processes; it can be viewed as a toolset for developing and operating customized and tailored services. A technology platform is also commonly known as an operating system.

The module will also expound on the different features of the operating system, such as: computer-system organization, I/O structures, direct memory access structure, computer-system architecture, operating-system structure, operating-system operations, processes, protection and security, computing environments, and open-source operating systems.

This module presents an overview of the basic components and features of the different platform technologies. It also gives an extensive overview of the features of the operating system. A PowerPoint presentation is included to give you more details and to serve as a reference.

Three hours are allotted for this module. You are expected to finish it in two weeks.

LEARNING OUTCOMES

At the end of the module, you should be able to:


1. distinguish accurately the meaning and features of platform technologies;
and,
2. determine comprehensively the definition and extensive overview of
Operating System.

PRE-TEST

The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without
asking others or referring to books.
Choose the best answer for each question and write the letter of your
choice after the number.

1. A base or infrastructure upon which other applications technologies or processes are


developed for the end-user.
a. JAVA
b. Platform Technologies
c. Microsoft Office
d. Facebook
2. Acts as an intermediary between the computer user and the computer hardware
a. Operating System
b. Application Software

c. Facebook
d. Microsoft Office
3. Read-only memory (ROM) is generally termed as
a. Software
b. Firmware
c. Hardware
d. Hard disk
4. Software-generated interrupt caused either by an error or a user request
a. Polling
b. Vectored
c. Disabled
d. Trap
5. Large storage media that the CPU can access directly
a. Main memory
b. Secondary storage
c. Hard disk
d. USB
6. 8 bits equivalent of
a. 2 bytes
b. 1 kilobyte
c. 1 byte
d. 1 word

7. Rigid metal or glass platters covered with magnetic recording material


a. Register
b. Magnetic disk
c. Magnetic tapes
d. Main memory
8. Logical extension in which CPU switches jobs so frequently that users can interact
with each job while it is running, creating interactive computing
a. Multitasking
b. I/O swapping
c. Virtual memory
d. Swapping

9. Mechanism for controlling access of process or users to resources defined by the OS


a. Privilege
b. Security
c. Escalation
d. Protection
10. New category of devices to manage web traffic among similar servers
a. Load balancers
b. Windows XP
c. Client-server
d. Kernel
11. Input/Output method in which, after I/O starts, control returns to the user program only upon I/O completion.
a. Device-status table
b. Synchronous Method
c. Asynchronous Method
d. System calls
12. Input/Output method in which, after I/O starts, control returns to the user program without waiting for I/O completion.
a. Device-status table
b. Synchronous Method
c. Asynchronous Method
d. System calls
13. Large storage media that the CPU can access directly, also known as volatile storage.
a. Main Memory
b. Secondary storage
c. Third storage

d. Hard disk
14. Extension of main memory that provides large nonvolatile storage capacity.
a. Main Memory
b. Secondary storage
c. Third storage
d. Hard disk
15. How many bytes are there in the phrase “Testing lang po”?
a. 15 bytes
b. 50 bytes
c. 1500 bytes
d. 156 bytes

LESSON 1: Introduction to Platform Technologies and an Overview of the


Operating System and Its Features

Objectives:
At the end of the lesson, you should be able to:
1. identify accurately the meaning of platform technologies;
2. discuss comprehensively the degree of abstraction, bundling,
interoperability and evolution of the platform;
3. identify accurately the meaning of OS; and,
4. discuss comprehensively the OS and its features.

Let’s Engage.
Platform technology is a technology that enables the creation of products and
processes and serves as the basis for many other technologies, such as the automobile,
big data processing, biotechnology, nanotechnology, grid computing, and ICT
(Information and Communication Technology). It establishes the long-term capabilities
of research and development institutes. It can be defined as a structural or technological
form from which various products can emerge without the expense of a new
process or technology introduction.

With the rise of information technology and the ever-increasing complexity of our
technology landscape, platforms have become the design paradigm of choice for today's
complex engineered systems. We first saw the power of the platform model in the
development of the personal computer some twenty to thirty years ago as operating
system providers built their technology as a platform for software developers to create
applications on top. But it was not until the past decade with the widespread advent of
the internet that the platform model has truly come of age as virtually every internet
company from the biggest search giants to the smallest little social media widgets has
started to define their solution as a platform.

A computer system has many resources (hardware and software) which may be
required to complete a task. The commonly required resources are input/output devices,
memory, file storage space, the CPU, etc. The operating system acts as a manager of
these resources and allocates them to specific programs and users whenever necessary
to perform a particular task. The operating system is therefore the resource manager of
the computer system; the resources it manages are the processor, memory, files, and
I/O devices. In simple terms, an operating system is the interface between the user and
the machine.

An operating system is software that manages the computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct operation of the
computer system and to prevent user programs from interfering with the proper
operation of the system.

Internally, operating systems vary greatly in their makeup, since they are
organized along many different lines. The design of a new operating
system is a major task. It is important that the goals of the system be well
defined before the design begins. These goals form the basis for choices
among various algorithms and strategies.

Figure 1. Cellphone with platform and application levels (www.systemsinnovation.io)

PLATFORM TECHNOLOGIES FEATURES


A platform is a group of technologies that are used as a base or infrastructure
upon which other applications, technologies or processes are developed for the end-
user. For example, in personal computing, a platform is the basic hardware and
operating system on which software applications can be run. Although the term is most
readily identified with information technology it, of course, applies to all type of
technology. For example, a city is another good model of a platform technology, with a
core set of underlying infrastructure services that are provided for building developers
to construct modular structures in the form of buildings on. This platform allows them
to draw upon underlying services so that they do not have to reinvent them each time
and can thus develop and deploy their buildings more rapidly.

ABSTRACTION
The key to the platform technology architecture is abstraction, as all
platform technologies involve two distinctly different levels to their design with
these different levels defined according to their degree of abstraction. Abstraction
is the quality of dealing with generic forms rather than specific events, details or
applications. In this respect, abstraction means removing the application of the
technology from the underlying processes and functions that support that
application.
The platform is an abstraction, meaning that in itself it does not have an
application. For example, you might rent a cloud platform from a provider, but in
itself this is of absolutely no use to an end-user; they cannot do anything with it.

4
Platforms are composed of generic processes that do not have specific
instantiation. The application is designed to bundle these underlying resources
and turn them into a real instance of an application that can be applied in the
real world.
In the auto industry, for example, a car platform is a shared set of common
design, engineering, and production models from which many different specific
models can be created. In this way the car companies have abstracted away from
any specific type of car to create a two-tiered system; one level being generic the
other specific to any instance of that model.
This is a central aspect of the platform model, the creation of a generic
form or set of services, on the underlying platform level, and then on the
application level these services are bundled into different configurations and
customized to the specific needs of the end-user. In such a way a finite amount
of reusable abstract building blocks can be bundled and re-bundled on the
application layer.
This use of abstraction works to remove complexity for application
developers. By moving core services down to the platform level, application
developers can simply plug into those services and build their applications on top
of them, thus greatly simplifying the complexity they encounter. We can think
of a house as a platform: once there are common protocols, IoT platforms for
houses will be built where any device, technology, or item that enters the
house can connect into the platform and become an application. The house
platform can then manage these applications, providing them with infrastructure
services and presenting them as an integrated solution to the house occupants.
This core idea of abstraction is very powerful and can be applied to our
entire technology landscape, with smaller more specific technologies sitting on
top of others that work as the platform supporting them which, in turn, may also
sit on top of others that support them. For example, smart cities will become
platforms with houses being applications that draw upon the common physical
and information resource made available - such as parking, water, electricity etc
- but also the house itself will be a platform for all of the technologies within it
delivering services to them. Each layer in the hierarchy bundles the resources
provided to it from that below and delivers those resources as a service to the
applications that sit on top of it.
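The two-level design described above can be made concrete with a short sketch. The following Python example is purely illustrative (the `Platform` and `Application` classes and the service names are invented for this module, not part of any real product): the platform level holds generic, reusable services, while the application level bundles a subset of them into a specific instance.

```python
# Hypothetical sketch of the two-level platform model: generic services
# live on the platform level; applications bundle them into instances.

class Platform:
    """Platform level: generic, reusable services with no specific application."""
    def __init__(self):
        self._services = {}

    def register(self, name, service):
        # Add a generic building block (e.g. storage, identity, messaging).
        self._services[name] = service

    def service(self, name):
        return self._services[name]


class Application:
    """Application level: a specific bundle of platform services."""
    def __init__(self, platform, required):
        # Bundle only the services this particular application needs.
        self.services = {name: platform.service(name) for name in required}

    def run(self, name, *args):
        return self.services[name](*args)


# Usage: the same platform can support many different applications.
platform = Platform()
platform.register("store", lambda key, value: f"stored {key}={value}")
platform.register("notify", lambda msg: f"sent: {msg}")

app = Application(platform, required=["notify"])
print(app.run("notify", "hello"))  # the app plugs into the platform's service
```

Note how the platform by itself does nothing for the end-user; only an application, by bundling platform services, produces a usable result.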

BUNDLING
A platform technology has been defined as a structure or technology from
which various products can emerge without the expense of a new process
introduction. This is achieved by defining a core set of building blocks and then
configuring them into different bundles depending on the context.
Effective platform technologies should work like Lego kits, where the
platform provides the elementary building blocks that are then bundled together
on the application level to meet the specific requirements of the end-user. For
example, in enterprise architecture, there is a framework called TOGAF that
defines any organization in terms of a set of building blocks that are essential to
the workings of any enterprise.
Developing an effective platform technology requires a coherent
understanding of what the core services are, and thus what basic building
blocks any application will need. This is not self-evident; it took many
centuries of building enterprises before we came up with a generic model for the
building blocks of any enterprise, as outlined in TOGAF.

Platform design goes hand in hand with a service-oriented architecture,
where developers of applications treat the building blocks as services that they
then simply string together in different ways to build their solutions. For example,
today a new business can be set up relatively quickly and easily - at least
compared to a few decades ago - because of the core platform of the internet and
the many different services - building blocks - that are available for entrepreneurs
to bundle together into new solutions. For example, within just a few weeks one
could create a new service by building a web application that draws upon services
from Twitter for user identity, Ethereum for secure transactions, Alibaba for
sourcing materials, Upwork for staffing, and so on. The fact that you don't have to build
all of these core components yourself - you are just plugging them together -
means that you can easily and quickly reconfigure them when needed.
At the heart of the platform model is a distinction between the basic
functionalities of the technology and how those functions are composed; a
platform level that deals with the "what" and an application level that deals with
the "how". The basic functions of the technology - the building blocks - are the
"what" and the way those capabilities are strung together is the "how". One could
think of a fab lab as a platform technology, the materials are the "what" or
building blocks that are made available for people to construct into objects
through the use of the machines. The way they use those machines to process
the building blocks into finished products is the "how".
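The "what"/"how" distinction can also be sketched in code. In this hypothetical Python example, the building blocks are plain functions (the "what"), and an application is just one way of stringing them together (the "how"); the same blocks can be re-bundled into a different product.

```python
# Building blocks (the "what"): small, generic functions.
def clean(text):
    return text.strip().lower()

def tokenize(text):
    return text.split()

def count(tokens):
    return len(tokens)

# Bundling (the "how"): an application is one particular composition
# of blocks; re-bundling the same blocks yields a different product.
def bundle(*steps):
    def pipeline(value):
        for step in steps:
            value = step(value)
        return value
    return pipeline

word_counter = bundle(clean, tokenize, count)  # one configuration
normalizer = bundle(clean)                     # another, from the same blocks

print(word_counter("  Platform Technologies  "))  # → 2
print(normalizer("  Platform Technologies  "))    # → platform technologies
```

Because the blocks are loosely coupled, swapping the "how" never requires rebuilding the "what" - the essence of avoiding "the expense of a new process introduction".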

INTEROPERABILITY
Platforms are open systems. Unlike traditional technologies, which are simply
designed as individual physical objects that perform a function, platforms are
designed to be interoperable with other systems, and they will likely have external
applications running on top of them, not all of which can be foreseen by the
developers of the platform. Think of an IoT platform for a house, which will have
to interoperate with many devices and technologies in the house if it is
to be successful at delivering the end service.
Previously, technology was developed largely "in house", with each company
creating its own proprietary systems, delivering them to the end-user, trying to
create lock-in, and competing with other companies that were also creating their
own systems. The industrial model of technology development was typically one
of high capital costs, long design and production cycles within closed
organizations creating proprietary technologies with limited interoperability
between the technologies of companies in the same industry.
Much of this industrial model was a product of the physical nature of the
technologies being developed which made them excludable and rivalrous in
nature. But the dematerialized nature of information technology makes it vastly
easier to duplicate and exchange information services and this has a very
profound impact on how we design and build the service systems of today. This
dematerialized nature makes them non-excludable and non-rivalrous which
creates a very different dynamic. The result is one of increasing cooperation as
most of the value is no longer inside of the organization or technology but
increasingly outside of it; the value is increasingly in the system's capacity to
interoperate with other systems.
A smartphone would not be very valuable if it could not connect to the
internet or run other people's applications. Most of the value that the end-
user gets from a smartphone is not created by the original technology
developer but by other people connecting into that platform and building
things on it, or by that system connecting to other systems - such as web pages -
with the user getting value from that interoperability and connectivity rather
than from the device itself. In such a way, interoperability becomes key, and the

platform model is designed to optimize for this fluid connectivity between the core
technology and other systems built on it or external to it.
With information technology, we can build once and deploy many times,
almost anywhere at very little cost per extra unit. Facebook can build their
software platform once and through the internet, it can be accessed and used as
a service at extremely little extra cost, per person, to them. With information
technology, the marginal cost of the extra user often goes to almost zero.
In the stand-alone, individual technology paradigm of the industrial
economy the same or similar technology was developed by many different
companies and then they competed for market share. But with service systems a
new model is emerging, that of platform technologies, where various modular
services are made available that can be then plugged into platforms to be
delivered as an integrated experience to the end-user. Interoperability and
collaboration between systems are the key ingredients, and platforms facilitate
this by allowing different technologies to plug into each other and draw upon
their services in a seamless fashion.
A platform technology architecture - being open - is optimized for user-
generated systems. A suite of new technologies, from solar cells to distributed
manufacturing and virtually all forms of information technology, is driving a
revolution in user-generated systems as capabilities get pushed out to the edges
of networks. In a previous age of centralization, when productive capabilities were
centered within large organizations and required large amounts of fixed capital,
closed organizations made sense.
But today, with the rise of the prosumer, it is becoming critical that
organizations, business processes, and all kinds of technology are able to harness
the input of the end-user if they are to continue to thrive in this world of
distributed technologies. The most successful technology providers of tomorrow
will be those who are able to harness this mass of new capacity and capabilities
on the long tail by providing the tools, know-how, methods, and connectivity to
participate, and the platform model is ideal for this. Sell a man a fish and you
make a bit of money while he feeds himself for a day; give a man a fishing rod
and he can feed himself, you can sell him ongoing services for the rod, and he
can even help you develop new innovations for it.

EVOLUTION
The world is speeding up: as the pace of change gets ever faster, the
ability to meet that fast-paced change becomes ever more important.
Adaptive capacity and agility are, and will increasingly be seen as, a
key requirement - if not the key requirement - in the coming decades. In stable and
predictable environments, technologies can be built as homogeneous systems
without the capacity to change, enabling them to be optimized for efficiency
within one environment. But as the environment and the pace of innovation
change faster and faster, this homogeneous architecture appears less viable, and
there is a need to switch to a platform model to enable fast-paced innovation at
the edges of networks.
The goal of what we are trying to achieve may stay the same - people will
always want food, clothing, housing, entertainment, etc. - but in fast-paced
environments the context keeps changing, and thus how we achieve that goal will
change, making it important to be able to bundle and unbundle building blocks
in new ways, quickly and easily.
Platform technologies can be very effective at enabling an evolutionary
process of technology development. The fact that they separate the current

application from generic processes means that as demands change old
applications can be retired and new more relevant technologies can be built
quickly.
Applications are instances of a technology; an instance is a specific
configuration of a technology for a specific application. For example, a
wheelbarrow is an instance of a technology: it has a specific shape, size, and form
which cannot be easily reconfigured. Any technology that is an application or
instance will go through a linear lifecycle because it does not have the capacity
to reconfigure or regenerate itself.
Traditionally, our technologies go through a linear lifecycle from cradle to
grave. For example, imagine building a bicycle: the first time, you simply
build a whole bicycle from start to finish. Then you have to build another one,
but this time for a child; the fact that it has different requirements means you
have to start again from scratch.
While Facebook focused on creating a robust platform that allowed
outside developers to build new applications, Myspace did everything itself.
"We tried to create every feature in the world and said, O.K., we can do it,
why should we let a third party do it?" says MySpace cofounder DeWolfe.
"We should have picked 5 to 10 key features that we totally focused on and
let other people innovate on everything else." – Business Week: The Rise &
Ignominious Fall of MySpace
Every time you have to build a new bicycle with new requirements, you start
to get a better idea of what in the development of this technology is a core feature
that remains unchanged and what changes with each different end-user
requirement. If you go on developing bicycles long enough, you will eventually
start to see a core platform emerge that remains unchanged and an application
level that changes. When we combine this feature of abstraction in platforms with
user-generated solutions, we start to get the potential for a truly evolutionary
development process.
By building a flexible modular structure where components can be easily
bundled and re-bundled, and by putting this closer to the end-user, it allows for
much faster feedback and iteration in the development process. With respect to
agility and adaptability, the key design innovation of platform technologies is in
separating what is permanent from what is contingent and temporal, so as to
build a stable core that can support the rapid reconfiguration of applications
depending on the context and thus make the system agile.
The platform model of reusable building blocks and bundling allows also
for rapid innovation on the application level. The more solid the building blocks
and the lighter the links between them the easier it is to take them apart, change
them around, and experiment. The bundling model of platforms offers the
possibility of developing more sustainable technology solutions in that it focuses
our attention on the reusability of system components. In bundling, we can
design to create a light, loosely coupled set of connections - a network - between
the parts during their composition that can be easily dismantled, giving us once
again a set of building blocks that can be potentially endlessly bundled into new
solutions.
In such a model we are essentially separating the organizational structure
from the component parts so as to avoid the linear life cycle of a homogeneous
system by being able to unbundle them quickly and easily. In a more traditional
homogenous system, the organization's structure and building blocks are more
or less one. For example, think about a house made out of blocks, with mortar
connecting them: the house will go through a linear lifecycle from cradle to grave,
where it will eventually be knocked down, because it is difficult to separate the
building blocks from how they were composed so as to recompose them.
Homogeneous systems may well be the cheapest option and optimized for
efficiency, but as adaptive capacity, resilience, and sustainability increase in
importance, a modular platform model may become greatly more effective in this
respect.

OPERATING SYSTEM – The overview


An operating system is a program that manages the computer hardware. It also
provides a basis for application programs and acts as an intermediary between the
computer user and the computer hardware. An amazing aspect of operating systems
is how varied they are in accomplishing these tasks. Mainframe operating systems are
designed primarily to optimize utilization of hardware. Personal computer (PC)
operating systems support complex games, business applications, and everything in
between. Operating systems for handheld computers are designed to provide an
environment in which a user can easily interface with the computer to execute
programs. Thus, some operating systems are designed to be convenient, others to be
efficient, and others some combination of the two.

To truly understand what operating systems are, we must first understand how
they have developed. In this chapter, after offering a general description of what
operating systems do, we trace the development of operating systems from the first
hands-on systems through multiprogrammed and time-shared systems to PCs and
handheld computers. We also discuss operating system variations, such as parallel,
real-time, and embedded systems. As we move through the various stages, we see how
the components of operating systems evolved as natural solutions to problems in early
computer systems.

What Operating Systems Do

We begin our discussion by looking at the operating system's role in the overall
computer system, which can be divided roughly into four components: the
hardware, the operating system, the application programs, and the users.

Figure 2. Components of a computer system: the hardware, the operating system,
the application programs, and the users. (Figures 2 to 20 are from Operating
System Concepts, Silberschatz, Abraham, et al., 9th Edition.)

The hardware—the central processing unit (CPU), the memory, and the
input/output (I/O) devices—provides the basic computing resources for the
system. The application programs—such as word processors, spreadsheets,
compilers, and web browsers—define the ways in which these resources are used
to solve users' computing problems. The operating system controls and
coordinates the use of the hardware among the various application programs for
the various users.

We can also look at a computer system as consisting of hardware,


software, and data.

The operating system provides the means for the proper use of these resources
in the operation of the computer system. An operating system is similar to a
government: like a government, it performs no useful function by itself. It simply
provides an environment within which other programs can do useful work.

To understand more fully the operating system's role, we next explore


operating systems from two viewpoints: that of the user and that of the system.

User View
The user's view of the computer varies according to the interface being
used. Most computer users sit in front of a PC, consisting of a monitor, keyboard,
mouse, and system unit. Such a system is designed for one user to monopolize
its resources.

The goal is to maximize the work (or play) that the user is performing. In
this case, the operating system is designed mostly for ease of use, with some
attention paid to performance and none paid to resource utilization—how
various hardware and software resources are shared. Performance is, of course,
important to the user; but the demands placed on the system by a single user
are too low to make resource utilization an issue. In some cases, a user sits at a
terminal connected to a mainframe or minicomputer. Other users are accessing
the same computer through other terminals. These users share resources and
may exchange information.

The operating system in such cases is designed to maximize resource


utilization—to assure that all available CPU time, memory, and I/O are used
efficiently and that no individual user takes more than her fair share.

In still other cases, users sit at workstations connected to networks of


other workstations and servers. These users have dedicated resources at their
disposal, but they also share resources such as networking and servers—file,
compute and print servers. Therefore, their operating system is designed to
compromise between individual usability and resource utilization.

Recently, many varieties of handheld computers have come into fashion.


These devices are mostly standalone units used singly by individual users. Some
are connected to networks, either directly by wire or (more often) through wireless
modems. Because of power and interface limitations, they perform relatively few
remote operations. Their operating systems are designed mostly for individual
usability, but performance per amount of battery life is important as well.

Some computers have little or no user view. For example, embedded


computers in home devices and automobiles may have numeric keypads and may
turn indicator lights on or off to show status, but mostly they and their operating
systems are designed to run without user intervention.

System View
From the computer's point of view, the operating system is the program
most intimately involved with the hardware. In this context, we can view an
operating system as a resource allocator. A computer system has many

resources—hardware and software—that may be required to solve a problem:
CPU time, memory space, file-storage space, I/O devices, and so on. The
operating system acts as the manager of these resources. Facing numerous and
possibly conflicting requests for resources, the operating system must decide how
to allocate them to specific programs and users so that it can operate the
computer system efficiently and fairly. As we have seen, resource allocation is
especially important where many users access the same mainframe or
minicomputer.

A slightly different view of an operating system emphasizes the need to control the various I/O devices and user programs. An operating system is a control program. A control program manages the execution of user programs to prevent errors and improper use of the computer. It is especially concerned with the operation and control of I/O devices.

Defining Operating Systems

We have looked at the operating system's role from the views of the user
and of the system. How, though, can we define what an operating system is? In
general, we have no completely adequate definition of an operating system.
Operating systems exist because they offer a reasonable way to solve the problem
of creating a usable computing system. The fundamental goal of computer
systems is to execute user programs and to make solving user problems easier.
Toward this goal, computer hardware is constructed. Since bare hardware alone
is not particularly easy to use, application programs are developed. These
programs require certain common operations, such as those controlling the I/O
devices. The common functions of controlling and allocating resources are then
brought together into one piece of software: the operating system.

In addition, we have no universally accepted definition of what is part of the operating system. A simple viewpoint is that it includes everything a vendor ships when you order "the operating system." The features included, however, vary greatly across systems. Some systems take up less than 1 megabyte of space and lack even a full-screen editor, whereas others require gigabytes of space and are entirely based on graphical windowing systems. (A kilobyte, or KB, is 1,024 bytes; a megabyte, or MB, is 1,024² bytes; and a gigabyte, or GB, is 1,024³ bytes. Computer manufacturers often round off these numbers and say that a megabyte is 1 million bytes and a gigabyte is 1 billion bytes.)

A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being systems programs and application programs.

This last definition is the one that we generally follow. The matter of what
constitutes an operating system has become increasingly important. In 1998, the
United States Department of Justice filed suit against Microsoft, in essence
claiming that Microsoft included too much functionality in its operating systems
and thus prevented application vendors from competing. For example, a web
browser was an integral part of the operating system. As a result, Microsoft was
found guilty of using its operating system monopoly to limit competition.

System Goals
It is easier to define an operating system by what it does than by what it
is, but even this can be tricky. The primary goal of some operating systems is
convenience for the user. Operating systems exist because computing with them
is supposedly easier than computing without them. As we have seen, this view is
particularly clear when you look at operating systems for small PCs. The primary
goal of other operating systems is efficient operation of the computer system. This
is the case for large, shared, multiuser systems. These systems are expensive, so
it is desirable to make them as efficient as possible. These two goals—convenience
and efficiency—are sometimes contradictory. In the past, efficiency was often more important than convenience (Section 1.2.1). Thus, much of operating-system theory concentrates on optimal use of computing resources.

Operating systems have also evolved over time in ways that have affected
system goals. For example, UNIX started with a keyboard and printer as its
interface, limiting its convenience for users. Over time, hardware changed, and
UNIX was ported to new hardware with more user-friendly interfaces. Many
graphic user interfaces (GUIs) were added, allowing UNIX to be more convenient
to use while still concentrating on efficiency.

Designing any operating system is a complex task. Designers face many tradeoffs, and many people are involved not only in bringing the operating system to fruition but also in constantly revising and updating it. How well any given operating system meets its design goals is open to debate and involves subjective judgments on the part of different users.

To examine more closely what operating systems are and what they do, we
next consider how they have developed over the past 50 years. By tracing that
evolution, we can identify the common elements of operating systems and see
how and why these systems have developed as they have.

Operating systems and computer architecture have influenced each other a great deal. To facilitate the use of the hardware, researchers developed operating systems. Users of the operating systems then proposed changes in hardware design to simplify them. In this short historical review, notice how identification of operating system problems led to the introduction of new hardware features.

What is an operating system? Hard to define precisely, because operating systems arose historically as people needed to solve problems associated with using computers.

Much of operating system history driven by relative cost factors of hardware and people. Hardware started out fantastically expensive relative to people and the relative cost has been decreasing ever since. Relative costs drive the goals of the operating system.

o In the beginning: Expensive Hardware, Cheap People
▪ Goal: maximize hardware utilization.
o Now: Cheap Hardware, Expensive People
▪ Goal: make it easy for people to use the computer.

In the early days of computer use, computers were huge machines that were expensive to buy, run and maintain. The computer was used in single-user, interactive mode. Programmers interacted with the machine at a very low level - flicking console switches, dumping cards into the card reader, etc. The interface was basically the raw hardware.

o Problem: Code to manipulate external I/O devices is very complex and is a major source of programming difficulty.
o Solution: Build a subroutine library (device drivers) to manage the interaction with the I/O devices. The library is loaded into the top of memory and stays there. This is the first example of something that would grow into an operating system.
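The layering idea behind such a device-driver library can be sketched in a few lines. This is a hypothetical illustration (the device, its opcode, and all names are invented for the example, not any real driver interface): callers use a clean `read_card()` call and never touch the raw command protocol.

```python
class FakeCardReader:
    """Stand-in for the raw hardware, invented for this illustration."""
    def __init__(self, cards):
        self.cards = list(cards)
        self.buffer = None

    def send_command(self, opcode):
        # The 'device' understands only raw opcodes; 0x01 means 'read next card'.
        if opcode == 0x01 and self.cards:
            self.buffer = self.cards.pop(0)

    def fetch_buffer(self):
        return self.buffer

class CardReaderDriver:
    """The subroutine-library layer: callers never see opcodes or buffers."""
    def __init__(self, device):
        self.device = device

    def read_card(self):
        self.device.send_command(0x01)
        return self.device.fetch_buffer()

reader = CardReaderDriver(FakeCardReader(["JOB1", "JOB2"]))
print(reader.read_card())   # JOB1
```

Every program that links against the library gets the same tested I/O code, which is exactly the convenience that made such libraries the seed of an operating system.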

Because the machine is so expensive, it is important to keep it busy.
• Problem: computer idles while programmer sets things up. Poor utilization
of huge investment.
• Solution: Hire a specialized person to do setup. Faster than programmer,
but still a lot slower than the machine.

• Solution: Build a batch monitor. Store jobs on a disk (spooling), have
computer read them in one at a time and execute them. Big change in computer
usage: debugging now done offline from print outs and memory dumps. No
more instant feedback.
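The batch-monitor idea reduces to a queue of spooled jobs executed one after another, with output collected for offline inspection. A minimal sketch (job names and the lambda "programs" are invented for illustration):

```python
from collections import deque

def run_batch(jobs):
    """Batch-monitor sketch: spooled jobs are read in one at a time and
    run to completion, with no user interaction."""
    spool = deque(jobs)               # jobs 'spooled' in arrival order
    log = []
    while spool:
        name, work = spool.popleft()
        log.append(f"{name}: {work()}")   # run the job; collect its 'printout'
    return log

print(run_batch([("job1", lambda: 2 + 2), ("job2", lambda: "done")]))
```

The returned log plays the role of the printouts and memory dumps the text mentions: the programmer sees results only after the whole batch has run.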

• Problem: At any given time, job is actively using either the CPU or an I/O
device, and the rest of the machine is idle and therefore unutilized.
• Solution: Allow the job to overlap computation and I/O. Buffering and
interrupt handling added to subroutine library.

• Problem: one job can't keep both CPU and I/O devices busy. (Have
compute-bound jobs that tend to use only the CPU and I/O-bound jobs that
tend to use only the I/O devices.) Get poor utilization either of CPU or I/O
devices.
• Solution: multiprogramming - several jobs share system. Dynamically
switch from one job to another when the running job does I/O. Big issue:
protection. Don't want one job to affect the results of another. Memory
protection and relocation added to hardware, OS must manage new hardware
functionality. OS starts to become a significant software system. OS also starts
to take up significant resources on its own.
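The core multiprogramming rule above, switch to another job whenever the running job does I/O, can be simulated in a few lines. This is an illustrative toy (jobs reduced to lists of "cpu"/"io" steps, no real protection or relocation modeled):

```python
def multiprogram(jobs):
    """Multiprogramming sketch: run a job until it issues I/O, then
    give the CPU to the next ready job."""
    trace = []
    ready = [(name, list(steps)) for name, steps in jobs.items()]
    while ready:
        name, steps = ready.pop(0)
        while steps and steps[0] == "cpu":       # run until the job does I/O
            trace.append((name, steps.pop(0)))
        if steps:
            trace.append((name, steps.pop(0)))   # record the I/O step and
            if steps:
                ready.append((name, steps))      # requeue the job for later
    return trace

print(multiprogram({"A": ["cpu", "io", "cpu"], "B": ["cpu", "cpu"]}))
# [('A', 'cpu'), ('A', 'io'), ('B', 'cpu'), ('B', 'cpu'), ('A', 'cpu')]
```

Note how B's CPU bursts fill the time A spends on I/O, which is exactly the utilization gain that motivated multiprogramming.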

• Phase shift: Computers become much cheaper. People costs become significant.
• Issue: It becomes important to make computers easier to use and to improve
the productivity of the people. One big productivity sink: having to wait for
batch output (but is this really true?). So, it is important to run interactively.
But computers are still so expensive that you can't buy one for every person.
Solution: interactive timesharing.

• Problem: Old batch schedulers were designed to run a job for as long as
it was utilizing the CPU effectively (in practice, until it tried to do some I/O).
But now, people need reasonable response time from the computer.
• Solution: Preemptive scheduling.
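Preemptive scheduling in its simplest round-robin form can be sketched as follows: each job receives at most one time quantum before the CPU is taken away and handed to the next job, which is what gives interactive users reasonable response time. The job names and time units are invented for the example:

```python
from collections import deque

def round_robin(jobs, quantum):
    """Preemptive-scheduling sketch: each job runs for at most `quantum`
    time units before being preempted and requeued."""
    ready = deque(jobs.items())       # (name, remaining time units)
    order = []
    while ready:
        name, remaining = ready.popleft()
        slice_ = min(quantum, remaining)
        order.append((name, slice_))
        if remaining > slice_:
            ready.append((name, remaining - slice_))   # preempted: back of queue
    return order

print(round_robin({"A": 5, "B": 3}, 2))
# [('A', 2), ('B', 2), ('A', 2), ('B', 1), ('A', 1)]
```

Contrast this with the old batch scheduler, which would have run A for all 5 units before B got any CPU at all.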

• Problem: People need to have their data and programs around while they
use the computer.
• Solution: Add file systems for quick access to data. Computer becomes a
repository for data, and people don't have to use card decks or tapes to store
their data.

• Problem: The boss logs in and gets terrible response time because the
machine is overloaded.
• Solution: Prioritized scheduling. The boss gets more of the machine than
the peons. But, CPU scheduling is just an example of resource allocation
problems. The timeshared machine was full of limited resources (CPU time,
disk space, physical memory space, etc.) and it became the responsibility of
the OS to mediate the allocation of the resources. So, developed things like
disk and physical memory quotas, etc.
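Prioritized scheduling is, at bottom, a priority queue over resource requests. A minimal sketch using a heap (the names and priority numbers are invented; lower number means higher priority here, one common convention):

```python
import heapq

def priority_schedule(requests):
    """Prioritized-scheduling sketch: serve requests in priority order
    (lower number = higher priority, so the boss goes first)."""
    heap = [(priority, name) for name, priority in requests]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(priority_schedule([("peon1", 5), ("boss", 1), ("peon2", 5)]))
# ['boss', 'peon1', 'peon2']
```

The same mechanism generalizes to any limited resource the OS mediates: disk bandwidth, memory quotas, and so on.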

Overall, time sharing was a success. However, it was a limited success. In practical terms, every timeshared computer became overloaded and the response time dropped to annoying or unacceptable levels. Hard-core hackers compensated by working at night, and we developed a generation of pasty-looking, unhealthy insomniacs addicted to caffeine.

Computers become even cheaper. It becomes practical to give one computer to each user. Initial cost is very important in market. Minimal hardware (no networking or hard disk, very slow microprocessors and almost no memory) shipped with minimal OS (MS-DOS). Protection, security less of an issue. OS resource consumption becomes a big issue (computer only has 640K of memory). OS back to a shared subroutine library.

Hardware becomes cheaper and users more sophisticated. People need to share
data and information with other people. Computers become more information transfer,
manipulation and storage devices rather than machines that perform arithmetic
operations. Networking becomes very important, and as sharing becomes an important
part of the experience so does security. Operating systems become more sophisticated.
Start putting back features present in the old time sharing systems (OS/2, Windows
NT, even Unix).

Rise of network. Internet is a huge popular phenomenon and drives new ways of
thinking about computing. Operating system is no longer interface to the lower level
machine - people structure systems to contain layers of middleware. So, a Java API or
something similar may be the primary thing people need, not a set of system calls. In
fact, what the operating system is may become irrelevant as long as it supports the right
set of middleware.

Network computer. Concept of a box that gets all of its resources over the
network. No local file system, just network interfaces to acquire all outside data. So have
a slimmer version of OS.

In the future, computers will become physically small and portable. Operating systems will have to deal with issues like disconnected operation and mobility. People will also start using information with a pseudo-real-time component like voice and video. Operating systems will have to adjust to deliver acceptable performance for these new forms of data.

What does a modern operating system do?

Provides Abstractions. Hardware has low-level physical resources with complicated, idiosyncratic interfaces. OS provides abstractions that present clean interfaces. Goal: make computer easier to use. Examples: Processes, Unbounded Memory, Files, Synchronization and Communication Mechanisms.

Provides Standard Interface. Goal: portability. Unix runs on many very different computer systems. To a first approximation can port programs across systems with little effort.

Mediates Resource Usage. Goal: allow multiple users to share resources fairly, efficiently, safely and securely. Examples:
o Multiple processes share one processor. (preemptable resource)
o Multiple programs share one physical memory (preemptable
resource).
o Multiple users and files share one disk. (non-preemptable resource)
o Multiple programs share a given amount of disk and network
bandwidth (preemptable resource).

Consumes Resources. Solaris takes up about 8 MB of physical memory.

Abstractions often work well - for example, timesharing, virtual memory and hierarchical and networked file systems. But, may break down if stressed. Timesharing gives poor performance if too many users run compute-intensive jobs. Virtual memory breaks down if working set is too large (thrashing), or if there are too many large processes (machine runs out of swap space). Abstractions often fail for performance reasons.

Abstractions also fail because they prevent the programmer from controlling the machine at the desired level. Example: database systems often want to control movement of information between disk and physical memory, and the paging system can get in the way. More recently, existing OS schedulers fail to adequately support multimedia and parallel processing needs, causing poor performance.

Concurrency and asynchrony make operating systems very complicated
pieces of software. Operating systems are fundamentally non-deterministic and
event driven. Can be difficult to construct (hundreds of person-years of effort)
and impossible to completely debug. Examples of concurrency and asynchrony:
o I/O devices run concurrently with CPU, interrupting CPU when done.
o On a multiprocessor multiple user processes execute in parallel.
o Multiple workstations execute concurrently and communicate by
sending messages over a network. Protocol processing takes place
asynchronously.

Operating systems are so large that no one person understands the whole system. An operating system outlives any of its original builders.

The major problem facing computer science today is how to build large,
reliable software systems. Operating systems are one of very few examples of
existing large software systems, and by studying operating systems we may learn
lessons applicable to the construction of larger systems.

FEATURES OF OPERATING SYSTEM


Computer-System Organization
Before we can explore the details of how computer systems operate, we
need a general knowledge of the structure of a computer system. In this section,
we look at several parts of this structure to round out our background knowledge.

Computer-System Operation
A modern general-purpose computer system consists of one or more CPUs
and a number of device controllers connected through a common bus that
provides access to shared memory. Each device controller is in charge of a specific
type of device (for example, disk drives, audio devices, and video displays). The
CPU and the device controllers can execute concurrently, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller is
provided whose function is to synchronize access to the memory.

For a computer to start running—for instance, when it is powered up or rebooted—it needs to have an initial program to run. This initial program, or bootstrap program, tends to be simple. Typically, it is stored in read-only memory (ROM) or electrically erasable programmable read-only memory (EEPROM), known by the general term firmware, within the computer hardware. It initializes all aspects of the system, from CPU registers to device controllers to memory contents.

• One or more CPUs, device controllers connect through common bus providing access to shared memory
• Concurrent execution of CPUs and devices competing for memory cycles
• Computer-System Operation

Figure 3. A modern computer system.

• I/O devices and the CPU can execute concurrently.
• Each device controller is in charge of a particular device type.
• Each device controller has a local buffer.
• CPU moves data from/to main memory to/from local buffers
• I/O is from the device to local buffer of controller.
• Device controller informs CPU that it has finished its operation by
causing an interrupt.

Common Functions of Interrupts

• Interrupt transfers control to the interrupt service routine, generally through the interrupt vector, which contains the addresses of all the service routines.
• Interrupt architecture must save the address of the interrupted
instruction.
• Incoming interrupts are disabled while another interrupt is being
processed to prevent a lost interrupt.
• A trap is a software-generated interrupt caused either by an error or a
user request.
• An operating system is interrupt driven.

Interrupt Handling
• The operating system preserves the state of the CPU by storing registers
and the program counter.
• Determines which type of interrupt has occurred:
o polling
o vectored interrupt system
• Separate segments of code determine what action should be taken for
each type of interrupt
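The vectored approach above can be sketched as a table mapping interrupt numbers to service routines, with the interrupted instruction's address saved and restored around the dispatch. Everything here (the interrupt numbers, routines, and addresses) is invented for illustration:

```python
def make_interrupt_system():
    """Vectored-interrupt sketch: the interrupt number indexes a table
    of service routines (Python functions stand in for routine addresses)."""
    vector = {
        0: lambda: "timer: update clock",
        1: lambda: "disk: I/O complete",
        2: lambda: "keyboard: read scancode",
    }
    saved_pc = []

    def interrupt(num, pc):
        saved_pc.append(pc)            # save address of interrupted instruction
        result = vector[num]()         # 'jump' through the vector
        return result, saved_pc.pop()  # restore saved address and resume
    return interrupt

interrupt = make_interrupt_system()
print(interrupt(1, 0x400))   # ('disk: I/O complete', 1024)
```

A polling design would instead ask every device in turn "did you interrupt?", which is slower when many devices are attached; vectoring jumps straight to the right routine.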

Figure 4. Interrupt Timeline

I/O Structure
• After I/O starts, control returns to user program only upon I/O
completion.
o Wait instruction idles the CPU until the next interrupt
o Wait loop (contention for memory access).
o At most one I/O request is outstanding at a time, no simultaneous
I/O processing.
• After I/O starts, control returns to user program without waiting for I/O
completion.
o System call – request to the operating system to allow user to wait
for I/O completion.
o Device-status table contains entry for each I/O device indicating
its type, address, and state.
o Operating system indexes into I/O device table to determine device
status and to modify table entry to include interrupt.

Two I/O Methods

Figure 5. (a) Synchronous Method while the (b) Asynchronous Method.

This situation will occur, in general, as the result of a user process requesting I/O. Once the I/O is started, two courses of action are possible. In the simplest case, the I/O is started; then, at I/O completion, control is returned to the user process. This case is known as synchronous I/O. The other possibility, called asynchronous I/O, returns control to the user program without waiting for the I/O to complete. Figure 5 shows both methods; with asynchronous I/O, the I/O can continue while other system operations occur.
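The difference between the two methods can be sketched with a thread standing in for a slow device (a toy illustration; `device_read` and the sleep are invented, and real synchronous I/O blocks inside the kernel rather than via `join`):

```python
import threading
import time

def device_read(result):
    time.sleep(0.01)          # stand-in for a slow device operation
    result.append("data")

# Synchronous: start the I/O and wait; control returns only at completion.
sync_result = []
t = threading.Thread(target=device_read, args=(sync_result,))
t.start()
t.join()                      # nothing else happens until the I/O finishes
assert sync_result == ["data"]

# Asynchronous: start the I/O, keep computing, wait only when needed.
async_result = []
t = threading.Thread(target=device_read, args=(async_result,))
t.start()
overlap = sum(range(1000))    # useful work overlapped with the I/O
t.join()                      # the later 'wait for completion' request
assert async_result == ["data"]
```

The `overlap` computation is the whole point of the asynchronous method: the CPU does useful work during the device's wait time.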

Whichever approach is used, waiting for I/O completion is accomplished in one of two ways. Some computers have a special wait instruction that idles the CPU until the next interrupt. Machines that do not have such an instruction may have a wait loop:

Loop: jmp Loop

This tight loop simply continues until an interrupt occurs, transferring control to
another part of the operating system. Such a loop might also need to poll any I/O
devices that do not support the interrupt structure but that instead simply set a
flag in one of their registers and expect the operating system to notice that flag.

A better alternative is to start the I/O and then continue processing other
operating system or user program code—the asynchronous approach. A system
call is then needed to allow the user program to wait for I/O completion, if
desired. If no user programs are ready to run, and the operating system has no
other work to do, we still require the wait instruction or idle loop, as before. We
also need to be able to keep track of many I/O requests at the same time. For
this purpose, the operating system uses a table containing an entry for each I/O
device: the device-status table, as shown in Figure 6. Each table entry indicates
the device's type, address, and state (not functioning, idle, or busy). If the device
is busy with a request, the type of request and other parameters will be stored in
the table entry for that device. Since it is possible for other processes to issue
requests to the same device, the operating system will also maintain a wait
queue—a list of waiting requests—for each I/O device.
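The device-status table described above can be sketched as a dictionary with one entry per device holding its type, state, current request, and wait queue. The device names and requests are invented for the example:

```python
def make_device_table():
    """Sketch of a device-status table: one entry per I/O device."""
    return {
        "disk0":   {"type": "disk",    "state": "idle", "queue": []},
        "printer": {"type": "printer", "state": "idle", "queue": []},
    }

def request_io(table, dev, req):
    entry = table[dev]
    if entry["state"] == "idle":
        entry["state"] = "busy"       # device free: start the request now
        entry["current"] = req
    else:
        entry["queue"].append(req)    # device busy: wait in its queue

def interrupt_done(table, dev):
    """Device interrupted: finish the current request, start the next."""
    entry = table[dev]
    done = entry.pop("current")
    if entry["queue"]:
        entry["current"] = entry["queue"].pop(0)
    else:
        entry["state"] = "idle"
    return done

table = make_device_table()
request_io(table, "disk0", "read sector 7")
request_io(table, "disk0", "write sector 9")
print(table["disk0"]["state"], table["disk0"]["queue"])
```

On each interrupt, `interrupt_done` mirrors what the text describes: the OS indexes into the table, records the completion, and dispatches the next queued request if one is waiting.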

An I/O device interrupts when it needs service. When an interrupt occurs, the operating system first determines which I/O device caused the interrupt. It
then indexes into the I/O device table to determine the status of that device and
modifies the table entry to reflect the occurrence of the interrupt. For most
devices, an interrupt signals completion of an I/O request. If there are additional

requests waiting in the queue for this device, the operating system starts
processing the next request.

Finally, control is returned from the I/O interrupt. If a process was waiting
for this request to complete (as recorded in the device-status table), we can now
return control to it. Otherwise, we return to whatever we were doing before the
I/O interrupt: to the execution of the user program or to the wait loop. In a time-
sharing system, the operating system could switch to another ready-to-run
process.

Figure 6. Device – Status Table

Direct Memory Access Structure
o Used for high-speed I/O devices able to transmit information at close
to memory speeds.
o Device controller transfers blocks of data from buffer storage directly
to main memory without CPU intervention.
o Only one interrupt is generated per block, rather than the one
interrupt per byte.

Storage Structure
o Main memory – only large storage media that the CPU can access
directly.
o Secondary storage – extension of main memory that provides large
nonvolatile storage capacity.

Storage Definitions and Notation Review

The basic unit of computer storage is the bit. A bit can contain one of two
values, 0 and 1. All other storage in a computer is based on collections of bits.
Given enough bits, it is amazing how many things a computer can represent:
numbers, letters, images, movies, sounds, documents, and programs, to name a
few. A byte is 8 bits, and on most computers it is the smallest convenient chunk
of storage. For example, most computers don’t have an instruction to move a bit
but do have one to move a byte. A less common term is word, which is a given
computer architecture’s native unit of data. A word is made up of one or more
bytes. For example, a computer that has 64-bit registers and 64-bit memory
addressing typically has 64-bit (8-byte) words. A computer executes many
operations in its native word size rather than a byte at a time.
Computer storage, along with most computer throughput, is generally
measured and manipulated in bytes and collections of bytes.
▪ a bit holds a single binary value, 0 or 1
▪ 8 bits is 1 byte (one character)
▪ a kilobyte, or KB, is 1,024 bytes (approx. 1,000 bytes)
▪ a megabyte, or MB, is 1,024² bytes (approx. 1,000 KB)
▪ a gigabyte, or GB, is 1,024³ bytes (approx. 1,000 MB)
▪ a terabyte, or TB, is 1,024⁴ bytes (approx. 1,000 GB)
▪ a petabyte, or PB, is 1,024⁵ bytes (approx. 1,000 TB)

Computer manufacturers often round off these numbers and say that a
megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking
measurements are an exception to this general rule; they are given in bits
(because networks move data a bit at a time).
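The binary units above are simple powers of 2, which makes the gap between the "binary" and the rounded "manufacturer" figures easy to compute:

```python
# Binary storage units: each step up is a factor of 1,024 (2**10).
KB = 1024
MB = 1024 ** 2
GB = 1024 ** 3
TB = 1024 ** 4

print(MB)           # 1048576
print(GB - 10**9)   # 73741824 bytes: the gap between a binary and a
                    # 'marketing' gigabyte that rounding hides
```

That 73 MB-per-GB discrepancy is why a disk sold as "500 GB" reports noticeably less capacity when an operating system measures it in binary units.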
• Magnetic disks – rigid metal or glass platters covered with magnetic recording
material
o Disk surface is logically divided into tracks, which are subdivided into
sectors.
o The disk controller determines the logical interaction between the device
and the computer.

• Each storage system provides the basic functions of storing a datum and of
holding that datum until it is retrieved at a later time. The main differences among
the various storage systems lie in speed, cost, size, and volatility.

• Figure 7 shows how the wide variety of storage systems in a computer system can be
organized in a hierarchy according to speed and cost. The higher levels are expensive,
but they are fast. As we move down the hierarchy, the cost per bit generally
decreases, whereas the access time generally increases. This trade-off is reasonable;
if a given storage system were both faster and less expensive than another—other
properties being the same—then there would be no reason to use the slower, more
expensive memory. In fact, many early storage devices, including paper tape and
core memories, are relegated to museums now that magnetic tape and semiconductor
memory have become faster and cheaper. The top four levels of memory in Figure 7
may be constructed using semiconductor memory.

Figure 7. Storage – Device Hierarchy

In addition to differing in speed and cost, the various storage systems are
either volatile or nonvolatile. As mentioned earlier, volatile storage loses its contents
when the power to the device is removed. In the absence of expensive battery and
generator backup systems, data must be written to nonvolatile storage for
safekeeping. In the hierarchy shown in Figure 7, the storage systems above the

electronic disk are volatile, whereas those below are nonvolatile. An electronic disk
can be designed to be either volatile or nonvolatile.

During normal operation, the electronic disk stores data in a large DRAM
array, which is volatile. But many electronic-disk devices contain a hidden magnetic
hard disk and a battery for backup power. If external power is interrupted, the
electronic-disk controller copies the data from RAM to the magnetic disk. When
external power is restored, the controller copies the data back into the RAM.

Another form of electronic disk is flash memory, which is popular in cameras and personal digital assistants (PDAs), in robots, and increasingly as removable storage on general-purpose computers. Flash memory is slower than DRAM but
storage on general-purpose computers. Flash memory is slower than DRAM but
needs no power to retain its contents. Another form of nonvolatile storage is NVRAM,
which is DRAM with battery backup power. This memory can be as fast as DRAM
but has a limited duration in which it is nonvolatile.

The design of a complete memory system must balance all the factors just discussed:
It must use only as much expensive memory as necessary while providing as much
inexpensive, nonvolatile memory as possible. Caches can be installed to improve
performance where a large access-time or transfer-rate disparity exists between two
components.

Caching
• Caching – copying information into faster storage system; main memory
can be viewed as a last cache for secondary storage.
• Important principle, performed at many levels in a computer (in
hardware, operating system, software)
• Information in use copied from slower to faster storage temporarily
• Faster storage (cache) checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
• Cache smaller than storage being cached
o Cache management important design problem
o Cache size and replacement policy
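The check-fast-storage-first principle and the replacement-policy design problem in the bullets above can both be sketched with a small least-recently-used (LRU) cache. LRU is one common policy chosen here for illustration; the keys and backing store are invented:

```python
from collections import OrderedDict

class LRUCache:
    """Caching sketch: check the fast cache first; on a miss, copy from
    the slower backing store, evicting the least recently used entry
    when the cache is full."""
    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.store = backing_store        # the slower, larger storage
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def read(self, key):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)        # mark as recently used
        else:
            self.misses += 1
            self.cache[key] = self.store[key]  # copy into faster storage
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False) # evict the LRU entry
        return self.cache[key]

cache = LRUCache(2, {"a": 1, "b": 2, "c": 3})
for key in ("a", "b", "a", "c"):
    cache.read(key)
print(sorted(cache.cache))   # ['a', 'c']  ('b' was least recently used)
```

Since the cache is smaller than the storage being cached, the replacement policy decides which copies survive; that choice is the "important design problem" the bullets refer to.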

Direct Memory Access Structure
• This form of interrupt-driven I/O is fine for moving small amounts of
data but can produce high overhead when used for bulk data movement
such as disk I/O. To solve this problem, direct memory access (DMA) is
used. After setting up buffers, pointers, and counters for the I/O device,
the device controller transfers an entire block of data directly to or from
its own buffer storage to memory, with no intervention by the CPU. Only
one interrupt is generated per block, to tell the device driver that the
operation has completed, rather than the one interrupt per byte
generated for low-speed devices. While the device controller is performing
these operations, the CPU is available to accomplish other work.
• Some high-end systems use switch rather than bus architecture. On
these systems, multiple components can talk to other components
concurrently, rather than competing for cycles on a shared bus. In this
case, DMA is even more effective. Figure 8 shows the interplay of all
components of a computer system.
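The saving that DMA buys can be made concrete with a little arithmetic: one interrupt per byte for programmed I/O versus one per block for DMA. The transfer and block sizes below are illustrative choices, not fixed hardware values:

```python
def interrupts_needed(transfer_bytes, block_size, use_dma):
    """Illustrative count: one interrupt per byte without DMA,
    one interrupt per block with DMA."""
    if use_dma:
        return -(-transfer_bytes // block_size)   # ceiling division
    return transfer_bytes

print(interrupts_needed(64 * 1024, 4096, use_dma=False))  # 65536
print(interrupts_needed(64 * 1024, 4096, use_dma=True))   # 16
```

For a 64 KB transfer with 4 KB blocks, DMA cuts the interrupt count from 65,536 to 16, and the CPU is free to do other work during the transfer itself.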

How a Modern Computer Works

Figure 8. A von Neumann architecture

Computer-System Architecture
▪ Most systems use a single general-purpose processor
• Most systems have special-purpose processors as well
▪ Multiprocessor systems growing in use and importance
• Also known as parallel systems, tightly-coupled systems

Advantages include:
▪ Increased throughput. By increasing the number of processors, we expect to
get more work done in less time. The speed-up ratio with N processors is not
N, however; rather, it is less than N. When multiple processors cooperate on
a task, a certain amount of overhead is incurred in keeping all the parts
working correctly. This overhead, plus contention for shared resources, lowers
the expected gain from additional processors. Similarly, N programmers
working closely together do not produce N times the amount of work a single
programmer would produce.
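The claim that the speed-up with N processors is less than N can be made concrete with a toy model in which each additional processor loses a fixed fraction of its contribution to coordination overhead. This is a simplified assumption invented for illustration, not a formal performance law:

```python
def effective_processors(n, overhead_fraction):
    """Toy model: each processor beyond the first costs every processor
    `overhead_fraction` of its capacity in coordination overhead."""
    effective = n * (1 - overhead_fraction * (n - 1))
    return max(effective, 1.0)   # never worse than one lone processor

print(round(effective_processors(4, 0.05), 2))   # 3.4, not 4
```

Under this model, 4 processors with 5% pairwise overhead behave like about 3.4 processors, and piling on more processors eventually yields diminishing returns, which matches the text's point about contention and coordination costs.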

▪ Economy of scale. Multiprocessor systems can cost less than equivalent multiple single-processor systems, because they can share peripherals, mass storage, and power supplies. If several programs operate on the same set of data, it is cheaper to store those data on one disk and to have all the processors share them than to have many computers with local disks and many copies of the data.

▪ Increased reliability. If functions can be distributed properly among several processors, then the failure of one processor will not halt the system, only slow it down. If we have ten processors and one fails, then each of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the entire system runs only 10 percent slower, rather than failing altogether.

o Increased reliability of a computer system is crucial in many applications. The ability to continue providing service proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond graceful degradation and are called fault tolerant, because they can suffer a failure of any single component and still continue operation. Fault tolerance requires a mechanism to allow the failure to be detected, diagnosed, and, if possible, corrected.

• Two types:
o Asymmetric Multiprocessing – each processor is assigned a specific task.
o Symmetric Multiprocessing – each processor performs all tasks

Figure 9. Symmetric Multiprocessing Architecture

The difference between symmetric and asymmetric multiprocessing may result from either hardware or software. Special hardware can differentiate the multiple processors, or the software can be written to allow only one boss and multiple workers. For instance, Sun Microsystems' operating system SunOS Version 4 provided asymmetric multiprocessing, whereas Version 5 (Solaris) is symmetric on the same hardware.

A Dual-Core Design
A recent trend in CPU design is to include multiple computing cores on a single
chip. Such multiprocessor systems are termed multicore. They can be more efficient
than multiple chips with single cores because on-chip communication is faster than
between-chip communication. In addition, one chip with multiple cores uses
significantly less power than multiple single-core chips.

It is important to note that while multicore systems are multiprocessor systems, not all multiprocessor systems are multicore. In our coverage of multiprocessor systems throughout this text, unless we state otherwise, we generally use the more contemporary term multicore, which excludes some multiprocessor systems, such as systems containing multiple separate chips and chassis containing multiple separate systems.

Figure 10 shows a dual-core design with two cores on the same chip. In this
design, each core has its own register set as well as its own local cache. Other designs
might use a shared cache or a combination of local and shared caches. Aside from
architectural considerations, such as cache, memory, and bus contention, these
multicore CPUs appear to the operating system as N standard processors. This
characteristic puts pressure on operating system designers—and application
programmers—to make use of those processing cores.

Finally, blade servers are a relatively recent development in which multiple processor boards, I/O boards, and networking boards are placed in the same chassis. The difference between these and traditional multiprocessor systems is that each blade-processor board boots independently and runs its own operating system. Some blade-server boards are multiprocessor as well, which blurs the lines between types of computers. In essence, these servers consist of multiple independent multiprocessor systems.

Figure 10. A dual-core design with two cores placed on the same chip.

Clustered Systems
• Like multiprocessor systems, but multiple systems working
together
o Usually sharing storage via a storage-area network (SAN)
o Provides a high-availability service which survives failures
▪ Asymmetric clustering has one machine in hot-
standby mode
▪ Symmetric clustering has multiple nodes running
applications, monitoring each other
o Some clusters are for high-performance computing (HPC)
▪ Applications must be written to use parallelization
o Some have distributed lock manager (DLM) to avoid
conflicting operations

Figure 11. General Structure of a Clustered System

Cluster technology is changing rapidly. Some cluster products support dozens of systems in a cluster, as well as clustered nodes that are separated by miles. Many of these improvements are made possible by storage-area networks (SANs), which allow many systems to attach to a pool of storage. If the applications and their data are stored on the SAN, then the cluster software can assign the application to run on any host that is attached to the SAN. If the host fails, then any other host can take over. In a database cluster, dozens of hosts can share the same database, greatly increasing performance and reliability. Figure 11 depicts the general structure of a clustered system.

Operating System Structure


• Multiprogramming (Batch system) needed for efficiency
• Single user cannot keep CPU and I/O devices busy at all times

• Multiprogramming organizes jobs (code and data) so CPU always has one to
execute
• A subset of total jobs in system is kept in memory
• One job selected and run via job scheduling
• When it has to wait (for I/O for example), OS switches to another job
• Timesharing (multitasking) is logical extension in which CPU switches jobs
so frequently that users can interact with each job while it is running,
creating interactive computing
• Response time should be < 1 second
• Each user has at least one program executing in memory (a process)
• If several jobs are ready to run at the same time → CPU scheduling
• If processes don’t fit in memory, swapping moves them in and out to run
• Virtual memory allows execution of processes not completely in memory

• Performance of Various Levels of Storage


• Figure 12 shows the comparison of storage performance in large
workstations and small servers such as registers, cache, main memory, and
disk storage.
• Movement between levels of storage hierarchy can be explicit or implicit
• Information is normally kept in some storage system (such as main memory).
As it is used, it is copied into a faster storage system—the cache—on a
temporary basis. When we need a particular piece of information, we first
check whether it is in the cache. If it is, we use the information directly from
the cache. If it is not, we use the information from the source, putting a copy
in the cache under the assumption that we will need it again soon.

Figure 12. Performance of Various Levels of Storage
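The cache-lookup policy described above can be sketched in a few lines of Python. This is a minimal illustration, not a real OS interface: the dictionary standing in for the slower "disk" and the function names are invented for this example.

```python
# A minimal sketch of the caching policy described above: check the
# cache first; on a miss, fetch from the slower source and keep a
# copy under the assumption it will be needed again soon.

def make_cached_reader(source):
    cache = {}
    stats = {"hits": 0, "misses": 0}

    def read(key):
        if key in cache:              # fast path: found in the cache
            stats["hits"] += 1
            return cache[key]
        stats["misses"] += 1          # slow path: go to the source
        value = source[key]
        cache[key] = value            # copy into the cache for reuse
        return value

    return read, stats

disk = {"block0": b"hello", "block1": b"world"}   # stand-in for slow storage
read, stats = make_cached_reader(disk)
read("block0"); read("block0"); read("block1")
print(stats)  # {'hits': 1, 'misses': 2}
```

The first access to each block misses and is copied in; the repeated access is served directly from the cache.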

Migration of Integer A from Disk to Register


• Multitasking environments must be careful to use most recent value, no
matter where it is stored in the storage hierarchy

• Figure 13 shows the operation of copying A to the cache and to an internal register. Thus, the copy of A appears in several places: on the magnetic disk, in main memory, in the cache, and in an internal register. Once the increment takes place in the internal register, the value of A differs in the various storage systems. The value of A becomes the same only after the new value of A is written from the internal register back to the magnetic disk.

Figure 13. Movement of Integer A from Disk to Register
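The scenario in Figure 13 can be mimicked with a toy model, purely for illustration (each storage level is just a dictionary entry here):

```python
# Toy model of Figure 13: A is copied disk -> memory -> cache ->
# register, incremented in the register, and the copies disagree
# until the new value is written back down the hierarchy.

storage = {"disk": 5, "memory": None, "cache": None, "register": None}

# Copy A up the hierarchy.
storage["memory"] = storage["disk"]
storage["cache"] = storage["memory"]
storage["register"] = storage["cache"]

storage["register"] += 1                      # increment happens here
inconsistent = storage["register"] != storage["disk"]

# Write the new value back; only now do all copies agree.
for level in ("cache", "memory", "disk"):
    storage[level] = storage["register"]

print(inconsistent, storage["disk"])  # True 6
```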

• Multiprocessor environment must provide cache coherency in hardware


such that all CPUs have the most recent value in their cache
• Distributed environment situation even more complex
• Several copies of a datum can exist

Multiprogrammed System
• Multiprogramming needed for efficiency
o Single user cannot keep CPU and I/O devices busy at all times
o Multiprogramming organizes jobs (code and data) so CPU always has
one to execute
o A subset of total jobs in system is kept in memory
o One job selected and run via job scheduling
o When it has to wait (for I/O for example), OS switches to another job

• Timesharing (multitasking) is logical extension in which CPU switches jobs


so frequently that users can interact with each job while it is running,
creating interactive computing
o Response time should be < 1 second
o Each user has at least one program executing in memory (a process)
o If several jobs are ready to run at the same time → CPU scheduling
o If processes don’t fit in memory, swapping moves them in and out to
run
o Virtual memory allows execution of processes not completely in
memory
• Figure 14 shows how the operating system keeps several jobs in memory simultaneously. Since, in general, main memory is too small to accommodate all jobs, the jobs are kept initially on the disk in the job pool. This pool consists of all processes residing on disk awaiting allocation of main memory.

Figure 14. Memory Layout for Multiprogrammed System
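The job-switching behavior described above can be sketched as a toy scheduler. The job format (a list of "cpu" and "io" steps) is invented for this illustration:

```python
# Toy sketch of multiprogramming: the OS keeps several jobs ready
# and, when the running job must wait for I/O, switches to another
# so the CPU always has a job to execute.

from collections import deque

def run_jobs(jobs):
    """jobs: dict of name -> list of steps, each 'cpu' or 'io'."""
    ready = deque(jobs)
    trace = []
    while ready:
        name = ready.popleft()              # job scheduling: pick next job
        steps = jobs[name]
        while steps and steps[0] == "cpu":  # run until the job must wait
            trace.append((name, steps.pop(0)))
        if steps:                           # job blocks for I/O:
            trace.append((name, steps.pop(0)))
            ready.append(name)              # OS switches to another job
    return trace

trace = run_jobs({"A": ["cpu", "io", "cpu"], "B": ["cpu", "cpu"]})
print(trace)  # A runs, blocks on I/O, B runs, then A finishes
```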

Operating-System Operations

Modern operating systems are interrupt driven. If there are no processes to execute, no I/O devices to service, and no users to whom to respond, an operating system will sit quietly, waiting for something to happen. Events are almost always signaled by the occurrence of an interrupt or a trap. A trap (or an exception) is a software-generated interrupt caused either by an error (for example, division by zero or invalid memory access) or by a specific request from a user program that an operating-system service be performed. The interrupt-driven nature of an operating system defines that system’s general structure. For each type of interrupt, separate segments of code in the operating system determine what action should be taken. An interrupt service routine is provided to deal with the interrupt.

Since the operating system and the users share the hardware and software
resources of the computer system, we need to make sure that an error in a user program
could cause problems only for the one program running. With sharing, many processes
could be adversely affected by a bug in one program. For example, if a process gets stuck
in an infinite loop, this loop could prevent the correct operation of many other processes.
More subtle errors can occur in a multiprogramming system, where one erroneous
program might modify another program, the data of another program, or even the
operating system itself.

Without protection against these sorts of errors, either the computer must
execute only one process at a time or all output must be suspect. A properly designed
operating system must ensure that an incorrect (or malicious) program cannot cause
other programs to execute incorrectly.

In other words:
• Interrupt driven by hardware (most of the modern computers)
• Software error or request creates exception or trap
o Division by zero, request for operating system service
• Other process problems include infinite loop, processes modifying each
other or the operating system

A. Dual-mode and Multimode operation


o Dual-mode operation allows OS to protect itself and other system
components
▪ User mode and kernel mode
▪ Mode bit provided by hardware
• Provides ability to distinguish when system is running
user code or kernel code
• Some instructions designated as privileged, only
executable in kernel mode
• System call changes mode to kernel, return from call
resets it to user
o Figure 15 shows that we can now see the life cycle of instruction
execution in a computer system. Initial control resides in the operating
system, where instructions are executed in kernel mode. When control
is given to a user application, the mode is set to user mode. Eventually,
control is switched back to the operating system via an interrupt, a
trap, or a system call.

Figure 15. Transition from user to kernel mode.

System calls provide the means for a user program to ask the operating
system to perform tasks reserved for the operating system on the user program’s
behalf. A system call is invoked in a variety of ways, depending on the
functionality provided by the underlying processor. In all forms, it is the method
used by a process to request action by the operating system. A system call usually
takes the form of a trap to a specific location in the interrupt vector.

This trap can be executed by a generic trap instruction, although some systems (such as MIPS) have a specific syscall instruction to invoke a system call.
When a system call is executed, it is typically treated by the hardware as a
software interrupt. Control passes through the interrupt vector to a service
routine in the operating system, and the mode bit is set to kernel mode. The
system-call service routine is a part of the operating system. The kernel examines
the interrupting instruction to determine what system call has occurred; a
parameter indicates what type of service the user program is requesting.
Additional information needed for the request may be passed in registers, on the
stack, or in memory (with pointers to the memory locations passed in registers).
The kernel verifies that the parameters are correct and legal, executes the
request, and returns control to the instruction following the system call.
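The system-call path just described can be sketched schematically. The call numbers, the call table, and the parameter check here are all invented for illustration; a real kernel dispatches in hardware and machine code, not Python:

```python
# Schematic sketch of the system-call path: a trap sets the mode bit
# to kernel, the kernel dispatches on the call number, validates the
# parameters, executes the request, and returns to user mode.

mode = "user"

def kernel_write(text):
    return len(text)                   # stand-in "service routine"

SYSCALL_TABLE = {1: kernel_write}      # call number -> service routine

def trap(call_number, *args):
    global mode
    mode = "kernel"                    # hardware sets mode bit on trap
    try:
        handler = SYSCALL_TABLE[call_number]
        if not all(isinstance(a, str) for a in args):
            raise ValueError("invalid parameter")  # kernel checks args
        return handler(*args)
    finally:
        mode = "user"                  # return from the call resets mode

print(trap(1, "hello"), mode)  # 5 user
```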

B. Timer
• Timer to prevent infinite loop / process hogging resources
• Set interrupt after specific period
• Operating system decrements counter
• When the counter reaches zero, an interrupt is generated
• Set up before scheduling process to regain control or
terminate program that exceeds allotted time
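The countdown mechanism above can be sketched as a toy loop; the function and its parameters are invented for this illustration:

```python
# Toy sketch of the timer: a counter set before the process runs is
# decremented on each tick, and an "interrupt" fires when it reaches
# zero, letting the OS regain control of a runaway process.

def run_with_timer(steps, quantum):
    counter = quantum
    executed = 0
    for _ in range(steps):
        if counter == 0:
            return executed, "timer interrupt"  # OS regains control
        executed += 1                           # process does one step
        counter -= 1                            # timer counts down
    return executed, "completed"

print(run_with_timer(steps=10, quantum=3))  # (3, 'timer interrupt')
```

A well-behaved short job finishes before the quantum expires; a long (or looping) job is cut off at the allotted time.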

Process Management
• A process is a program in execution. It is a unit of work within the system.
Program is a passive entity, process is an active entity.
• Process needs resources to accomplish its task
o CPU, memory, I/O, files
• Initialization data
• Process termination requires reclaim of any reusable resources
• Single-threaded process has one program counter specifying location of
next instruction to execute
• Process executes instructions sequentially, one at a time, until completion
• Multi-threaded process has one program counter per thread
• Typically system has many processes, some user, some operating system
running concurrently on one or more CPUs
• Concurrency by multiplexing the CPUs among the processes / threads

Process Management Activities


A program does nothing unless its instructions are executed by a CPU. A program in execution, as mentioned, is a process. A time-shared user program such as a compiler is a process. A word-processing program being run by an individual user on a PC is a process. A system task, such as sending output to a printer, can also be a process (or at least part of one). For now, you can consider a process to be a job or a time-shared program, but later you will learn that the concept is more general. It is possible to provide system calls that allow processes to create subprocesses to execute concurrently.

A process needs certain resources—including CPU time, memory, files, and I/O
devices—to accomplish its task. These resources are either given to the process when it
is created or allocated to it while it is running. In addition to the various physical and
logical resources that a process obtains when it is created, various initialization data
(input) may be passed along. For example, consider a process whose function is to
display the status of a file on the screen of a terminal. The process will be given the
name of the file as an input and will execute the appropriate instructions and system
calls to obtain and display the desired information on the terminal. When the process
terminates, the operating system will reclaim any reusable resources.

The operating system is responsible for the following activities in connection with
process management:
A. Creating and deleting both user and system processes
B. Suspending and resuming processes
C. Providing mechanisms for process synchronization
D. Providing mechanisms for process communication
E. Providing mechanisms for deadlock handling
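Activity A above, creating and reclaiming a process, can be demonstrated with Python's standard subprocess module (loosely analogous to fork/exec followed by wait on UNIX):

```python
# Create a child process, wait for it to finish, and let the OS
# reclaim its resources, as in activity A above.

import subprocess
import sys

child = subprocess.Popen(
    [sys.executable, "-c", "print('child running')"],
    stdout=subprocess.PIPE, text=True,
)
output, _ = child.communicate()   # wait; OS reclaims the child
print(output.strip(), child.returncode)  # child running 0
```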

Memory Management
• All data in memory before and after processing
• All instructions in memory in order to execute
• Memory management determines what is in memory when
o Optimizing CPU utilization and computer response to users
• Memory management activities
• Keeping track of which parts of memory are currently being used
and by whom
• Deciding which processes (or parts thereof) and data to move into
and out of memory
• Allocating and deallocating memory space as needed
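The bookkeeping activities above can be sketched with a toy frame table; the representation (a list of owner names) is invented for this illustration:

```python
# Toy sketch of memory-management bookkeeping: track which frames of
# a small "memory" are in use and by whom, allocate on request, and
# deallocate when a process finishes.

def make_memory(n_frames):
    return [None] * n_frames           # None marks a free frame

def allocate(memory, owner, n):
    free = [i for i, o in enumerate(memory) if o is None]
    if len(free) < n:
        return None                    # not enough free memory
    frames = free[:n]
    for i in frames:
        memory[i] = owner              # record who uses each frame
    return frames

def deallocate(memory, owner):
    for i, o in enumerate(memory):
        if o == owner:
            memory[i] = None           # frame becomes free again

mem = make_memory(4)
allocate(mem, "P1", 2)
allocate(mem, "P2", 1)
deallocate(mem, "P1")
print(mem)  # [None, None, 'P2', None]
```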

Storage Management
Most modern computer systems use disks as the principal on-line storage
medium for both programs and data. Most programs—including compilers, assemblers,
word processors, editors, and formatters—are stored on a disk until loaded into memory.
They then use the disk as both the source and destination of their processing. Hence,
the proper management of disk storage is of central importance to a computer system.

Since secondary storage is used frequently, it must be used efficiently. The entire
speed of operation of a computer may hinge on the speeds of the disk subsystem and
the algorithms that manipulate that subsystem.

There are, however, many uses for storage that is slower and lower in cost (and
sometimes of higher capacity) than secondary storage. Backups of disk data, storage of
seldom-used data, and long-term archival storage are some examples. Magnetic tape
drives and their tapes and CD and DVD drives and platters are typical tertiary storage
devices. The media (tapes and optical platters) vary between WORM (write-once, read-many-times) and RW (read-write) formats.

Tertiary storage is not crucial to system performance, but it still must be managed. Some operating systems take on this task, while others leave tertiary-storage management to application programs. Some of the functions that operating systems can provide include mounting and unmounting media in devices, allocating and freeing the devices for exclusive use by processes, and migrating data from secondary to tertiary storage.
• OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by device (i.e., disk drive, tape drive)
▪ Varying properties include access speed, capacity, data-
transfer rate, access method (sequential or random)

A. File-System management
• Files usually organized into directories
• Access control on most systems to determine who can access what
• OS activities include
o Creating and deleting files and directories
o Primitives to manipulate files and dirs. (directory)
o Mapping files onto secondary storage
o Backup files onto stable (non-volatile) storage media
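The file-system activities listed above can be exercised directly through Python's standard library; here a temporary directory keeps the example self-contained:

```python
# Creating and deleting files and directories, as listed above,
# using the OS services exposed by Python's standard library.

import os
import tempfile

with tempfile.TemporaryDirectory() as root:
    docs = os.path.join(root, "docs")
    os.mkdir(docs)                         # create a directory
    path = os.path.join(docs, "notes.txt")
    with open(path, "w") as f:             # create a file
        f.write("hello")
    listing = os.listdir(docs)             # primitive to inspect a dir
    os.remove(path)                        # delete the file
    os.rmdir(docs)                         # delete the directory
print(listing)  # ['notes.txt']
```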

B. Mass-Storage Management
• Usually disks used to store data that does not fit in main memory
or data that must be kept for a “long” period of time.
• Proper management is of central importance
• Entire speed of computer operation hinges on disk subsystem and
its algorithms
• OS activities
o Free-space management
o Storage allocation
o Disk scheduling
• Some storage need not be fast
o Tertiary storage includes optical storage, magnetic tape
o Still must be managed
o Varies between WORM (write-once, read-many-times) and
RW (read-write)
• I/O Subsystem
• One purpose of OS is to hide peculiarities of hardware devices
from the user
• I/O subsystem responsible for
o Memory management of I/O including buffering (storing
data temporarily while it is being transferred), caching (storing parts
of data in faster storage for performance), spooling (the overlapping of
output of one job with input of other jobs)
o General device-driver interface
o Drivers for specific hardware devices

Protection and Security


• Protection – any mechanism for controlling access of processes or users
to resources defined by the OS
• Security – defense of the system against internal and external attacks
o Huge range, including denial-of-service, worms, viruses, identity
theft, theft of service
• Systems generally first distinguish among users, to determine who can
do what
o User identities (user IDs, security IDs) include name and
associated number, one per user
o User ID then associated with all files, processes of that user to
determine access control
o Group identifier (group ID) allows set of users to be defined and
controls managed, then also associated with each process, file
o Privilege escalation allows a user to change to an effective ID with
more rights
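The user-ID and group-ID access control described above can be sketched as a toy check. The data layout is invented for this illustration, loosely modeled on UNIX owner/group/other permissions:

```python
# Toy sketch of access control: each file records an owner ID, a
# group ID, and the operations each class of user may perform.

files = {
    "report.txt": {"owner": 1001, "group": 50,
                   "perms": {"owner": {"read", "write"},
                             "group": {"read"},
                             "other": set()}},
}

def can_access(uid, gids, filename, op):
    meta = files[filename]
    if uid == meta["owner"]:          # user ID matches the file's owner
        cls = "owner"
    elif meta["group"] in gids:       # one of the user's group IDs matches
        cls = "group"
    else:
        cls = "other"
    return op in meta["perms"][cls]

print(can_access(1001, {50}, "report.txt", "write"),   # owner may write
      can_access(2002, {50}, "report.txt", "write"))   # True False
```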

Computing Environments
A. Traditional Computing
o Blurring over time
o Office environment
▪ PCs connected to a network, terminals attached to
mainframe or minicomputers providing batch and timesharing
▪ Now portals allowing networked and remote systems access
to same resources
o Home networks
▪ Used to be single system, then modems
▪ Now firewalled, networked

Figure 16. General structure of a traditional computing system

B. Mobile Computing
• Handheld smartphones, tablets, etc.
• What is the functional difference between them and a
“traditional” laptop?
• Extra feature – more OS features (GPS, gyroscope)
• Allows new types of apps like augmented reality
• Use IEEE 802.11 wireless, or cellular data networks for
connectivity
• Leaders are Apple iOS and Google Android

C. Distributed Computing
• Collection of separate, possibly heterogeneous, systems
networked together
▪ Network is a communications path, TCP/IP most
common
▪ Local Area Network (LAN)
▪ Wide Area Network (WAN)
▪ Metropolitan Area Network (MAN)
▪ Personal Area Network (PAN)
• Network Operating System provides features between
systems across network
▪ Communication scheme allows systems to exchange
messages
▪ Illusion of a single system

D. Client-Server Computing
o Dumb terminals supplanted by smart PCs
o Many systems now servers, responding to requests generated by
clients
o Compute-server provides an interface to client to request services
(i.e. database)
o File-server provides interface for clients to store and retrieve files
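The request/response pattern above can be sketched with Python's standard socket library; a server thread answers one request from a client over a local connection (port 0 asks the OS for any free port):

```python
# Minimal client-server sketch: the server responds to a request
# generated by the client, as described above.

import socket
import threading

server = socket.socket()
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

def serve():
    conn, _ = server.accept()        # server waits for a client request
    request = conn.recv(1024)
    conn.sendall(b"reply to " + request)
    conn.close()

t = threading.Thread(target=serve)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"hello")             # client generates a request
reply = client.recv(1024)
client.close()
t.join()
server.close()
print(reply)  # b'reply to hello'
```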

Figure 17. Client – Server Computing

E. Peer-to-Peer Computing
o Another model of distributed system
o P2P does not distinguish clients and servers
▪ Instead all nodes are considered peers
▪ May each act as client, server or both
▪ Node must join P2P network
• Registers its service with central lookup service on
network, or
• Broadcast request for service and respond to requests
for service via discovery protocol

Figure 18. Examples include Napster and Gnutella

F. Virtualization
• Allows operating systems to run applications within other OSes
o Vast and growing industry
• Emulation used when source CPU type different from target type
(i.e. PowerPC to Intel x86)
o Generally slowest method
o When computer language not compiled to native code –
Interpretation
• Virtualization – OS natively compiled for CPU, running guest
OSes also natively compiled
o Consider VMware running WinXP guests, each running
applications, all on native WinXP host OS
o VMM (virtual machine Manager) provides virtualization
services
• Use cases involve laptops and desktops running multiple OSes for
exploration or compatibility
o Apple laptop running Mac OS X host, Windows as a guest
▪ Developing apps for multiple OSes without having
multiple systems
▪ QA testing applications without having multiple systems

▪ Executing and managing compute environments within
data centers
o VMM can run natively, in which case they are also the host
▪ There is no general purpose host then (VMware ESX and
Citrix XenServer)

Figure 19. Virtual Machine Hardware (VMware)

G. Cloud Computing
• Web has become ubiquitous
• PCs most prevalent devices
• More devices becoming networked to allow web access
• New category of devices to manage web traffic among similar
servers: load balancers
• Use of operating systems like Windows 95, client-side, has evolved
into Linux and Windows XP, which can be clients and
servers
• Delivers computing, storage, even apps as a service across a
network
• Logical extension of virtualization because it uses virtualization as
the base for its functionality.
• Amazon EC2 has thousands of servers, millions of virtual
machines, petabytes of storage available across the Internet, pay
based on usage

Many types
• Public cloud – available via Internet to anyone willing to pay
• Private cloud – run by a company for the company’s own use
• Hybrid cloud – includes both public and private cloud
components
• Software as a Service (SaaS) – one or more applications available
via the Internet (i.e., word processor)
• Platform as a Service (PaaS) – software stack ready for application
use via the Internet (i.e., a database server)
• Infrastructure as a Service (IaaS) – servers or storage available
over Internet (i.e., storage available for backup use)
• Cloud computing environments composed of traditional OSes,
plus VMMs, plus cloud management tools
• Internet connectivity requires security like firewalls
• Load balancers spread traffic across multiple applications

Figure 20. Cloud Computing.

These cloud-computing types are not discrete, as a cloud computing environment may provide a combination of several types. For example, an organization may provide both SaaS and IaaS as a publicly available service.

Certainly, there are traditional operating systems within many of the types of cloud infrastructure. Beyond those are the VMMs that manage the virtual machines in which the user processes run. At a higher level, the VMMs themselves are managed by cloud management tools, such as VMware vCloud Director and the open-source Eucalyptus toolset. These tools manage the resources within a given cloud and provide interfaces to the cloud components, making a good argument for considering them a new type of operating system.

Figure 20 illustrates a public cloud providing IaaS. Notice that both the cloud services and the cloud user interface are protected by a firewall.

H. Real-Time Embedded Systems


• Real-time embedded systems most prevalent form of computers
• Vary considerably: special purpose, limited purpose OS, real-time
OS
• Use expanding
• Many other special computing environments as well
• Some have OSes, some perform tasks without an OS
• Real-time OS has well-defined fixed time constraints
• Processing must be done within constraint
• Correct operation only if constraints met

Open-Source Operating Systems


• Operating systems made available in source-code format rather than just
binary closed-source
• Counter to the copy protection and Digital Rights Management (DRM)
movement
• Started by Free Software Foundation (FSF), which has “copyleft” GNU
Public License (GPL)
• Examples include GNU/Linux and BSD UNIX (including core of Mac OS X),
and many more

• Can use VMM like VMware Player (Free on Windows), Virtualbox (open
source and free on many platforms - http://www.virtualbox.com)
o Use to run guest operating systems for exploration

SUMMARY

An operating system is software that manages the computer hardware as well as providing an environment for application programs to run. Perhaps the most visible aspect of an operating system is the interface to the computer system that it provides to the human user.

For a computer to do its job of executing programs, the programs must be in main memory. Main memory is the only large storage area that the processor can access directly. It is an array of words or bytes, ranging in size from millions to billions. Each word in memory has its own address. The main memory is usually a volatile storage device that loses its contents when power is turned off or lost. Most computer systems provide secondary storage as an extension of main memory. Secondary storage provides a form of non-volatile storage that is capable of holding large quantities of data permanently. The most common secondary-storage device is a magnetic disk, which provides storage of both programs and data.

The wide variety of storage systems in a computer system can be organized in a hierarchy according to speed and cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.

There are several different strategies for designing a computer system. Uniprocessor systems have only a single processor, while multiprocessor systems contain two or more processors that share physical memory and peripheral devices. The most common multiprocessor design is symmetric multiprocessing (or SMP), where all processors are considered peers and run independently of one another. Clustered systems are a specialized form of multiprocessor systems and consist of multiple computer systems connected by a local area network.

To best utilize the CPU, modern operating systems employ multiprogramming, which allows several jobs to be in memory at the same time, thus ensuring the CPU always has a job to execute. Timesharing systems are an extension of multiprogramming whereby CPU scheduling algorithms rapidly switch between jobs, thus providing the illusion that each job is running concurrently.

The operating system must ensure correct operation of the computer system. To
prevent user programs from interfering with the proper operation of the system, the
hardware has two modes: user mode and kernel mode. Various instructions (such as
I/O instructions and halt instructions) are privileged and can be executed only in kernel
mode. The memory in which the operating system resides must also be protected from
modification by the user. A timer prevents infinite loops. These facilities (dual mode,
privileged instructions, memory protection, and timer interrupt) are basic building
blocks used by operating systems to achieve correct operation.

A process (or job) is the fundamental unit of work in an operating system. Process
management includes creating and deleting processes and providing mechanisms for
processes to communicate and synchronize with one another. An operating system manages
memory by keeping track of what parts of memory are being used and by whom. The
operating system is also responsible for dynamically allocating and freeing memory
space. Storage space is also managed by the operating system and this includes
providing file systems for representing files and directories and managing space on mass
storage devices.

Operating systems must also be concerned with protecting and securing the
operating system and users. Protection refers to mechanisms that control the access of

processes or users to the resources made available by the computer system. Security
measures are responsible for defending a computer system from external or internal
attacks.

Several data structures that are fundamental to computer science are widely
used in operating systems, including lists, stacks, queues, trees, hash functions, maps,
and bitmaps.

Computing takes place in a variety of environments. Traditional computing involves desktop and laptop PCs, usually connected to a computer network.

Mobile computing refers to computing on handheld smartphones and tablet computers, which offer several unique features. Distributed systems allow users to share resources on geographically dispersed hosts connected via a computer network. Services may be provided through either the client–server model or the peer-to-peer model. Virtualization involves abstracting a computer’s hardware into several different execution environments. Cloud computing uses a distributed system to abstract services into a “cloud,” where users may access the services from remote locations. Real-time operating systems are designed for embedded environments, such as consumer devices, automobiles, and robotics.

The free software movement has created thousands of open-source projects, including operating systems. Because of these projects, students are able to use source code as a learning tool. They can modify programs and test them, help find and fix bugs, and otherwise explore mature, full-featured operating systems, compilers, tools, user interfaces, and other types of programs.

GNU/Linux and BSD UNIX are open-source operating systems. The advantages
of free software and open sourcing are likely to increase the number and quality of open-
source projects, leading to an increase in the number of individuals and companies that
use these projects.

Distributed systems allow users to share resources on geographically dispersed hosts connected via a computer network. Services may be provided through either the client-server model or the peer-to-peer model. In a clustered system, multiple machines can perform computations on data residing on shared storage, and computing can continue even when some subset of cluster members fails.

LANs and WANs are the two basic types of networks. LANs enable processors
distributed over a small geographical area to communicate, whereas WANs allow
processors distributed over a larger area to communicate. LANs typically are faster
than WANs.

There are several computer systems that serve specific purposes. These include
real-time operating systems designed for embedded environments such as consumer
devices, automobiles, and robotics. Real-time operating systems have well defined,
fixed time constraints. Processing must be done within the defined constraints, or the
system will fail. Multimedia systems involve the delivery of multimedia data and often
have special requirements of displaying or playing audio, video, or synchronized audio
and video streams.

Recently, the influence of the Internet and the World Wide Web has encouraged
the development of modern operating systems that include web browsers and
networking and communication software as integral features.

If you want to know more interesting facts about VMware or virtual appliances, visit the following:
• In addition, the rise of virtualization as a mainstream (and
frequently free) computer function makes it possible to run many
operating systems on top of one core system. For example, VMware

(http://www.vmware.com) provides a free “player” for Windows on
which hundreds of free “virtual appliances” can run. Virtualbox
(http://www.virtualbox.com) provides a free, open-source virtual
machine manager on many operating systems. Using such tools,
students can try out hundreds of operating systems without
dedicated hardware.
• Additional resources such as Powerpoint presentation and ebook
will be given for further explanation of the module.
• Operating system. http://www.wiley.com/college and clicking
“Who’s my rep?”
• Operating System. http://www.os-book.com.

It’s Your Turn.


Direction: Read the passage carefully and plan what you will write. Write your answers on pad paper (yellow or white) to be submitted. Each question is worth 10 points. The essay rubric below shows the corresponding points that will guide your essay.
Understanding
  Expert (9-10 pts): Writing shows strong understanding.
  Accomplished (7-8 pts): Writing shows a clear understanding.
  Capable (4-6 pts): Writing shows adequate understanding.
  Beginner (1-3 pts): Writing shows little understanding.

Quality of Writing
  Expert (9-10 pts): Piece was written in an extraordinary style; very informative and well-organized.
  Accomplished (7-8 pts): Piece was written in an interesting style; somewhat informative and organized.
  Capable (4-6 pts): Piece had little style; gives some new information but is poorly organized.
  Beginner (1-3 pts): Piece had no style; gives no new information and is very poorly organized.

Grammar, Usage & Mechanics
  Expert (9-10 pts): Virtually no spelling, punctuation, or grammatical errors.
  Accomplished (7-8 pts): Few spelling and punctuation errors; minor grammatical errors.
  Capable (4-6 pts): A number of spelling, punctuation, or grammatical errors.
  Beginner (1-3 pts): So many spelling, punctuation, and grammatical errors that they interfere with the meaning.

1. In a multiprogramming and time-sharing environment, several users share the
system simultaneously. This situation can result in various security problems.
a. What are two such problems?
b. Can we ensure the same degree of security in a time-shared
machine as in a dedicated machine? Explain your answer.

2. The issue of resource utilization shows up in different forms in different
types of operating systems. List what resources must be managed carefully in the
following settings:
a. Mainframe or minicomputer systems
b. Workstations connected to servers
c. Handheld computers

3. Under what circumstances would a user be better off using a time-sharing
system rather than a PC or single-user workstation?

4. Which of the functionalities listed below need to be supported by the
operating system for the following two settings: (a) handheld devices and (b) real-
time systems?
a. Batch programming
b. Virtual memory
c. Time sharing

5. Give one example of a technology or gadget that corresponds to a feature
of platform technologies. Explain.

POST ASSESSMENT
Write the letters only. Place your answers on a separate sheet of pad paper for submission.

1. Acts as an intermediary between the computer user and the computer hardware
a. Operating System
b. Application Software
c. Facebook
d. Microsoft Office
2. Software-generated interrupt caused either by an error or a user request
a. Polling
b. Vectored
c. Disabled
d. Trap
3. 8 bits is equivalent to
a. 2 bytes
b. 1 kilobyte
c. 1 byte
d. 1 word
4. Rigid metal or glass platters covered with magnetic recording material
a. Register
b. Magnetic disk
c. Magnetic tapes
d. Main memory
5. Logical extension in which the CPU switches jobs so frequently that users can
interact with each job while it is running, creating interactive computing
a. Multitasking
b. I/O swapping
c. Virtual memory
d. Swapping
6. Mechanism for controlling access of process or users to resources defined by the OS
a. Privilege
b. Security
c. Escalation
d. Protection
7. Input/output method in which, once the I/O starts, control returns to the user program without waiting for the I/O to complete.
a. Device-status table
b. Synchronous Method

c. Asynchronous Method
d. System calls

Directions: Fill each blank with the correct answer. Write your answers on a separate
clean sheet of paper for submission.
8. The arrangement in which devices run concurrently with the CPU, interrupting the
CPU when done, is _____________.

9. The structure that contains the addresses of all the service routines is called _____________.

10. The table that indicates each device's type, address, and state (such as not
functioning, idle, or busy) is _____________.

11. The component that determines the logical interaction between the device and the
computer is called _____________.

12. Copying information into a faster storage system is _____________.

13. Increasing the number of processors, with the expectation of getting more work
done in less time, is _____________.

14. The mechanism used for high-speed I/O devices able to transmit information at
close to memory speeds is known as ______________.

15. Faster storage for data and information, but classified as volatile, is called
________________.

REFERENCES
Silberschatz, A., et al. (2013). Operating System Concepts (9th ed.). Hoboken, New
Jersey, USA: John Wiley & Sons, Inc.

Stallings, W. (2012). Operating Systems: Internals and Design Principles (7th ed.).
Upper Saddle River, New Jersey: Pearson Education, Inc.
