Platform Technologies Module 1
Module 1 of 4
PLATFORM TECHNOLOGIES
Brueckner B. Aswigue
This module will present the different features of platform technologies such as
abstraction, evolution, bundling, and interoperability. A platform technology is an
environment for building and running applications, systems, and processes. These can
be viewed as toolsets for developing and operating customized and tailored services.
A technology platform is also commonly known as an operating system.
It will expound on the meaning and descriptions of the different features of the
operating system, such as computer-system organization, I/O structures, direct-memory-access
structure, computer-system architecture, operating-system structure,
operating-system operations, processes, protection and security, computing
environments, and open-source operating systems.
This module presents an overview of the basic components and features of the
different platform technologies. It also gives an extensive overview of the different
features of the operating system. A PowerPoint presentation is also included to give you
more details of the module and to serve as a reference.
Three hours are allotted for this module. You are expected to finish it within
two weeks.
LEARNING OUTCOMES
PRE-TEST
The following questions cover general areas of this module. You may not
know the answers to all questions, but please attempt to answer them without
asking others or referring to books.
Choose the best answer for each question and write the letter of your
choice after the number.
c. Facebook
d. Microsoft Office
3. Read-only memory (ROM) is generally known as
a. Software
b. Firmware
c. Hardware
d. Hard disk
4. Software-generated interrupt caused either by an error or a user request
a. Polling
b. Vectored
c. Disabled
d. Trap
5. Large storage media that the CPU can access directly
a. Main memory
b. Secondary storage
c. Hard disk
d. USB
6. 8 bits is equivalent to
a. 2 bytes
b. 1 kilobyte
c. 1 byte
d. 1 word
d. Hard disk
14. Extension of main memory that provides large nonvolatile storage capacity.
a. Main Memory
b. Secondary storage
c. Third storage
d. Hard disk
15. How many bytes are there in the phrase “Testing lang po”?
a. 15 bytes
b. 50 bytes
c. 1500 bytes
d. 156 bytes
Objectives:
At the end of the lesson, you should be able to:
1. identify accurately the meaning of platform technologies;
2. discuss comprehensively the degree of abstraction, bundling,
interoperability and evolution of the platform;
3. identify accurately the meaning of OS; and,
4. discuss comprehensively the OS and its features.
Let’s Engage.
Platform technology is a technology that enables the creation of products and
processes and serves as the basis for many other technologies such as automobile, big
data processing technology, biotechnology, nanotechnology, grid computing and ICT
(Information and Communication Technology). It establishes the long-term capabilities
of research & development institutes. It can be defined as a structural or technological
form from which various products can emerge without the expense of a new
process/technology introduction.
With the rise of information technology and the ever-increasing complexity of our
technology landscape, platforms have become the design paradigm of choice for today's
complex engineered systems. We first saw the power of the platform model in the
development of the personal computer some twenty to thirty years ago as operating
system providers built their technology as a platform for software developers to create
applications on top. But it was not until the past decade with the widespread advent of
the internet that the platform model has truly come of age as virtually every internet
company, from the biggest search giants to the smallest social media widgets, has
started to define their solution as a platform.
A computer system has many resources (hardware and software) which may be
required to complete a task. The commonly required resources are input/output devices,
memory, file-storage space, the CPU, etc. The operating system acts as a manager of
these resources and allocates them to specific programs and users whenever necessary
to perform a particular task. The operating system is therefore the resource manager:
it manages the resources of a computer system internally. The resources are the
processor, memory, files, and I/O devices. In simple terms, an operating system is the
interface between the user and the machine.
An operating system is software that manages the computer hardware. The
hardware must provide appropriate mechanisms to ensure the correct operation of the
computer system and to prevent user programs from interfering with the proper
operation of the system.
Internally, operating systems vary greatly in their makeup, since they are
organized along many different lines. The design of a new operating
system is a major task. It is important that the goals of the system be well
defined before the design begins. These goals form the basis for choices
among various algorithms and strategies.
ABSTRACTION
The key to the platform technology architecture is abstraction, as all
platform technologies involve two distinctly different levels to their design with
these different levels defined according to their degree of abstraction. Abstraction
is the quality of dealing with generic forms rather than specific events, details or
applications. In this respect, abstraction means removing the application of the
technology from the underlying processes and functions that support that
application.
The platform is an abstraction, meaning that in itself it has no
application. For example, you might rent a cloud platform from a provider, but by
itself this is of no use to an end-user; they cannot do anything with it.
Platforms are composed of generic processes that do not have specific
instantiation. The application is designed to bundle these underlying resources
and turn them into a real instance of an application that can be applied in the
real world.
In the auto industry, for example, a car platform is a shared set of common
design, engineering, and production models from which many different specific
models can be created. In this way the car companies have abstracted away from
any specific type of car to create a two-tiered system: one level being generic, the
other specific to any instance of that model.
This is a central aspect of the platform model, the creation of a generic
form or set of services, on the underlying platform level, and then on the
application level these services are bundled into different configurations and
customized to the specific needs of the end-user. In such a way a finite amount
of reusable abstract building blocks can be bundled and re-bundled on the
application layer.
This use of abstraction works to remove the complex for the application
developers. By moving core services down to the platform level application
developers can simply plug into these services and build their applications on top
of it, thus working to greatly simplify the complexity they encounter. We can think
of a house as a platform: once there are common protocols, IoT platforms for
houses will be built, where any device, technology, or item that enters the
house can connect into the platform and become an application. The house
platform can then manage these applications, providing them with infrastructure
services and presenting them as an integrated solution to the house occupants.
This core idea of abstraction is very powerful and can be applied to our
entire technology landscape, with smaller more specific technologies sitting on
top of others that work as the platform supporting them which, in turn, may also
sit on top of others that support them. For example, smart cities will become
platforms with houses being applications that draw upon the common physical
and information resources made available - such as parking, water, electricity, etc.
- but also the house itself will be a platform for all of the technologies within it
delivering services to them. Each layer in the hierarchy bundles the resources
provided to it from that below and delivers those resources as a service to the
applications that sit on top of it.
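To make this two-level idea concrete, here is a minimal sketch in Python. The names here (CloudPlatform, PhotoApp, the compute/store services) are invented purely for illustration and do not correspond to any real provider's API; the point is only that the platform level exposes generic services while the application level gives them a specific use.

```python
class CloudPlatform:
    """Platform level: generic services with no specific application of their own."""
    def __init__(self):
        self._store = {}

    def compute(self, task):
        # A generic compute service; it has no idea what the task means.
        return f"ran {task}"

    def store(self, key, value):
        self._store[key] = value

    def fetch(self, key):
        return self._store.get(key)


class PhotoApp:
    """Application level: bundles the generic services into a concrete use."""
    def __init__(self, platform):
        self.platform = platform

    def upload(self, name, data):
        # The application gives meaning to the generic services:
        # compute becomes "resize a photo", storage becomes a photo library.
        result = self.platform.compute(f"resize {name}")
        self.platform.store(name, data)
        return result


platform = CloudPlatform()
app = PhotoApp(platform)
print(app.upload("cat.jpg", b"..."))  # the same platform could host many other apps
```

Renting the bare CloudPlatform gives an end-user nothing they can use; only once an application like PhotoApp bundles its services does a usable instance exist.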
BUNDLING
A platform technology has been defined as a structure or technology from
which various products can emerge without the expense of a new process
introduction. This is achieved by defining a core set of building blocks and then
configuring them into different bundles depending on the context.
Effective platform technologies should work like Lego kits, where the
platform provides the elementary building blocks that are then bundled together
on the application level to meet the specific requirements of the end-user. For
example, in enterprise architecture, there is a framework called TOGAF that
defines any organization in terms of a set of building blocks that are essential to
the workings of any enterprise.
Developing an effective platform technology requires a coherent
understanding of what the core services are and thus what those basic building
blocks that any application will need are. This is not self-evident; it took many
centuries of building enterprises before we came up with a generic model for the
building blocks of any enterprise, as outlined in TOGAF.
Platform design goes hand in hand with a service-oriented architecture,
where developers of applications treat the building blocks as services that they
then simply string together in different ways to build their solutions. For example,
today a new business can be set up relatively quickly and easily - at least
compared to a few decades ago - because of the core platform of the internet and
the many different services - building blocks - that are available for entrepreneurs
to bundle together into new solutions. Within just a few weeks, one
could create a new service by building a web application that draws upon services
from Twitter for user identity, Ethereum for secure transactions, Alibaba for
sourcing materials, Upwork for staffing, and so on. Because you don't have to build
all of these core components yourself - you are just plugging them together -
you can easily and quickly reconfigure them when needed.
At the heart of the platform model is a distinction between the basic
functionalities of the technology and how those functions are composed; a
platform level that deals with the "what" and an application level that deals with
the "how". The basic functions of the technology - the building blocks - are the
"what" and the way those capabilities are strung together is the "how". One could
think of a fab lab as a platform technology, the materials are the "what" or
building blocks that are made available for people to construct into objects
through the use of the machines. The way they use those machines to process
the building blocks into finished products is the "how".
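The "what" versus "how" split above can be sketched in a few lines of Python. The building-block services here (identity, payments, shipping) are hypothetical stand-ins, not real APIs; the sketch only shows that one fixed set of blocks (the "what") can be bundled into different applications (the "how").

```python
# The "what": a fixed catalogue of reusable building-block services.
def identity(user):
    return f"verified {user}"

def payments(amount):
    return f"charged {amount}"

def shipping(address):
    return f"shipped to {address}"

BUILDING_BLOCKS = {"identity": identity, "payments": payments, "shipping": shipping}

# The "how": string selected blocks together into an application pipeline.
def bundle(*names):
    blocks = [BUILDING_BLOCKS[n] for n in names]
    def application(*args):
        return [block(arg) for block, arg in zip(blocks, args)]
    return application

# Two different applications bundled from the same reusable blocks:
shop = bundle("identity", "payments", "shipping")
donation_page = bundle("identity", "payments")

print(shop("alice", 20, "Manila"))
print(donation_page("bob", 5))
```

Because the links between blocks are this light, re-bundling for a new context is just a different call to bundle, which is exactly the Lego-kit property the platform model is after.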
INTEROPERABILITY
Platforms are open systems. Unlike traditional technologies, which are
designed as individual physical objects that perform a function, platforms are
designed to be interoperable with other systems, and they will likely have external
applications running on top of them, not all of which can be foreseen by the
developers of the platform. Think of an IoT platform for a house, which will have
to interoperate with many devices and technologies in the house if it is
to be successful at delivering the end service.
Previously, technology was developed largely "in house", with each company
creating its own proprietary systems, delivering them to the end-user, trying to
create lock-in, and competing with other companies that were also creating their
own systems. The industrial model of technology development was typically one
of high capital costs, long design and production cycles within closed
organizations creating proprietary technologies with limited interoperability
between the technologies of companies in the same industry.
Much of this industrial model was a product of the physical nature of the
technologies being developed which made them excludable and rivalrous in
nature. But the dematerialized nature of information technology makes it vastly
easier to duplicate and exchange information services and this has a very
profound impact on how we design and build the service systems of today. This
dematerialized nature makes them non-excludable and non-rivalrous which
creates a very different dynamic. The result is one of increasing cooperation as
most of the value is no longer inside of the organization or technology but
increasingly outside of it; the value is increasingly in the system's capacity to
interoperate with other systems.
A smartphone would not be very valuable if it could not connect to the
internet or run other people's applications on it. Most of the value that the end-
user gets from their smartphone is not created by the original technology
developer, but instead by other people connecting into that platform and building
things on it, or that system connecting to other systems - such as to web pages -
with the user getting value from that interoperability and connectivity instead of
so much the device itself. In such a way interoperability becomes key and the
platform model is designed to optimize for this fluid connectivity between the core
technology and other systems built on it or external to it.
With information technology, we can build once and deploy many times,
almost anywhere at very little cost per extra unit. Facebook can build their
software platform once and through the internet, it can be accessed and used as
a service at extremely little extra cost, per person, to them. With information
technology, the marginal cost of the extra user often goes to almost zero.
In the stand-alone, individual technology paradigm of the industrial
economy the same or similar technology was developed by many different
companies and then they competed for market share. But with service systems a
new model is emerging, that of platform technologies, where various modular
services are made available that can be then plugged into platforms to be
delivered as an integrated experience to the end-user. Interoperability and
collaboration between systems are the key ingredients, and platforms facilitate
this by allowing different technologies to plug into each other and draw upon
their services in a seamless fashion.
A platform technology architecture - being open - is optimized for user-generated
systems. A suite of new technologies, from solar cells to distributed
manufacturing and virtually all forms of information technology are driving a
revolution in user-generated systems, as capabilities get pushed out to the edges
of networks. In a previous age of centralization when productive capabilities were
centered within large organizations and required large amounts of fixed capital,
then closed organizations made sense.
But today with the rise of the prosumer it is becoming critical that
organizations, business processes and all kinds of technology are able to harness
the input of the end-user if they are to continue to thrive in this world of
distributed technologies. The most successful technology providers of tomorrow
will be those who are able to harness this mass of new capacity and capabilities
on the long tail by providing them with the tools, know-how, methods and
connectivity to participate, and the platform model is ideal for this. Sell a man a
fish and you make a bit of money while he feeds himself for a day; give a man a
fishing rod and he can feed himself, you can sell him ongoing services for his
fishing rod, and he can even help you develop new innovations for it.
EVOLUTION
The world is speeding up. As the pace of change gets ever faster, the
ability to meet that fast-paced change will become ever more
important. Adaptive capacity and agility are, and will increasingly be seen as, a
key requirement, if not the key requirement, in the coming decades. In stable and
predictable environments technologies can be built as homogeneous systems
without the capacity to change, enabling them to be optimized for efficiency
within one environment. But as that environment and the pace of innovation
changes faster and faster, this homogeneous architecture appears less viable and
there is a need to switch to a platform model to enable fast-paced innovation at
the edges of networks.
The goal of what we are trying to achieve may stay the same, people will
always want food, clothing, housing, entertainment etc. but in fast paced
environments the context keeps changing, and thus how we achieve that goal will
change, making it important to be able to bundle and unbundle building blocks
in new ways, quickly and easily.
Platform technologies can be very effective at enabling an evolutionary
process of technology development. The fact that they separate the current
application from generic processes means that as demands change old
applications can be retired and new more relevant technologies can be built
quickly.
Applications are instances of a technology; an instance is a specific
configuration of a technology for a specific application. For example, a
wheelbarrow is an instance of a technology: it has a specific shape, size, and form
which cannot easily be reconfigured. Any technology that is an application or
instance will go through a linear lifecycle because it does not have the capacity
to reconfigure or regenerate itself.
Traditionally, our technologies go through a linear lifecycle from cradle to
grave. For example, imagine building a bicycle: the first time, you simply
build a whole bicycle from start to finish. Then you have to build another one, but
this time for a child; the fact that it has different requirements means you have
to start again from scratch.
While Facebook focused on creating a robust platform that allowed
outside developers to build new applications, Myspace did everything itself.
"We tried to create every feature in the world and said, O.K., we can do it,
why should we let a third party do it?" says MySpace cofounder DeWolfe.
"We should have picked 5 to 10 key features that we totally focused on and
let other people innovate on everything else." – Business Week: The Rise &
Ignominious Fall of MySpace
Every time you have to build a new bicycle with new requirements, you start
to get a better idea of what in the development of this technology is a core feature
that remains unchanged and what changes with each different end-user
requirement. If you go on developing bicycles long enough you will eventually
start to see emerging a core platform that remains unchanged and an application
level that changes. When we combine this abstraction feature of platforms with
user-generated solutions, we start to get the potential for a truly evolutionary
development process.
By building a flexible modular structure where components can be easily
bundled and re-bundled, and by putting this closer to the end-user, it allows for
much faster feedback and iteration in the development process. With respect to
agility and adaptability, the key design innovation of platform technologies is in
separating what is permanent from what is contingent and temporal, so as to
build a stable core that can support the rapid reconfiguration of applications
depending on the context and thus make the system agile.
The platform model of reusable building blocks and bundling also allows
for rapid innovation on the application level. The more solid the building blocks
and the lighter the links between them the easier it is to take them apart, change
them around, and experiment. The bundling model of platforms offers the
possibility of developing more sustainable technology solutions in that it focuses
our attention on the reusability of systems components. In bundling we can
design to create a light, loosely coupled set of connections - a network - between
the parts during their composition that can be easily dismantled giving us once
again a set of building blocks that can be potentially endlessly bundled into new
solutions.
In such a model we are essentially separating the organizational structure
from the component parts so as to avoid the linear life cycle of a homogeneous
system by being able to unbundle them quickly and easily. In a more traditional
homogenous system, the organization's structure and building blocks are more
or less one. For example, think about a house made of bricks, with mortar
connecting them: the house will go through a linear lifecycle from cradle to grave,
where it will eventually be knocked down, because it is difficult to separate the
building blocks from how they were composed so as to recompose them. Homogeneous
systems may well be the cheapest option, optimized for efficiency, but as
adaptive capacity, resilience, and sustainability increase in importance, a
modular platform model may become greatly more effective in this respect.
To truly understand what operating systems are, we must first understand how
they have developed. In this chapter, after offering a general description of what
operating systems do, we trace the development of operating systems from the first
hands-on systems through multiprogrammed and time-shared systems to PCs and
handheld computers. We also discuss operating system variations, such as parallel,
real-time, and embedded systems. As we move through the various stages, we see how
the components of operating systems evolved as natural solutions to problems in early
computer systems.
We begin our discussion by looking at the operating system's role in the overall
computer system.
The hardware—the central processing unit (CPU), the memory, and the
input/output (I/O) devices—provides the basic computing resources for the
system. The application programs—such as word processors, spreadsheets,
compilers, and web browsers—define the ways in which these resources are used
to solve users' computing problems. The operating system controls and
coordinates the use of the hardware among the various application programs for
the various users.
The operating system provides the means for proper use of these resources
in the operation of the computer system. An operating system is similar to a
government. Like a government, it performs no useful function by itself. It simply
provides an environment within which other programs can do useful work.
User View
The user's view of the computer varies according to the interface being
used. Most computer users sit in front of a PC, consisting of a monitor, keyboard,
mouse, and system unit. Such a system is designed for one user to monopolize
its resources.
The goal is to maximize the work (or play) that the user is performing. In
this case, the operating system is designed mostly for ease of use, with some
attention paid to performance and none paid to resource utilization—how
various hardware and software resources are shared. Performance is, of course,
important to the user; but the demands placed on the system by a single user
are too low to make resource utilization an issue. In some cases, a user sits at a
terminal connected to a mainframe or minicomputer. Other users are accessing
the same computer through other terminals. These users share resources and
may exchange information.
System View
From the computer's point of view, the operating system is the program
most intimately involved with the hardware. In this context, we can view an
operating system as a resource allocator. A computer system has many
resources—hardware and software—that may be required to solve a problem:
CPU time, memory space, file-storage space, I/O devices, and so on. The
operating system acts as the manager of these resources. Facing numerous and
possibly conflicting requests for resources, the operating system must decide how
to allocate them to specific programs and users so that it can operate the
computer system efficiently and fairly. As we have seen, resource allocation is
especially important where many users access the same mainframe or
minicomputer.
A more common definition is that the operating system is the one program
running at all times on the computer (usually called the kernel), with all else
being systems programs and application programs.
This last definition is the one that we generally follow. The matter of what
constitutes an operating system has become increasingly important. In 1998, the
United States Department of Justice filed suit against Microsoft, in essence
claiming that Microsoft included too much functionality in its operating systems
and thus prevented application vendors from competing. For example, a web
browser was an integral part of the operating system. As a result, Microsoft was
found guilty of using its operating system monopoly to limit competition.
System Goals
It is easier to define an operating system by what it does than by what it
is, but even this can be tricky. The primary goal of some operating systems is
convenience for the user. Operating systems exist because computing with them
is supposedly easier than computing without them. As we have seen, this view is
particularly clear when you look at operating systems for small PCs. The primary
goal of other operating systems is efficient operation of the computer system. This
is the case for large, shared, multiuser systems. These systems are expensive, so
it is desirable to make them as efficient as possible. These two goals—convenience
and efficiency—are sometimes contradictory. In the past, efficiency was often
more important than convenience (Section 1.2.1). Thus, much of operating-
system theory concentrates on optimal use of computing resources.
Operating systems have also evolved over time in ways that have affected
system goals. For example, UNIX started with a keyboard and printer as its
interface, limiting its convenience for users. Over time, hardware changed, and
UNIX was ported to new hardware with more user-friendly interfaces. Many
graphic user interfaces (GUIs) were added, allowing UNIX to be more convenient
to use while still concentrating on efficiency.
To examine more closely what operating systems are and what they do, we
next consider how they have developed over the past 50 years. By tracing that
evolution, we can identify the common elements of operating systems and see
how and why these systems have developed as they have.
In the early days of computing, computers were huge machines that
were expensive to buy, run, and maintain. Computers were used in single-user,
interactive mode: programmers interacted with the machine at a very low level -
flicking console switches, dumping cards into the card reader, and so on. The
interface was basically the raw hardware.
• Solution: Build a batch monitor. Store jobs on a disk (spooling), have
computer read them in one at a time and execute them. Big change in computer
usage: debugging now done offline from print outs and memory dumps. No
more instant feedback.
• Problem: At any given time, job is actively using either the CPU or an I/O
device, and the rest of the machine is idle and therefore unutilized.
• Solution: Allow the job to overlap computation and I/O. Buffering and
interrupt handling added to subroutine library.
• Problem: one job can't keep both CPU and I/O devices busy. (Have
compute-bound jobs that tend to use only the CPU and I/O-bound jobs that
tend to use only the I/O devices.) Get poor utilization either of CPU or I/O
devices.
• Solution: multiprogramming - several jobs share system. Dynamically
switch from one job to another when the running job does I/O. Big issue:
protection. Don't want one job to affect the results of another. Memory
protection and relocation added to hardware, OS must manage new hardware
functionality. OS starts to become a significant software system. OS also starts
to take up significant resources on its own.
• Problem: Old batch schedulers were designed to run a job for as long as
it was utilizing the CPU effectively (in practice, until it tried to do some I/O).
But now, people need reasonable response time from the computer.
• Solution: Preemptive scheduling.
• Problem: People need to have their data and programs around while they
use the computer.
• Solution: Add file systems for quick access to data. Computer becomes a
repository for data, and people don't have to use card decks or tapes to store
their data.
• Problem: The boss logs in and gets terrible response time because the
machine is overloaded.
• Solution: Prioritized scheduling. The boss gets more of the machine than
the peons. But, CPU scheduling is just an example of resource allocation
problems. The timeshared machine was full of limited resources (CPU time,
disk space, physical memory space, etc.) and it became the responsibility of
the OS to mediate the allocation of the resources. So, developed things like
disk and physical memory quotas, etc.
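The prioritized, preemptive scheduling in the last problem/solution pair can be illustrated with a toy Python sketch: the highest-priority ready job runs for one time slice, is preempted, and the scheduler re-decides, so the boss still gets good response time on an overloaded machine. The job names, priorities, and slice length below are all made up for illustration.

```python
import heapq

def run(jobs, slice_ms=10):
    """jobs: list of (priority, name, remaining_ms); a lower number means higher priority."""
    ready = list(jobs)
    heapq.heapify(ready)          # the ready queue, ordered by priority
    timeline = []
    while ready:
        prio, name, remaining = heapq.heappop(ready)
        timeline.append(name)     # the chosen job runs for one time slice
        remaining -= slice_ms
        if remaining > 0:         # preempted: back into the ready queue
            heapq.heappush(ready, (prio, name, remaining))
    return timeline

# The boss (priority 1) runs to completion before the peon (priority 5) gets the CPU.
print(run([(1, "boss", 20), (5, "peon", 30)]))
# → ['boss', 'boss', 'peon', 'peon', 'peon']
```

A real scheduler must also avoid starving low-priority jobs (for example by aging their priority), which this sketch deliberately omits.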
Hardware becomes cheaper and users more sophisticated. People need to share
data and information with other people. Computers become more information transfer,
manipulation and storage devices rather than machines that perform arithmetic
operations. Networking becomes very important, and as sharing becomes an important
part of the experience so does security. Operating systems become more sophisticated.
Start putting back features present in the old time sharing systems (OS/2, Windows
NT, even Unix).
Rise of network. Internet is a huge popular phenomenon and drives new ways of
thinking about computing. Operating system is no longer interface to the lower level
machine - people structure systems to contain layers of middleware. So, a Java API or
something similar may be the primary thing people need, not a set of system calls. In
fact, what the operating system is may become irrelevant as long as it supports the right
set of middleware.
Network computer. Concept of a box that gets all of its resources over the
network. No local file system, just network interfaces to acquire all outside data. So have
a slimmer version of OS.
In the future, computers will become physically small and portable. Operating
systems will have to deal with issues like disconnected operation and mobility. People
will also start using information with a pseudo-real-time component, like voice and video.
Operating systems will have to adjust to deliver acceptable performance for these new
forms of data.
Concurrency and asynchrony make operating systems very complicated
pieces of software. Operating systems are fundamentally non-deterministic and
event driven. Can be difficult to construct (hundreds of person-years of effort)
and impossible to completely debug. Examples of concurrency and asynchrony:
o I/O devices run concurrently with CPU, interrupting CPU when done.
o On a multiprocessor multiple user processes execute in parallel.
o Multiple workstations execute concurrently and communicate by
sending messages over a network. Protocol processing takes place
asynchronously.
The major problem facing computer science today is how to build large,
reliable software systems. Operating systems are one of very few examples of
existing large software systems, and by studying operating systems we may learn
lessons applicable to the construction of larger systems.
Computer-System Operation
A modern general-purpose computer system consists of one or more CPUs
and a number of device controllers connected through a common bus that
provides access to shared memory. Each device controller is in charge of a specific
type of device (for example, disk drives, audio devices, and video displays). The
CPU and the device controllers can execute concurrently, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller is
provided whose function is to synchronize access to the memory.
• I/O devices and the CPU can execute concurrently.
• Each device controller is in charge of a particular device type.
• Each device controller has a local buffer.
• CPU moves data from/to main memory to/from local buffers
• I/O is from the device to local buffer of controller.
• Device controller informs CPU that it has finished its operation by
causing an interrupt.
Interrupt Handling
• The operating system preserves the state of the CPU by storing registers
and the program counter.
• Determines which type of interrupt has occurred:
o polling
o vectored interrupt system
• Separate segments of code determine what action should be taken for
each type of interrupt
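The two determination methods above can be contrasted in a small sketch. A vectored system uses the interrupt number to index directly into a table of service routines, after first preserving the CPU state. The function and field names below are illustrative assumptions, not a real OS API.

```python
# Minimal sketch of vectored interrupt dispatch (names are assumptions).
saved_state = {}

def save_cpu_state(registers, program_counter):
    """Preserve the interrupted computation so it can be resumed later."""
    saved_state["registers"] = dict(registers)
    saved_state["pc"] = program_counter

def timer_handler():
    return "timer expired"

def disk_handler():
    return "disk I/O complete"

# The interrupt vector: one entry (service routine) per interrupt type.
interrupt_vector = {0: timer_handler, 1: disk_handler}

def dispatch(irq, registers, program_counter):
    save_cpu_state(registers, program_counter)  # store registers and the PC
    handler = interrupt_vector[irq]             # vectored lookup, no polling
    return handler()

print(dispatch(1, {"r0": 42}, 0x1000))  # → disk I/O complete
```

With polling, by contrast, the handler would have to query each device in turn to find which one raised the interrupt.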
I/O Structure
• After I/O starts, control returns to user program only upon I/O
completion.
o Wait instruction idles the CPU until the next interrupt
o Wait loop (contention for memory access).
o At most one I/O request is outstanding at a time, no simultaneous
I/O processing.
• After I/O starts, control returns to user program without waiting for I/O
completion.
o System call – request to the operating system to allow user to wait
for I/O completion.
o Device-status table contains entry for each I/O device indicating
its type, address, and state.
o Operating system indexes into I/O device table to determine device
status and to modify table entry to include interrupt.
Two I/O Methods
This situation will occur, in general, as the result of a user process requesting
I/O.
Once the I/O is started, two courses of action are possible. In the simplest
case, the I/O is started; then, at I/O completion, control is returned to the user
process. This case is known as synchronous I/O. The other possibility, called
asynchronous I/O, returns control to the user program without waiting for the
I/O to complete. Figure 5 shows both methods; with asynchronous I/O, the I/O can
continue while other system operations proceed.
Loop: jmp Loop
This tight loop simply continues until an interrupt occurs, transferring control to
another part of the operating system. Such a loop might also need to poll any I/O
devices that do not support the interrupt structure but that instead simply set a
flag in one of their registers and expect the operating system to notice that flag.
A better alternative is to start the I/O and then continue processing other
operating system or user program code—the asynchronous approach. A system
call is then needed to allow the user program to wait for I/O completion, if
desired. If no user programs are ready to run, and the operating system has no
other work to do, we still require the wait instruction or idle loop, as before. We
also need to be able to keep track of many I/O requests at the same time. For
this purpose, the operating system uses a table containing an entry for each I/O
device: the device-status table, as shown in Figure 6. Each table entry indicates
the device's type, address, and state (not functioning, idle, or busy). If the device
is busy with a request, the type of request and other parameters will be stored in
the table entry for that device. Since it is possible for other processes to issue
requests to the same device, the operating system will also maintain a wait
queue—a list of waiting requests—for each I/O device.
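The device-status table and its per-device wait queues can be sketched as follows. The dictionary layout and field names are assumptions for illustration; a real OS would use kernel data structures.

```python
# A toy device-status table: one entry per device, recording type, address,
# state, and a wait queue of pending requests, as described in the text.
device_table = {
    "disk0":    {"type": "disk",    "address": 0x1F0, "state": "idle", "queue": []},
    "printer0": {"type": "printer", "address": 0x378, "state": "idle", "queue": []},
}

def request_io(device, request):
    entry = device_table[device]
    if entry["state"] == "busy":
        entry["queue"].append(request)   # device busy: park the request
    else:
        entry["state"] = "busy"          # device free: start at once
        entry["current"] = request

def io_complete(device):
    """Interrupt-handler path: finish the current request, start the next."""
    entry = device_table[device]
    done = entry.pop("current")
    if entry["queue"]:
        entry["current"] = entry["queue"].pop(0)
    else:
        entry["state"] = "idle"
    return done

request_io("disk0", "read block 7")
request_io("disk0", "write block 9")     # queued behind the first request
```

When the interrupt for the first request arrives, `io_complete` records the finished request and immediately starts the next one from the queue, exactly the sequence the following paragraph describes.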
requests waiting in the queue for this device, the operating system starts
processing the next request.
Finally, control is returned from the I/O interrupt. If a process was waiting
for this request to complete (as recorded in the device-status table), we can now
return control to it. Otherwise, we return to whatever we were doing before the
I/O interrupt: to the execution of the user program or to the wait loop. In a time-
sharing system, the operating system could switch to another ready-to-run
process.
Storage Structure
o Main memory – the only large storage medium that the CPU can access
directly.
o Secondary storage – extension of main memory that provides large
nonvolatile storage capacity.
▪ a megabyte, or MB, is 1,024² bytes; approx. 1,000 KB = 1 MB
▪ a gigabyte, or GB, is 1,024³ bytes; approx. 1,000 MB = 1 GB
▪ a terabyte, or TB, is 1,024⁴ bytes; approx. 1,000 GB = 1 TB
▪ a petabyte, or PB, is 1,024⁵ bytes; approx. 1,000 TB = 1 PB
Computer manufacturers often round off these numbers and say that a
megabyte is 1 million bytes and a gigabyte is 1 billion bytes. Networking
measurements are an exception to this general rule; they are given in bits
(because networks move data a bit at a time).
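The arithmetic behind the binary units and the manufacturers' rounding can be checked directly:

```python
# Binary units as powers of 1,024, versus the manufacturers' decimal rounding.
KB = 1024
MB = KB ** 2        # 1,048,576 bytes, rounded off as "1 million"
GB = KB ** 3        # 1,073,741,824 bytes, rounded off as "1 billion"
TB = KB ** 4

# The gap the rounding hides grows with each unit:
print(MB - 1_000_000)        # bytes lost in "1 MB = 1 million bytes"
print(GB - 1_000_000_000)    # bytes lost in "1 GB = 1 billion bytes"
```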
• Magnetic disks – rigid metal or glass platters covered with magnetic recording
material
o Disk surface is logically divided into tracks, which are subdivided into
sectors.
o The disk controller determines the logical interaction between the device
and the computer.
• Each storage system provides the basic functions of storing a datum and of
holding that datum until it is retrieved at a later time. The main differences among
the various storage systems lie in speed, cost, size, and volatility.
• Figure 7 shows how the wide variety of storage systems in a computer system can
be organized in a hierarchy according to speed and cost. The higher levels are expensive,
but they are fast. As we move down the hierarchy, the cost per bit generally
decreases, whereas the access time generally increases. This trade-off is reasonable;
if a given storage system were both faster and less expensive than another—other
properties being the same—then there would be no reason to use the slower, more
expensive memory. In fact, many early storage devices, including paper tape and
core memories, are relegated to museums now that magnetic tape and semiconductor
memory have become faster and cheaper. The top four levels of memory in Figure 7
may be constructed using semiconductor memory.
In addition to differing in speed and cost, the various storage systems are
either volatile or nonvolatile. As mentioned earlier, volatile storage loses its contents
when the power to the device is removed. In the absence of expensive battery and
generator backup systems, data must be written to nonvolatile storage for
safekeeping. In the hierarchy shown in Figure 7, the storage systems above the
electronic disk are volatile, whereas those below are nonvolatile. An electronic disk
can be designed to be either volatile or nonvolatile.
During normal operation, the electronic disk stores data in a large DRAM
array, which is volatile. But many electronic-disk devices contain a hidden magnetic
hard disk and a battery for backup power. If external power is interrupted, the
electronic-disk controller copies the data from RAM to the magnetic disk. When
external power is restored, the controller copies the data back into the RAM.
The design of a complete memory system must balance all the factors just discussed:
It must use only as much expensive memory as necessary while providing as much
inexpensive, nonvolatile memory as possible. Caches can be installed to improve
performance where a large access-time or transfer-rate disparity exists between two
components.
Caching
• Caching – copying information into a faster storage system; main memory
can be viewed as a cache for secondary storage.
• Important principle, performed at many levels in a computer (in
hardware, operating system, software)
• Information in use copied from slower to faster storage temporarily
• Faster storage (cache) checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
• Cache smaller than storage being cached
o Cache management important design problem
o Cache size and replacement policy
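The check-first logic and the replacement-policy problem listed above can be sketched in a few lines. This is a minimal look-aside cache with a least-recently-used eviction policy; the stores and sizes are assumptions for illustration, and LRU is only one of many possible policies.

```python
from collections import OrderedDict

backing_store = {"A": 1, "B": 2, "C": 3, "D": 4}   # slower storage
cache = OrderedDict()                              # faster, smaller storage
CACHE_SIZE = 2

def read(key):
    if key in cache:                   # hit: use directly from the cache (fast)
        cache.move_to_end(key)
        return cache[key], "hit"
    value = backing_store[key]         # miss: copy to the cache, then use
    cache[key] = value
    if len(cache) > CACHE_SIZE:
        cache.popitem(last=False)      # evict the least recently used entry
    return value, "miss"

read("A"); read("B")
print(read("A"))   # → (1, 'hit')
print(read("C"))   # → (3, 'miss'), and "B" is evicted to make room
```

Because the cache is smaller than the storage being cached, the choice of what to evict (the replacement policy) directly determines the hit rate, which is why the text calls cache management an important design problem.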
Figure 8. A von Neumann architecture
Computer-System Architecture
▪ Most systems use a single general-purpose processor
• Most systems have special-purpose processors as well
Advantages include:
▪ Increased throughput. By increasing the number of processors, we expect to
get more work done in less time. The speed-up ratio with N processors is not
N, however; rather, it is less than N. When multiple processors cooperate on
a task, a certain amount of overhead is incurred in keeping all the parts
working correctly. This overhead, plus contention for shared resources, lowers
the expected gain from additional processors. Similarly, N programmers
working closely together do not produce N times the amount of work a single
programmer would produce.
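The claim that the speed-up ratio with N processors is less than N can be made concrete with a simplified model. Assume, purely for illustration, that some fixed fraction of the work is coordination overhead that cannot be parallelized; this is an Amdahl-style sketch, not a formula the module states.

```python
# Illustrative model: a serial_fraction of the work never parallelizes,
# so the speed-up with n processors stays below n.
def speedup(n_processors, serial_fraction=0.10):
    return 1 / (serial_fraction + (1 - serial_fraction) / n_processors)

for n in (2, 4, 8):
    print(n, round(speedup(n), 2))   # always strictly below n
```

Even with only 10% overhead, 8 processors yield well under a 5x gain in this model, matching the text's point that overhead and contention lower the expected gain from additional processors.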
• Two types:
o Asymmetric Multiprocessing – each processor is assigned a
specific task.
o Symmetric Multiprocessing – each processor performs all
tasks
Figure 9. Symmetric Multiprocessing Architecture
A Dual-Core Design
A recent trend in CPU design is to include multiple computing cores on a single
chip. Such multiprocessor systems are termed multicore. They can be more efficient
than multiple chips with single cores because on-chip communication is faster than
between-chip communication. In addition, one chip with multiple cores uses
significantly less power than multiple single-core chips.
Figure 10 shows a dual-core design with two cores on the same chip. In this
design, each core has its own register set as well as its own local cache. Other designs
might use a shared cache or a combination of local and shared caches. Aside from
architectural considerations, such as cache, memory, and bus contention, these
multicore CPUs appear to the operating system as N standard processors. This
characteristic puts pressure on operating system designers—and application
programmers—to make use of those processing cores.
Figure 10. A dual-core design with two cores placed on the same chip.
Clustered Systems
• Like multiprocessor systems, but multiple systems working
together
o Usually sharing storage via a storage-area network (SAN)
o Provides a high-availability service which survives failures
▪ Asymmetric clustering has one machine in hot-
standby mode
▪ Symmetric clustering has multiple nodes running
applications, monitoring each other
o Some clusters are for high-performance computing (HPC)
▪ Applications must be written to use parallelization
o Some have distributed lock manager (DLM) to avoid
conflicting operations
• Multiprogramming organizes jobs (code and data) so CPU always has one to
execute
• A subset of total jobs in system is kept in memory
• One job selected and run via job scheduling
• When it has to wait (for I/O for example), OS switches to another job
• Timesharing (multitasking) is logical extension in which CPU switches jobs
so frequently that users can interact with each job while it is running,
creating interactive computing
• Response time should be < 1 second
• Each user has at least one program executing in memory; such a program is
called a process
• If several jobs are ready to run at the same time, the OS must choose among
them: CPU scheduling
• If processes don’t fit in memory, swapping moves them in and out to run
• Virtual memory allows execution of processes not completely in memory
• Figure 13 shows the operation of copying A from disk to the cache and to an
internal register. Thus, the copy of A appears in several places: on the magnetic disk,
in main memory, in the cache, and in an internal register. Once the increment
takes place in the internal register, the value of A differs in the various storage
systems. The value of A becomes the same only after the new value of A is
written from the internal register back to the magnetic disk.
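The situation Figure 13 describes can be sketched as a tiny simulation. The level names and values are assumptions for illustration:

```python
# Copies of integer A at each level of the storage hierarchy.
storage = {"disk": 5, "memory": 5, "cache": 5, "register": 5}

storage["register"] += 1                   # the increment happens in the register
inconsistent = len(set(storage.values())) > 1
print(inconsistent)                        # → True: the copies now differ

# Write the new value back down the hierarchy to restore consistency.
for level in ("cache", "memory", "disk"):
    storage[level] = storage["register"]
print(len(set(storage.values())) == 1)     # → True: all copies agree again
```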
Figure 13. Movement of Integer A from Disk to Register
Multiprogrammed System
• Multiprogramming needed for efficiency
o Single user cannot keep CPU and I/O devices busy at all times
o Multiprogramming organizes jobs (code and data) so CPU always has
one to execute
o A subset of total jobs in system is kept in memory
o One job selected and run via job scheduling
o When it has to wait (for I/O for example), OS switches to another job
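The job-switching behavior in the list above can be sketched as a toy scheduler. The job structure and burst labels are assumptions for illustration: several jobs are kept in memory, and when the running job reaches an I/O wait the OS switches the CPU to another job instead of idling.

```python
jobs = [
    {"name": "job1", "bursts": ["cpu", "io", "cpu"]},
    {"name": "job2", "bursts": ["cpu", "cpu"]},
]

def run(jobs):
    trace = []
    ready = list(jobs)                  # the subset of jobs kept in memory
    while ready:
        job = ready.pop(0)              # job scheduling: select one and run it
        burst = job["bursts"].pop(0)
        trace.append((job["name"], burst))
        if job["bursts"]:
            ready.append(job)           # job must wait: switch to another job
    return trace

print(run(jobs))   # interleaves job1 and job2 so the CPU is never idle
```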
Operating-System Operations
Since the operating system and the users share the hardware and software
resources of the computer system, we need to make sure that an error in a user program
could cause problems only for the one program running. With sharing, many processes
could be adversely affected by a bug in one program. For example, if a process gets stuck
in an infinite loop, this loop could prevent the correct operation of many other processes.
More subtle errors can occur in a multiprogramming system, where one erroneous
program might modify another program, the data of another program, or even the
operating system itself.
Without protection against these sorts of errors, either the computer must
execute only one process at a time or all output must be suspect. A properly designed
operating system must ensure that an incorrect (or malicious) program cannot cause
other programs to execute incorrectly.
In other words:
• Interrupt driven by hardware (most of the modern computers)
• Software error or request creates exception or trap
o Division by zero, request for operating system service
• Other process problems include infinite loop, processes modifying each
other or the operating system
Figure 15. Transition from user to kernel mode.
System calls provide the means for a user program to ask the operating
system to perform tasks reserved for the operating system on the user program’s
behalf. A system call is invoked in a variety of ways, depending on the
functionality provided by the underlying processor. In all forms, it is the method
used by a process to request action by the operating system. A system call usually
takes the form of a trap to a specific location in the interrupt vector.
B. Timer
• Timer to prevent infinite loop / process hogging resources
• Set interrupt after specific period
• The operating system decrements the counter
• When the counter reaches zero, it generates an interrupt
• The timer is set up before scheduling a process, to regain control or to
terminate a program that exceeds its allotted time
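The timer mechanism above can be sketched as a small simulation (the class and method names are assumptions for illustration):

```python
# The OS sets a counter before dispatching a process; each clock tick
# decrements it, and when it hits zero an interrupt fires so the OS
# regains control of the CPU.
class Timer:
    def __init__(self):
        self.counter = 0
        self.interrupted = False

    def set(self, ticks):
        self.counter = ticks
        self.interrupted = False

    def tick(self):
        self.counter -= 1
        if self.counter == 0:
            self.interrupted = True     # generate the interrupt

timer = Timer()
timer.set(3)                            # allotted time slice: 3 ticks
for _ in range(3):
    timer.tick()
print(timer.interrupted)   # → True: the OS regains control
```

A runaway process stuck in an infinite loop cannot disable this: the interrupt fires regardless of what the process is executing, which is exactly how the OS prevents resource hogging.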
Process Management
• A process is a program in execution. It is a unit of work within the system.
Program is a passive entity, process is an active entity.
• Process needs resources to accomplish its task
o CPU, memory, I/O, files
• Initialization data
• Process termination requires reclaim of any reusable resources
• Single-threaded process has one program counter specifying location of
next instruction to execute
• Process executes instructions sequentially, one at a time, until completion
• Multi-threaded process has one program counter per thread
• Typically system has many processes, some user, some operating system
running concurrently on one or more CPUs
• Concurrency by multiplexing the CPUs among the processes / threads
A time-shared user program such as a compiler is a process. A word-processing
program being run by an individual user on a PC is a
process. A system task, such as sending output to a printer, can also be a process (or
at least part of one). For now, you can consider a process to be a job or a time-shared
program, but later you will learn that the concept is more general. It is possible to
provide system calls that allow processes to create subprocesses to execute
concurrently.
A process needs certain resources—including CPU time, memory, files, and I/O
devices—to accomplish its task. These resources are either given to the process when it
is created or allocated to it while it is running. In addition to the various physical and
logical resources that a process obtains when it is created, various initialization data
(input) may be passed along. For example, consider a process whose function is to
display the status of a file on the screen of a terminal. The process will be given the
name of the file as an input and will execute the appropriate instructions and system
calls to obtain and display the desired information on the terminal. When the process
terminates, the operating system will reclaim any reusable resources.
The operating system is responsible for the following activities in connection with
process management:
A. Creating and deleting both user and system processes
B. Suspending and resuming processes
C. Providing mechanisms for process synchronization
D. Providing mechanisms for process communication
E. Providing mechanisms for deadlock handling
Memory Management
• All data in memory before and after processing
• All instructions in memory in order to execute
• Memory management determines what is in memory when
o Optimizing CPU utilization and computer response to users
• Memory management activities
• Keeping track of which parts of memory are currently being used
and by whom
• Deciding which processes (or parts thereof) and data to move into
and out of memory
• Allocating and deallocating memory space as needed
Storage Management
Most modern computer systems use disks as the principal on-line storage
medium for both programs and data. Most programs—including compilers, assemblers,
word processors, editors, and formatters—are stored on a disk until loaded into memory.
They then use the disk as both the source and destination of their processing. Hence,
the proper management of disk storage is of central importance to a computer system.
Since secondary storage is used frequently, it must be used efficiently. The entire
speed of operation of a computer may hinge on the speeds of the disk subsystem and
the algorithms that manipulate that subsystem.
There are, however, many uses for storage that is slower and lower in cost (and
sometimes of higher capacity) than secondary storage. Backups of disk data, storage of
seldom-used data, and long-term archival storage are some examples. Magnetic tape
drives and their tapes and CD and DVD drives and platters are typical tertiary storage
devices. The media (tapes and optical platters) vary between WORM (write-once, read-
many-times) and RW (read– write) formats.
For tertiary storage, OS activities include mounting and unmounting media in
devices, allocating and freeing the devices for exclusive use by processes, and
migrating data from secondary to tertiary storage.
• OS provides uniform, logical view of information storage
o Abstracts physical properties to logical storage unit - file
o Each medium is controlled by device (i.e., disk drive, tape drive)
▪ Varying properties include access speed, capacity, data-
transfer rate, access method (sequential or random)
A. File-System management
• Files usually organized into directories
• Access control on most systems to determine who can access what
• OS activities include
o Creating and deleting files and directories
o Primitives to manipulate files and directories
o Mapping files onto secondary storage
o Backup files onto stable (non-volatile) storage media
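The create/delete and directory-organization activities above can be sketched with a nested dictionary standing in for the on-disk structures. The representation is purely illustrative:

```python
# A tiny file-system sketch: directories as nested dicts, files as leaves.
root = {}

def create_dir(path):
    node = root
    for part in path.split("/"):
        node = node.setdefault(part, {})   # create each directory level

def create_file(path, name):
    node = root
    for part in path.split("/"):
        node = node[part]                  # walk to the target directory
    node[name] = "<file>"

create_dir("home/user")
create_file("home/user", "notes.txt")
print(root)   # → {'home': {'user': {'notes.txt': '<file>'}}}
```

Mapping these logical names onto secondary-storage blocks, the next activity in the list, is what the real file-system layer adds on top of this naming structure.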
B. Mass-Storage Management
• Usually disks used to store data that does not fit in main memory
or data that must be kept for a “long” period of time.
• Proper management is of central importance
• Entire speed of computer operation hinges on disk subsystem and
its algorithms
• OS activities
o Free-space management
o Storage allocation
o Disk scheduling
• Some storage need not be fast
o Tertiary storage includes optical storage, magnetic tape
o Still must be managed
o Varies between WORM (write-once, read-many-times) and
RW (read-write)
• I/O Subsystem
• One purpose of OS is to hide peculiarities of hardware devices
from the user
• I/O subsystem responsible for
o Memory management of I/O including buffering (storing
data temporarily while it is being transferred), caching (storing parts
of data in faster storage for performance), spooling (the overlapping of
output of one job with input of other jobs)
o General device-driver interface
o Drivers for specific hardware devices
Computing Environments
A. Traditional Computing
o Blurring over time
o Office environment
▪ PCs connected to a network, terminals attached to
mainframe or minicomputers providing batch and timesharing
▪ Now portals allowing networked and remote systems access
to same resources
o Home networks
▪ Used to be single system, then modems
▪ Now firewalled, networked
B. Mobile Computing
• Handheld smartphones, tablets, etc
• What is the functional difference between them and a
“traditional” laptop?
• Extra feature – more OS features (GPS, gyroscope)
• Allows new types of apps like augmented reality
• Use IEEE 802.11 wireless, or cellular data networks for
connectivity
• Leaders are Apple iOS and Google Android
C. Distributed Computing
• Collection of separate, possibly heterogeneous, systems
networked together
▪ Network is a communications path, TCP/IP most
common
▪ Local Area Network (LAN)
▪ Wide Area Network (WAN)
▪ Metropolitan Area Network (MAN)
▪ Personal Area Network (PAN)
• Network Operating System provides features between
systems across network
▪ Communication scheme allows systems to exchange
messages
▪ Illusion of a single system
D. Client-Server Computing
o Dumb terminals supplanted by smart PCs
o Many systems now servers, responding to requests generated by
clients
o Compute-server provides an interface to client to request services
(i.e. database)
o File-server provides interface for clients to store and retrieve files
Figure 17. Client – Server Computing
E. Peer-to-Peer Computing
o Another model of distributed system
o P2P does not distinguish clients and servers
▪ Instead all nodes are considered peers
▪ May each act as client, server or both
▪ Node must join P2P network
• Registers its service with central lookup service on
network, or
• Broadcast request for service and respond to requests
for service via discovery protocol
F. Virtualization
• Allows operating systems to run applications within other OSes
o Vast and growing industry
• Emulation used when source CPU type different from target type
(i.e. PowerPC to Intel x86)
o Generally slowest method
o Interpretation is used when the computer language is not
compiled to native code
• Virtualization – OS natively compiled for CPU, running guest
OSes also natively compiled
o Consider VMware running WinXP guests, each running
applications, all on native WinXP host OS
o A VMM (Virtual Machine Manager) provides virtualization
services
• Use cases involve laptops and desktops running multiple OSes for
exploration or compatibility
o Apple laptop running Mac OS X host, Windows as a guest
▪ Developing apps for multiple OSes without having
multiple systems
▪ QA testing applications without having multiple systems
▪ Executing and managing compute environments within
data centers
o A VMM can run natively, in which case it is also the host
▪ There is no general-purpose host then (VMware ESX and
Citrix XenServer)
G. Cloud Computing
• Web has become ubiquitous
• PCs most prevalent devices
• More devices becoming networked to allow web access
• New category of devices to manage web traffic among similar
servers: load balancers
• Operating systems like Windows 95, originally client-side only,
have evolved into Linux and Windows XP, which can act as both
clients and servers
• Delivers computing, storage, even apps as a service across a
network
• Logical extension of virtualization, because it uses virtualization as
the base for its functionality
• Amazon EC2 has thousands of servers, millions of virtual
machines, petabytes of storage available across the Internet, pay
based on usage
Many types
• Public cloud – available via Internet to anyone willing to pay
• Private cloud – run by a company for the company’s own use
• Hybrid cloud – includes both public and private cloud
components
• Software as a Service (SaaS) – one or more applications available
via the Internet (i.e., word processor)
• Platform as a Service (PaaS) – software stack ready for application
use via the Internet (i.e., a database server)
• Infrastructure as a Service (IaaS) – servers or storage available
over Internet (i.e., storage available for backup use)
• Cloud computing environments composed of traditional OSes,
plus VMMs, plus cloud management tools
• Internet connectivity requires security like firewalls
• Load balancers spread traffic across multiple applications
Figure 20. Cloud Computing.
• Can use VMM like VMware Player (Free on Windows), Virtualbox (open
source and free on many platforms - http://www.virtualbox.com)
o Use to run guest operating systems for exploration
SUMMARY
The operating system must ensure correct operation of the computer system. To
prevent user programs from interfering with the proper operation of the system, the
hardware has two modes: user mode and kernel mode. Various instructions (such as
I/O instructions and halt instructions) are privileged and can be executed only in kernel
mode. The memory in which the operating system resides must also be protected from
modification by the user. A timer prevents infinite loops. These facilities (dual mode,
privileged instructions, memory protection, and timer interrupt) are basic building
blocks used by operating systems to achieve correct operation.
A process (or job) is the fundamental unit of work in an operating system. Process
management includes creating and deleting processes and providing mechanisms for
processes to communicate and synchronize with one another. An operating system manages
memory by keeping track of what parts of memory are being used and by whom. The
operating system is also responsible for dynamically allocating and freeing memory
space. Storage space is also managed by the operating system and this includes
providing file systems for representing files and directories and managing space on mass
storage devices.
Operating systems must also be concerned with protecting and securing the
operating system and users. Protection consists of mechanisms that control the access of
processes or users to the resources made available by the computer system. Security
measures are responsible for defending a computer system from external or internal
attacks.
Several data structures that are fundamental to computer science are widely
used in operating systems, including lists, stacks, queues, trees, hash functions, maps,
and bitmaps.
GNU/Linux and BSD UNIX are open-source operating systems. The advantages
of free software and open sourcing are likely to increase the number and quality of open-
source projects, leading to an increase in the number of individuals and companies that
use these projects.
LANs and WANs are the two basic types of networks. LANs enable processors
distributed over a small geographical area to communicate, whereas WANs allow
processors distributed over a larger area to communicate. LANs typically are faster
than WANs.
There are several computer systems that serve specific purposes. These include
real-time operating systems designed for embedded environments such as consumer
devices, automobiles, and robotics. Real-time operating systems have well defined,
fixed time constraints. Processing must be done within the defined constraints, or the
system will fail. Multimedia systems involve the delivery of multimedia data and often
have special requirements of displaying or playing audio, video, or synchronized audio
and video streams.
Recently, the influence of the Internet and the World Wide Web has encouraged
the development of modern operating systems that include web browsers and
networking and communication software as integral features.
• VMware (http://www.vmware.com) provides a free “player” for Windows on
which hundreds of free “virtual appliances” can run. Virtualbox
(http://www.virtualbox.com) provides a free, open-source virtual
machine manager on many operating systems. Using such tools,
students can try out hundreds of operating systems without
dedicated hardware.
• Additional resources such as Powerpoint presentation and ebook
will be given for further explanation of the module.
• Operating System. http://www.wiley.com/college
• Operating System. http://www.os-book.com.
2. The issue of resource utilization shows up in different forms in different
types of operating systems. List what resources must be managed carefully in the
following settings:
a. Mainframe or minicomputer systems
b. Workstations connected to servers
c. Handheld computers
3. Under what circumstances would a user be better off using a time sharing
system rather than a PC or single-user workstation?
POST ASSESSMENT
Write the letters only. Place your answers on a separate pad paper for submission.
1. Acts as an intermediary between the computer user and the computer hardware
a. Operating System
b. Application Software
c. Facebook
d. Microsoft Office
2. Software-generated interrupt caused either by an error or a user request
a. Polling
b. Vectored
c. Disabled
d. trap
3. 8 bits is the equivalent of
a. 2 bytes
b. 1 kilobyte
c. 1 byte
d. 1 word
4. Rigid metal or glass platters covered with magnetic recording material
a. Register
b. Magnetic disk
c. Magnetic tapes
d. Main memory
5. Logical extension in which CPU switches jobs so frequently that users can interact
with each job while it is running, creating interactive computing
a. Multitasking
b. I/O swapping
c. Virtual memory
d. Swapping
6. Mechanism for controlling access of process or users to resources defined by the OS
a. Privilege
b. Security
c. Escalation
d. Protection
7. I/O method in which, once the I/O is started, control returns to the user program
without waiting for completion.
a. Device-status table
b. Synchronous Method
c. Asynchronous Method
d. System calls
Directions: Fill each blank with the correct answer. Write your answers on a separate
clean sheet of paper for submission.
8. Devices that run concurrently with the CPU, interrupting it when done, are called
_____________.
9. The structure that contains the addresses of all the service routines is called _____________.
10. The table entry that indicates a device's type, address, and state (not functioning,
idle, or busy) is _____________.
11. The component that determines the logical interaction between the device and the
computer is called _____________.
12. Copying information into a faster storage system is _____________.
13. Increasing the number of processors so that we expect to get more work done in
less time is _____________.
14. The structure used for high-speed I/O devices able to transmit information at close
to memory speeds is known as ______________.
15. Faster storage for data and information, but classified as volatile, is called
________________.
REFERENCES
Silberschatz, A. et al. (2013). Operating System Concepts. 9th Edition. Hoboken, New
Jersey, USA: John Wiley & Sons, Inc.
Stallings, William (2012). Operating System Internals and Design Principles. 7th edition.
Upper Saddle River, New Jersey: Pearson Education, Inc.