Module_01_OS-V1
Views of OS:
• User View:
• System View:
o We can view the system as a resource allocator, i.e. a computer system has many resources that
may be used to solve a problem. The OS acts as the manager of these resources. The OS
must decide how to allocate these resources to programs and users so that it can
operate the computer system efficiently and fairly.
o A different view of an OS is that it needs to control the various I/O devices and user programs,
i.e. an OS is a control program that manages the execution of user programs to prevent
errors and improper use of the computer.
o Resources include CPU time, memory space, file-storage space, I/O devices,
and so on.
• For a computer to start running, i.e. when it is powered up or rebooted, it needs an initial
bootstrap program. It is stored in ROM or EEPROM and is known as firmware.
• What the bootstrap program does:
1. Initialization: of everything from CPU registers to device controllers to main-memory contents.
2. The bootstrap program must locate and load into memory the operating-system
kernel. The OS then starts executing the first process, called init, and waits for some
event to occur.
3. The occurrence of an event is usually signaled by an interrupt from either hardware or software.
Hardware may trigger an interrupt by sending a signal to the CPU, usually by way of the system bus.
Software may trigger an interrupt by executing a special operation called a system call (also called
a monitor call).
Storage Structure
To be executed, a program must be in main memory. The instruction-execution cycle is explained on the
von Neumann architecture and consists of the instruction fetch, decode, execute, and store-back phases.
I/O Structure
Computer-System Architecture
Computer systems are roughly classified into two types based on the number of processors:
Single-Processor Systems (or Uniprocessor Systems)
Multiprocessor Systems
Single-Processor Systems
• one main processor is present (i.e. only one CPU does all the work, from fetch to execution)
• such systems range from PDAs through mainframes
• almost all of them also contain special-purpose processors dedicated to specific operations
• these are device-specific processors, such as disk, keyboard, and graphics controllers, which
run a limited instruction set and do not run user processes.
example: many PCs contain a microprocessor in the keyboard that converts the keystrokes into
codes to be sent to the main CPU.
Multiprocessor Systems
Multiprocessor systems have two or more processors in close communication, sharing the computer bus
and sometimes the clock, memory, and peripheral devices.
The advantages of multiprocessor systems are: increased throughput, economy of scale, and increased reliability.
Increased Throughput: By increasing the number of processors, we expect to get more work done in less time.
However, when multiple processors cooperate on a task, a certain amount of overhead is incurred in keeping
all parts working correctly, so the speed-up with N processors is less than N.
Economy of Scale: Multiprocessor systems can cost less than equivalent multiple single-processor systems,
because they share peripherals, mass storage, and power supplies. If several programs operate on the same
data, it is cheaper to store the data on one disk and let all the processors share it than to maintain
copies on several systems.
Increased Reliability: If functions are distributed properly among several processors, then the failure of one
processor will not halt the system; it will only slow it down. The ability to continue providing service
proportional to the level of surviving hardware is called graceful degradation. Some systems go beyond
graceful degradation and are called fault tolerant, because they can continue operating after suffering the
failure of any single component.
Asymmetric multiprocessing: each processor is assigned a specific task. A master processor controls the
system; the other processors either look to the master for instruction or have predefined tasks. This scheme
defines a master-slave relationship. The master processor schedules and allocates work to the slave
processors.
Symmetric multiprocessing (SMP): each processor performs all tasks within the operating system. SMP
means that all processors are peers; no master-slave relationship exists between processors. Figure
1.2 illustrates a typical SMP architecture. An example of an SMP system is Solaris, a commercial version
of UNIX designed by Sun Microsystems. A Solaris system can be configured to employ dozens of
processors, all running Solaris. The benefit of this model is that many processes can run simultaneously
(N processes can run if there are N CPUs) without causing a significant deterioration of performance.
However, we must carefully control I/O to ensure that the data reach the appropriate processor. Also, since
the CPUs are separate, one may be sitting idle while another is overloaded, resulting in inefficiencies.
These inefficiencies can be avoided if the processors share certain data structures. A multiprocessor
system of this form allows processes and resources, such as memory, to be shared dynamically among
the various processors and can lower the variance in load among them. Virtually all modern operating
systems, including Windows, Windows XP, Mac OS X, and Linux, now provide support for SMP.
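As a small illustration of an SMP machine's peer CPUs, the sketch below (assuming a POSIX system where sysconf exposes _SC_NPROCESSORS_ONLN, as glibc and the BSDs do) asks how many processors the kernel is currently scheduling on.

```c
/* Sketch: query the number of online CPUs on a POSIX system.
 * On an N-CPU SMP system, up to N processes can run at once. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long n = sysconf(_SC_NPROCESSORS_ONLN);  /* processors currently online */
    printf("online CPUs: %ld\n", n);
    return 0;
}
```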
Clustered Systems
Another type of multiple-CPU system is the clustered system. Like multiprocessor systems, clustered
systems gather together multiple CPUs to accomplish computational work. Clustered systems differ from
multiprocessor systems, however, in that they are composed of two or more individual systems coupled
together. The clustered computers share storage and are closely linked via a local-area network (LAN) or
a faster interconnect such as InfiniBand. Clustering is usually used to provide high-availability service;
that is, service will continue even if one or more systems in the cluster fail. High availability is generally
obtained by adding a level of redundancy in the system. A layer of cluster software runs on the cluster
nodes. Each node can monitor one or more of the others (over the LAN). If the monitored machine fails,
the monitoring machine can take ownership of its storage and restart the applications that were running
on the failed machine. The users and clients of the applications see only a brief interruption of service.
There are two types of clustering systems:
• Asymmetric clustering systems
• Symmetric clustering systems
In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications.
The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-
standby host becomes the active server.
In symmetric mode, two or more hosts are running applications, and are monitoring each other. This mode
is obviously more efficient, as it uses all of the available hardware. It does require that more than one
application be available to run.
A fault-tolerant system should continue to function, perhaps in a degraded form, when faced with
failures. In such systems communication faults, machine failures (of type fail-stop), storage-device
crashes, and decays of storage media should all be tolerated to some extent.
Multiprogramming
The idea is as follows: The operating system keeps several jobs in memory simultaneously. This set of
jobs can be a subset of the jobs kept in the job pool (which contains all jobs that enter the system), since the
number of jobs that can be kept simultaneously in memory is usually smaller than the number of jobs that
can be kept in the job pool. The operating system picks and begins to execute one of the jobs in memory.
Eventually, the job may have to wait for some task, such as an I/O operation, to complete.
In a non-multiprogrammed system, the CPU would sit idle. In a multiprogrammed system, the operating
system simply switches to, and executes, another job. When that job needs to wait, the CPU is switched
to another job, and so on. Eventually, the first job finishes waiting and gets the CPU back. As long as at
least one job needs to execute, the CPU is never idle.
Multiprogramming allows several jobs to be in memory at the same time, thus ensuring that the CPU
always has a job to execute.
Timesharing systems are an extension of multiprogramming in which CPU scheduling algorithms rapidly
switch between jobs, providing the illusion that each job is running concurrently.
In time-sharing systems, the CPU executes multiple jobs by switching among them, but the switches occur
so frequently that the users can interact with each program while it is running.
Time sharing requires an interactive (or hands-on) computer system, which provides direct
communication between the user and the system.
The user gives instructions to the operating system or to a program directly, using an input device such as
a keyboard or a mouse, and waits for immediate results on an output device. Accordingly, the response
time should be short, typically less than one second.
A time-shared operating system allows many users to share the computer simultaneously. Since each
action or command in a time-shared system tends to be short, only a little CPU time is needed for each
user. As the system switches rapidly from one user to the next, each user is given the impression that the
entire computer system is dedicated to his use, even though it is being shared among many users. A time-
shared operating system uses CPU scheduling and multiprogramming to provide each user with a small
portion of a time-shared computer. Each user has at least one separate program in memory. A program
loaded into memory and executing is called a process.
Operating-System Operations
Modern operating systems are interrupt driven: if there are no requests to service, the OS sits idle.
Events are signaled by an interrupt or a trap.
Dual-Mode Operation:
In order to ensure the proper execution of the operating system, we must be able to distinguish between
the execution of operating-system code and user-defined code.
In dual mode operation, there are two separate modes: user mode and kernel mode (also called supervisor
mode, system mode, or privileged mode).
A bit, called the mode bit, is added to the hardware of the computer to indicate the current mode: kernel
(0) or user (1).
With the mode bit, we are able to distinguish between a task that is executed on behalf of the operating
system and one that is executed on behalf of the user.
When the computer system is executing on behalf of a user application, the system is in user mode.
However, when a user application requests a service from the operating system (via a system call), it must
transition from user to kernel mode to fulfill the request. This is shown below
At system boot time, the hardware starts in kernel mode. The operating system is then loaded and starts
user applications in user mode. Whenever a trap or interrupt occurs, the hardware switches from user
mode to kernel mode (that is, changes the state of the mode bit to 0). Thus, whenever the operating system
gains control of the computer, it is in kernel mode. The system always switches to user mode (by setting
the mode bit to 1) before passing control to a user program.
Advantages:
The dual mode of operation provides us with the means for protecting the operating system from errant
users, and errant users from one another.
We accomplish this protection by designating some of the machine instructions that may cause harm as
privileged instructions. The hardware allows privileged instructions to be executed only in kernel mode.
If an attempt is made to execute a privileged instruction in user mode, the hardware does not execute the
instruction but rather treats it as illegal and traps it to the operating system. The instruction to switch to
user mode is an example of a privileged instruction. Some other examples include I/O control, timer
management, and interrupt management.
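To see the hardware enforce this, the minimal sketch below (assuming an x86-64 Linux machine and GCC-style inline assembly) attempts the privileged CLI instruction, which disables interrupts, from user mode; the CPU refuses to execute it and traps, and the kernel terminates the process with SIGSEGV.

```c
/* Sketch: a privileged instruction attempted in user mode traps
 * to the kernel instead of executing (x86-64 Linux assumed). */
#include <stdio.h>

int main(void)
{
    printf("attempting a privileged instruction...\n");
    __asm__ volatile ("cli");    /* disable interrupts: kernel mode only */
    printf("never reached\n");   /* the trap terminates the process */
    return 0;
}
```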
Timer:
We must ensure that the OS maintains control over the CPU. We must prevent a user program from getting
stuck in an infinite loop, or from failing to call system services and never returning control to the OS.
The timer is a fallback mechanism by which the OS reclaims control over the machine. Setting the timer is
a privileged instruction, and the timer is set to generate an interrupt after a specified period of time.
When the timer expires, it generates an interrupt that is handled by the kernel, which then decides how
execution resumes.
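As a user-space analogy (POSIX assumed), the sketch below arms a one-shot timer with setitimer; when it expires, SIGALRM interrupts the program's infinite loop, much as the OS timer interrupt lets the kernel reclaim the CPU from a looping user program.

```c
/* Sketch: a timer interrupt reclaiming control from an infinite loop
 * (user-space analogy using POSIX setitimer and SIGALRM). */
#include <signal.h>
#include <sys/time.h>
#include <unistd.h>

static void on_alarm(int sig)
{
    (void)sig;
    write(1, "timer fired: reclaiming control\n", 32);
    _exit(0);                    /* control is taken away from the loop */
}

int main(void)
{
    struct itimerval t = { {0, 0}, {1, 0} };  /* no repeat; fire in 1 s */
    signal(SIGALRM, on_alarm);
    setitimer(ITIMER_REAL, &t, NULL);
    for (;;)
        ;                        /* user program stuck in an infinite loop */
}
```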
Process Management:
A process is a program in execution. A process abstraction is a fundamental OS mechanism for the
management of concurrent program execution.
To accomplish its task, a process requires certain resources, such as CPU time, memory, and I/O devices.
These resources are allocated to the process either when it is created or while it is running.
When a process terminates, the OS reclaims all of its reusable resources.
A process is the execution of machine instructions: a program by itself is not a process but a passive
entity, whereas a process is an active entity.
The operating system is responsible for the following activities in connection with process management:
• Creating and deleting both user and system processes
• Suspending and resuming processes
• Providing mechanisms for process synchronization
• Providing mechanisms for process communication
• Providing mechanisms for deadlock handling
Memory Management:
Main memory is central to the operation of a modern computer. Main memory is a large array of bytes,
ranging in size from hundreds of thousands to billions, and each byte has its own address.
The CPU reads instructions from main memory during the instruction-fetch cycle, and it both
reads and writes data during the data-fetch cycle. I/O operations also read and write data in main
memory. Main memory is generally the only large storage device that the CPU can address and access
directly.
For a program to be executed, it must be loaded into memory and mapped to absolute addresses. While it
is executing, it accesses instructions and data from memory by generating absolute addresses. When the
program terminates, all of its memory is returned for reuse.
To improve both CPU utilization and response time, several programs are kept in memory. The operating
system is responsible for the following activities in connection with memory management:
• Keeping track of which parts of memory are currently being used and by whom
• Deciding which processes (or parts thereof) and data to move into and out of memory
• Allocating and deallocating memory space as needed
STORAGE MANAGEMENT
File-System Management:
File management is one of the most visible components of an OS. Computers store data on different types of
physical media, such as magnetic disks, magnetic tapes, and optical disks. For convenient use of the computer
system, the OS provides a uniform logical view of information storage.
The OS maps files onto physical media and accesses these files via the storage devices. A file is a logical
collection of related information and may hold programs or data; data files may be numeric, alphabetic, or
alphanumeric. Files can be organized into directories.
The OS is responsible for the following activities,
• Creating and deleting files.
• Creating and deleting directories.
• Supporting primitives for manipulating files and directories.
• Mapping files onto secondary storage.
• Backing up files on stable storage media.
Mass-Storage Management:
Mass storage is the mechanism by which the computer system stores information so that it can be retrieved
later. It is used to hold both data and programs, while the programs and data currently in use reside in
main memory. Because main memory is small and volatile, secondary storage devices are used; the
magnetic disk is of central importance to the computer system.
The OS is responsible for the following activities,
• Free-space management.
• Storage allocation.
• Disk scheduling.
The overall speed of the computer system may depend on the speed of the disk subsystem.
Caching:
Caching is an important principle of computer systems. Information is normally kept in some
storage system, such as main memory; as it is used, it is copied into a faster storage system, the cache, on a
temporary basis. When we need a particular piece of information, we first check whether it is in the cache.
If it is, we use it directly from the cache; if it is not, we use the information from its source and place a
copy in the cache, under the assumption that we will need it again soon. Doing so decreases the time
needed to access that information.
A multiprocessor environment must provide cache coherency in hardware so that all CPUs have the most
recent value in their caches.
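The check-cache-first logic can be shown with a toy direct-mapped cache in C; slow_source() here is a hypothetical stand-in for the slower storage level.

```c
/* Toy sketch of the caching principle: check the fast cache first;
 * on a miss, fetch from the slow source and keep a temporary copy. */
#include <stdio.h>

#define SLOTS 4

static int keys[SLOTS], vals[SLOTS], used[SLOTS];

static int slow_source(int key) { return key * key; }  /* stand-in */

static int lookup(int key)
{
    int s = key % SLOTS;
    if (used[s] && keys[s] == key)
        return vals[s];          /* hit: use the cached copy */
    int v = slow_source(key);    /* miss: go back to the source... */
    keys[s] = key; vals[s] = v; used[s] = 1;   /* ...and cache it */
    return v;
}

int main(void)
{
    printf("%d %d\n", lookup(7), lookup(7));   /* miss, then hit */
    return 0;
}
```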
Networking:
Networking enables users to share resources and speeds up computation. Processes communicate with
one another through various communication lines, such as high-speed buses or networks.
The following parameters are considered while designing a network:
• Topology of the network.
• Type of network.
• Physical media.
• Communication protocols.
• Routing algorithms.
Protection and Security: Protection is any mechanism for controlling the access of
processes or users to the resources defined by a computer system. This mechanism must provide means
for specification of the controls to be imposed and means for enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between component
subsystems. Early detection of interface errors can often prevent contamination of a healthy subsystem by
another subsystem that is malfunctioning.
An unprotected resource cannot defend against use (or misuse) by an unauthorized or incompetent user.
A protection-oriented system provides a means to distinguish between authorized and unauthorized usage.
Protection and security require the system to be able to distinguish among all its users. Most operating
systems maintain a list of user names and associated user identifiers (user IDs).
In Windows NT parlance, this is a security ID (SID). These numerical IDs are unique, one per user. When
a user logs in to the system, the authentication stage determines the appropriate user ID for the user. That
user ID is associated with all of the user's processes and threads. When an ID needs to be user readable, it
is translated back to the user name via the user name list.
SPECIAL-PURPOSE SYSTEMS
Real Time Embedded System:
o Embedded computers are found almost everywhere: in car engines, robots, alarm systems,
medical imaging systems, industrial control systems, microwave ovens, weapon systems, etc.
o This class of computers has a very specific task and runs an OS with very limited features.
Usually these systems have little or no user interface.
Multimedia Systems: Multimedia can be defined as any application that combines text with graphics,
animation, audio, video, and/or virtual reality. A computer system is a combination of equipment
(hardware), processes and programs (software), and people organized to perform a function. Combining
these definitions, a business multimedia system includes equipment, programs, and people organized for
the purposes of communication, data storage and retrieval systems (multimedia databases and electronic
filing systems), information security, and Internet use (Web pages and electronic-business applications).
Within organizations, multimedia systems are used in all forms of information systems from transaction
processing systems to executive decision support systems. These systems also can be found across
industries such as accounting, banking, communications, education, entertainment, insurance,
manufacturing, medical, retailing, and real estate. Anywhere there is a need for combining text, pictures,
sounds, and animation, multimedia systems are found.
Multimedia systems are used for security to keep intruders out of a system and for the protection of stored
documents. Scanning devices are available to scan potential user's eyes (retina imaging) or thumb prints
to gain access to a computer or site. Other systems can scan a person's signature or capture voice pattern
recognition for the same purposes. Stored text, pictures, original document images, sound files, and video
files can be protected through encryption methods, read/write protection, password management, and
copyright protection that keep intruders from copying or accessing sensitive files.
Handheld Systems:
Handheld systems include personal digital assistants (PDAs), such as Palm Pilots, and cellular
telephones with connectivity to a network such as the Internet. Because of their limited size,
most handheld devices have a small amount of memory, slow processors, and small
display screens.
• Many handheld devices have between 512 KB and 8 MB of memory. As a result, the operating
system and applications must manage memory efficiently. This includes returning all allocated
memory back to the memory manager once the memory is no longer being used.
• Currently, many handheld devices do not use virtual memory techniques, thus forcing program
developers to work within the confines of limited physical memory.
• Processors for most handheld devices often run at a fraction of the speed of a processor in a PC.
Faster processors require more power. To include a faster processor in a handheld device would
require a larger battery that would have to be replaced more frequently.
• The last issue confronting program designers for handheld devices is the small display screens
typically available. One approach for displaying the content in web pages is web clipping, where
only a small subset of a web page is delivered and displayed on the handheld device.
Some handheld devices may use wireless technology, such as Bluetooth, allowing remote access to e-
mail and web browsing. Cellular telephones with connectivity to the Internet fall into this category. Their
use continues to expand as network connections become more available and other options, such
as cameras and MP3 players, expand their utility.
Computing Environments
Traditional Computing:
As computing matures, the lines separating many of the traditional computing environments are blurring.
Consider the "typical office environment." Just a few years ago, this environment consisted of PCs
connected to a network, with servers providing file and print services. Remote access was awkward, and
portability was achieved by use of laptop computers. Terminals attached to mainframes were prevalent
at many companies as well, with even fewer remote access and portability options. The current trend is
toward providing more ways to access these computing environments. Web technologies are stretching
the boundaries of traditional computing. Companies establish portals, which provide web accessibility to
their internal servers.
Network computers are essentially terminals that understand web-based computing. Handheld
computers can synchronize with PCs to allow very portable use of company information. Handheld PDAs
can also connect to wireless networks to use the company's web portal (as well as the myriad other web
resources). At home, most users had a single computer with a slow modem connection to the office, the
Internet, or both. Today, network-connection speeds once available only at great cost are relatively
inexpensive, giving home users more access to more data. These fast data connections are allowing home
computers to serve up web pages and to run networks that include printers, client PCs, and servers. Some
homes even have firewalls to protect their networks from security breaches. Those firewalls cost
thousands of dollars a few years ago and did not even exist a decade ago. In the latter half of the previous
century, computing resources were scarce. (Before that, they were nonexistent!)
For a period of time, systems were either batch or interactive. Batch systems processed jobs in bulk, with
predetermined input (from files or other sources of data). Interactive systems waited for input from users.
To optimize the use of the computing resources, multiple users shared time on these systems. Time-
sharing systems used a timer and scheduling algorithms to rapidly cycle processes through the CPU,
giving each user a share of the resources. Today, traditional time-sharing systems are uncommon. The
same scheduling technique is still in use on workstations and servers, but frequently the processes are all
owned by the same user (or a single user and the operating system). User processes, and system processes
that provide services to the user, are managed so that each frequently gets a slice of computer time.
Consider the windows created while a user is working on a PC, for example, and the fact that the user
may be performing different tasks at the same time.
Client-Server Computing:
Since PCs have become faster, more powerful, and cheaper, designers have shifted away from centralized
system architecture. User-interface functionality that used to be handled by the centralized system is now
handled by PCs, so today the centralized system acts as a server that satisfies the requests of clients.
Server systems can be classified as follows:
a. Compute-Server Systems: provide an interface to which clients can send requests to perform an
action; in response, the server executes the action and sends the result back to the client.
b. File-Server Systems: provide a file-system interface where clients can create, update, read, and delete
files.
Peer-to-Peer Computing:
When PCs were introduced in the 1970s, they were considered standalone computers, i.e. only one user
could use one at a time.
With the widespread use of the Internet, PCs were connected to computer networks. With the introduction
of the Web in the mid-1990s, network connectivity became an essential component of a computer system.
Virtually all modern PCs and workstations can run a web browser, and operating systems include the
system software that enables the computer to access the Web.
In distributed systems, or loosely coupled systems, the processors communicate with one
another through various communication lines, such as high-speed buses or telephone lines.
A network OS builds on the concepts of networks and distributed systems: it provides features for file
sharing across the network and provides communication that allows different processes on different
computers to share resources.
Web-Based Computing
Web-based computing has increased the emphasis on networking. It uses PCs, handheld
PDAs, and cell phones. One feature of this type of computing is load balancing, in which network
connections are distributed among a pool of similar servers.
• Web has become ubiquitous
• PCs most prevalent devices
• More devices becoming networked to allow web access
• New category of devices to manage web traffic among similar servers: load balancers
• Operating systems like Windows 95, originally client-side only, have evolved into systems such as
Linux and Windows XP, which can act as both clients and servers
Distributed Systems:
A distributed system is one in which hardware or software components located at networked computers
communicate and coordinate their actions only by passing messages.
A distributed system looks to its users like an ordinary OS but runs on multiple, independent CPUs.
Distributed systems depend on networking for their functionality: networking allows communication, so
distributed systems are able to share computational tasks and provide a rich set of features to users.
Networks vary by the protocols used, the distances between nodes, and the transport media. Protocols:
TCP/IP, ATM, etc. Networks: LAN, MAN, WAN, etc. Transport media: copper wires, optical fibers, and
wireless transmission.
Figure 2.1
Consider writing a simple program to copy data from one file to another; the sequence of system calls
it needs is described below (and sketched in code after the list).
o The first inputs the program needs are the names of the two files, which can be
specified in many ways. This sequence requires many I/O system calls.
o Next, the program must open the input file, which requires another system call. If
opening the file fails, the program should display an error message on the console (another
system call) and terminate abnormally (another system call).
o Next, the program must create the output file (another system call). If this fails, it should
display an error message on the console (another system call) and abort (another
system call).
o Next, we enter a loop that reads from the input file (a system call) and writes to the output
file (another system call). Each read or write may fail, requiring further system calls to
handle the error.
o Finally, after the entire file is copied, the program closes both files (system calls),
writes a message to the console (system call), and terminates normally (system call).
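A minimal sketch of that copy program, written against the POSIX system calls open, read, write, and close; the error handling mirrors the sequence described above.

```c
/* Sketch: file copy via system calls, mirroring the sequence above. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char buf[4096];
    ssize_t n;

    if (argc != 3) {                         /* the two file names */
        fprintf(stderr, "usage: %s input output\n", argv[0]);
        exit(1);
    }
    int in = open(argv[1], O_RDONLY);        /* open the input file */
    if (in < 0) { perror("open input"); exit(1); }   /* message + abort */

    int out = open(argv[2], O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (out < 0) { perror("create output"); exit(1); }

    while ((n = read(in, buf, sizeof buf)) > 0)      /* read/write loop */
        if (write(out, buf, n) != n) { perror("write"); exit(1); }

    close(in);                               /* close both files */
    close(out);
    printf("copy complete\n");               /* message to the console */
    return 0;
}
```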
• Application developers design programs according to an Application Program Interface
(API). The API specifies the set of functions that are available to an application programmer,
including the parameters that are passed to each function and the return values the programmer
can expect.
• Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based
systems (UNIX, Linux, and Mac OS X), and Java API for the Java virtual machine (JVM).
• The runtime support system(a set of functions built into libraries included with a compiler) for
most programming languages provides a system call interface that serves as the link to system
calls made available by the OS.
• The system-call interface intercepts function calls in the API and invokes the necessary system
calls within the OS.
• A number is associated with each system call and the system-call interface maintains a table
indexed according to these numbers.
• The system call interface invokes intended system call in OS kernel and returns status of the
system call and any return values.
• The caller needs to know nothing about how the system call is implemented or what it does
during execution.
• Figure 2.2 below illustrates how the OS handles a user application that invokes the
open() system call; a short sketch contrasting the API wrapper with the raw system-call
interface follows the figure.
Figure 2.2
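The sketch below (Linux assumed, since syscall() and SYS_write are Linux-specific) reaches the same kernel service two ways: through the write() API wrapper provided by the C library, and directly through the system-call interface by its number.

```c
/* Sketch: API wrapper vs. raw system-call interface (Linux). */
#include <sys/syscall.h>
#include <unistd.h>

int main(void)
{
    write(1, "via API wrapper\n", 16);                 /* libc wrapper */
    syscall(SYS_write, 1, "via syscall number\n", 19); /* by number   */
    return 0;
}
```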
Three general methods are used to pass the parameters to the OS.
1. The simplest approach is to pass the parameters in registers.
2. In some cases there can be more parameters than registers. In these cases the parameters
are stored in a block or table in memory, and the address of the block is passed as a
parameter in a register, as shown in figure 2.3 below. This approach is used by Linux and
Solaris.
Figure 2.3
3. Parameters can also be placed, or pushed, onto the stack by the program and popped off the
stack by the OS.
Some OS prefer the block or stack methods, because those approaches do not limit the number or
length of parameters being passed.
Example:
MS-DOS
• MS-DOS is an example of a single-tasking system; it has a command interpreter that is
invoked when the computer is started, as shown in figure 2.4 (a).
• To run a program, MS-DOS uses a simple method: it does not create a new process when one
program is running.
• It loads the program into memory and gives the program as much memory as possible, as shown
in figure 2.4 (b).
FreeBSD
• FreeBSD is an example of a multitasking system.
• In FreeBSD the command interpreter may continue running while another program is executed,
as shown in figure 2.5 below.
• fork() is the system call used to create a new process.
• Then the selected program is loaded into memory via an exec() system call, and the program is
executed (see the sketch after this list).
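A minimal sketch of that fork()/exec() sequence on a POSIX system; "date" is just an arbitrary program chosen for the example.

```c
/* Sketch: shell-style process creation with fork() and exec(). */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                  /* create a new process */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {                      /* child: load the program */
        execlp("date", "date", (char *)NULL);
        perror("execlp");                /* runs only if exec fails */
        _exit(1);
    }
    waitpid(pid, NULL, 0);               /* interpreter waits for child */
    printf("child done; interpreter continues\n");
    return 0;
}
```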
2. File management
• create file, delete file
o System calls can be used to create and delete files. These system calls may require the name of
the file and some of its attributes.
• open, close file
o These system calls open the file for use and, finally, close it.
• read, write, reposition
o Other operations involve reading the file, writing to it, and repositioning within it after it is
opened.
• get and set file attributes
o For directories, a similar set of operations is needed. It is also sometimes necessary to reset
some of the attributes of files and directories; the get file attribute and set file
attribute system calls are used for this type of operation (a stat() sketch follows this list).
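On POSIX systems the "get file attributes" operation is exposed as stat(); a small sketch, where example.txt is a hypothetical file name.

```c
/* Sketch: reading file attributes with the POSIX stat() call. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    struct stat st;
    if (stat("example.txt", &st) == 0)   /* hypothetical file name */
        printf("size: %lld bytes, mode: %o\n",
               (long long)st.st_size, (unsigned)st.st_mode & 0777);
    else
        perror("stat");
    return 0;
}
```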
3. Device management
• request device, release device
• read, write, reposition
• get device attributes, set device attributes
• logically attach or detach devices
o The system calls are also used for accessing devices.
o Many of the system calls used for files are also used for devices.
o A system with multiple users may require us first to request the device, to ensure exclusive
use of it.
o After using the device, it must be released with a release system call. These functions are
similar to the open and close system calls for files.
o The read, write, and reposition system calls may be used with devices.
o MS-DOS and UNIX merge the I/O devices and the files into a combined file-device structure.
4. Information maintenance
• get time or date, set time or date
• get system data, set system data
• get and set process, file, or device attributes
o Many system calls exist for the purpose of transferring information between the user
program and the operating system.
o For example, most systems have a system call to return the current time and date (see the
sketch after this list).
o Other system calls may return information about the system, such as the number of current
users, the version number of the operating system, the amount of free memory or disk
space, and so on.
o The operating system also keeps information about all its processes, and system calls are
used to access this information.
o System calls are also used to reset the process information (get process attributes and set
process attributes).
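For instance, the "get time or date" service is reachable from C through time(); a minimal sketch:

```c
/* Sketch: asking the OS for the current time and date. */
#include <stdio.h>
#include <time.h>

int main(void)
{
    time_t now = time(NULL);                  /* seconds since the epoch */
    printf("current time: %s", ctime(&now));  /* ctime adds a newline */
    return 0;
}
```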
5. Communications
• create, delete communication connection
• send, receive messages
• transfer status information
• attach and detach remote devices
There are two models of inter-process communication:
➢ Message Passing model
• In message passing model, the communicating processes exchange messages with one
another to transfer information. Messages can be exchanged between the processes either
directly or indirectly through a common mailbox.
• Each computer in a network has a host name. Similarly, each process has a process
name, and this name is translated into an identifier by which the operating system can refer
to the process. The get hostid and get processid system calls do this translation.
• The identifiers are then passed to the general purpose open and close calls provided by the
file system or to specific open connection and close connection system calls, depending
on the system's model of communication.
• The recipient process must give its permission for communication to take place with an
accept connection system call.
• The receiving daemons execute a wait for connection call and are awakened when a
connection is made.
• The source of the communication, known as the client, and the receiving daemon, known
as a server, exchange messages by using read message and write message system calls.
• The close connection call terminates the communication. (A minimal message-passing
sketch follows this list.)
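Below is a minimal message-passing sketch between two related processes using a POSIX pipe; the networked read message / write message calls described above follow the same send-and-receive pattern.

```c
/* Sketch: message passing between two processes over a pipe. */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd[2];
    char buf[64];

    if (pipe(fd) < 0) return 1;          /* create the channel */
    if (fork() == 0) {                   /* child: the receiver */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n < 0) n = 0;
        buf[n] = '\0';
        printf("received: %s\n", buf);
        _exit(0);
    }
    close(fd[0]);                        /* parent: the sender */
    const char *msg = "hello via message passing";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);                        /* "close connection" */
    wait(NULL);
    return 0;
}
```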
6. Protection
• Protection provides a mechanism for controlling access to the resources provided by a
computer system
• Historically, protection was a concern only on multiprogrammed computer systems with
several users.
• Nowadays all computer systems, from servers down to PDAs, must be concerned with protection.
• System calls providing protection include set permission and get permission, which
manipulate the permission settings of resources such as files and disks.
• The allow user and deny user system calls specify whether a particular user is, or is not,
allowed access to certain resources (a chmod() sketch follows this list).
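On POSIX systems the "set permission" operation corresponds to chmod(); a small sketch, where example.txt is a hypothetical file name.

```c
/* Sketch: manipulating permission settings with POSIX chmod(). */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    /* 0640 = owner read/write, group read, others nothing */
    if (chmod("example.txt", 0640) != 0)
        perror("chmod");
    return 0;
}
```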
Figure 2.6
o In MS-DOS, the interfaces and levels of functionality are not well separated.
o For instance, application programs are able to access the basic I/O routines to write
directly to the display and disk drives.
o UNIX is another example of a simple structure. Initially it was limited by hardware
functionality.
➢ It consists of two separable parts: the kernel and the system programs.
➢ The kernel is further separated into a series of interfaces and device drivers.
➢ We can view the traditional UNIX operating system as being layered, as shown in
below figure 2.7.
Figure 2.7
o Everything below the system-call interface and above the physical hardware is the
kernel.
o The kernel provides the file system, CPU scheduling, memory management, and other
operating-system functions through system calls.
o This monolithic structure was difficult to implement and maintain.
• Layered Approach
o A system can be made modular in many ways. One method is the layered approach,
in which the OS is divided into a number of layers, each built on top of
lower layers.
o The bottom layer (layer 0) is the hardware, and the highest layer (layer N) is the user interface.
This layering structure is depicted in figure 2.8 below.
Figure 2.8
o An OS layer is an implementation of an abstract object made up of data and the operations that
manipulate those data.
o A typical operating-system layer say layer M consists of data structures and a set of routines
that can be invoked by higher-level layers. Layer M, in turn can invoke operations on lower
level layers.
o The main advantage of the layered approach is simplicity: each layer uses only the services
and functions provided by lower layers. This approach simplifies debugging and
verification. Once the first layer is debugged, its correct functioning is guaranteed while
debugging the second layer. If an error is identified, it must be a problem in that layer, because
the layers below it are already debugged.
o Each layer hides certain data structures, operations, and hardware from higher-level
layers.
o A problem with layered implementations is that they tend to be less efficient.
• Micro Kernels
o In the mid-1980s, researchers at Carnegie Mellon University developed an operating system
called Mach that modularized the kernel using the microkernel approach: all
nonessential components are removed from the kernel and implemented as system- and
user-level programs. The result is a smaller kernel.
o The main function of the micro kernels is to provide communication facilities between the
client program and various services that are running in user space.
o This approach provided a high degree of flexibility and modularity.
o One benefit is ease of extending the OS: new services are added in user space and do
not require modification of the kernel.
o This approach also provides more security and reliability.
o Most of the services will be running as user process rather than the kernel process.
o A micro kernel in Windows NT provides portability and modularity.
• Modules: The best current methodology for operating-system design involves using object-
oriented programming techniques to create a modular kernel.
o Here, the kernel has a set of core components and links in additional services either during
boot time or during run time. Such a strategy uses dynamically loadable modules and is
common in modern implementations of UNIX, such as Solaris, Linux, and Mac OS X.
o For example, the Solaris operating system structure, shown in the below figure 2.9, is
organized around a core kernel with seven types of loadable kernel modules:
1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers
Figure 2.9
o The Apple Macintosh Mac OS X operating system uses a hybrid structure. It is a layered
system in which one layer consists of the Mach microkernel. The structure of Mac OS X
appears as shown in below figure 2.10.
Figure 2.10
o The top layers include application environments and a set of services providing a graphical
interface to applications.
o Below these layers is the kernel environment, which consists primarily of the Mach
microkernel and the BSD kernel.
o Mach provides memory management, support for remote procedure calls (RPCs) and
interprocess communication facilities, including message passing and thread scheduling.
The BSD component provides a BSD command line interface, support for networking and file systems,
and an implementation of POSIX APIs, including Pthreads.
• A major difficulty with the virtual-machine approach involves disk systems. Suppose that the
physical machine has three disk drives but needs to support seven virtual machines. Clearly, it
cannot allocate a disk drive to each virtual machine, because the virtual-machine software itself
will need substantial disk space to provide virtual memory and spooling. The solution is to provide
virtual disks (termed minidisks in IBM's VM operating system), which are identical to real disks
in all respects except size.
• Implementation
o It is difficult to implement the VM concept, since much work is required to provide an exact
duplicate of the underlying machine.
o The machine typically has two modes: user mode and kernel mode.
o The virtual-machine software can run in kernel mode, since it is the operating system. The
virtual machine itself can execute in only user mode.
o The major difference between a virtual and a non-virtual machine is time. Where real I/O might
take 100 milliseconds, the virtual I/O might take less time (because it is spooled) or more
time (because it is interpreted). In addition, the CPU is multiprogrammed among many
virtual machines, further slowing down the virtual machines in unpredictable ways.
• Benefits
o The virtual-machine concept provides complete protection of system resources since each
virtual machine is isolated from all other virtual machines. This isolation permits no direct
sharing of resources.
o A virtual-machine system is a perfect vehicle for operating-systems research and
development.
o System programmers are given their own VM, and system development is done on the virtual
machine instead of on a physical machine, so changing the OS does not disrupt normal operation.
Examples
1. VMware
• It is a popular commercial application that abstracts Intel X86 and compatible hardware into
isolated virtual machines.
• It runs as an application on a host operating system such as Windows or Linux and allows this
host system to concurrently run several different guest operating systems as independent
virtual machines.
• The architecture is shown in figure 2.13 below.
2. The Java Virtual Machine
• The class loader loads the compiled .class files from both the Java program and the Java API
for execution by the Java interpreter.
• After a class is loaded, the verifier checks that the .class file is valid Java byte code and does
not overflow or underflow the stack. It also ensures that the byte code does not perform pointer
arithmetic, which could provide illegal memory access.
• If the class passes verification, it is run by the Java interpreter.
• The JVM also automatically manages memory by performing garbage collection, the
practice of reclaiming memory from objects no longer in use and returning it to the system.
The JVM may be implemented in software on top of a host operating system, such as Windows, Linux,
or Mac OS X, or as part of a Web browser.
SYSTEM GENERATION (SYSGEN)
It is possible to design, code, and implement an operating system specifically for one machine at one site.
More commonly, however, operating systems are designed to run on any of a class of machines at a
variety of sites with a variety of peripheral configurations. The system must then be configured or
generated for each specific computer site, a process sometimes known as system generation (SYSGEN).
The operating system is normally distributed on disk or CD-ROM. To generate a system, we use a
special program. The SYSGEN program reads from a given file, or asks the operator of the system for
information concerning the specific configuration of the hardware system, or probes the hardware
directly to determine what components are there. The following kinds of information must be determined.
• What CPU is to be used? What options (extended instruction sets, floating point arithmetic, and
so on) are installed? For multiple CPU systems, each CPU must be described.
• How much memory is available? Some systems will determine this value themselves by
referencing memory location after memory location until an "illegal address" fault is generated.
This procedure defines the final legal address and hence the amount of available memory.
• What devices are available? The system will need to know how to address each device (the
device number), the device interrupt number, the device's type and model, and any special device
characteristics.
• What operating-system options are desired, or what parameter values are to be
used? These options or values might include how many buffers of which sizes should be used,
what type of CPU-scheduling algorithm is desired, what the maximum number of processes to be
supported is, and so on. Once this information is determined, it can be used in several ways.
At one extreme, a system administrator can use it to modify a copy of the source code of the operating
system. The operating system then is completely compiled. Data declarations, initializations, and
constants, along with conditional compilation, produce an output object version of the operating system
that is tailored to the system described. At a slightly less tailored level, the system description can cause
the creation of tables and the selection of modules from a precompiled library.
These modules are linked together to form the generated operating system. Selection allows the library
to contain the device drivers for all supported I/O devices, but only those needed are linked into the
operating system. Because the system is not recompiled, system generation is faster, but the resulting
system may be overly general.
At the other extreme, it is possible to construct a system that is completely table driven. All the code is
always part of the system, and selection occurs at execution time, rather than at compile or link time.
System generation involves simply creating the appropriate tables to describe the system.
The major differences among these approaches are the size and generality of the generated system and
the ease of modification as the hardware configuration changes. Consider the cost of modifying the
system to support a newly acquired graphics terminal or another disk drive. Balanced against that cost,
of course, is the frequency (or infrequency) of such changes.