
LECTURE NOTES

ON
OPERATING SYSTEMS
(20A05402T)

II B. Tech II Semester (JNTUA-R20)

Department of Computer Science and Engineering

BY
SHAIK RAZIA
ASSISTANT PROFESSOR
GATES INSTITUTE OF TECHNOLOGY
JAWAHARLAL NEHRU TECHNOLOGICAL UNIVERSITY ANANTAPUR

19A05403T OPERATING SYSTEMS

UNIT I
Operating Systems Overview: Introduction, Operating system functions, Operating systems operations, Computing
environments, Open-Source Operating Systems
System Structures: Operating System Services, User and Operating-System Interface, System calls, Types of System
Calls, system programs, Operating system Design and Implementation, Operating system structure, Operating system
debugging, System Boot.

UNIT II
Process Concept: Process scheduling, Operations on processes, Inter-process communication, Communication in client
server systems.
Multithreaded Programming: Multithreading models, Thread libraries, Threading issues, Examples.
Process Scheduling: Basic concepts, Scheduling criteria, Scheduling algorithms, Multiple processor scheduling, Thread
scheduling, Examples.
Inter-process Communication: Race conditions, Critical Regions, Mutual exclusion with busy waiting, Sleep and
wakeup, Semaphores, Mutexes, Monitors, Message passing, Barriers, Classical IPC Problems - Dining philosophers
problem, Readers and writers problem.

UNIT III
Memory-Management Strategies: Introduction, Swapping, Contiguous memory allocation, Paging, Segmentation,
Examples.
Virtual Memory Management: Introduction, Demand paging, Copy on-write, Page replacement, Frame allocation,
Thrashing, Memory-mapped files, Kernel memory allocation, Examples.

UNIT IV
Deadlocks: Resources, Conditions for resource deadlocks, Ostrich algorithm, Deadlock detection And recovery,
Deadlock avoidance, Deadlock prevention.
File Systems: Files, Directories, File system implementation, management and optimization.
Secondary-Storage Structure: Overview of disk structure, and attachment, Disk scheduling, RAID structure, Stable
storage implementation.

UNIT V
System Protection: Goals of protection, Principles and domain of protection, Access matrix, Access control, Revocation
of access rights.
System Security: Introduction, Program threats, System and network threats, Cryptography as a security tool, User
authentication, Implementing security defenses, Firewalling to protect systems and networks, Computer-security
classifications.
Case Studies: Linux, Microsoft Windows.
UNIT-I
OPERATING SYSTEMS OVERVIEW
INTRODUCTION:

What is an Operating System?


A program that acts as an intermediary between a user of a computer and the computer hardware.
Operating system goals:
 Execute user programs and make solving user problems easier
 Make the computer system convenient to use
 Use the computer hardware in an efficient manner

Operating system functions


Computer System Structure


A computer system can be divided into four components:
 Hardware – provides basic computing resources: CPU, memory, I/O devices
 Operating system – controls and coordinates use of hardware among various applications
and users
 Application programs – define the ways in which the system resources are used to solve
the computing problems of the users
- Word processors, compilers, web browsers, database systems, video games
 Users
- People, machines, other computers
Four Components of a Computer System

Process Management
 A process is a program in execution. It is a unit of work within the system. A program is a
passive entity; a process is an active entity.
 Process needs resources to accomplish its task
 CPU, memory, I/O, files
 Initialization data
 Process termination requires reclaim of any reusable resources
 Single-threaded process has one program counter specifying location of next instruction
to execute
 Process executes instructions sequentially, one at a time, until completion

 Multi-threaded process has one program counter per thread


 Typically a system has many processes, some user and some operating-system, running
concurrently on one or more CPUs
 Concurrency by multiplexing the CPUs among the processes / threads
Process Management Activities
 The operating system is responsible for the following activities in connection with
process management:
 Creating and deleting both user and system processes
 Suspending and resuming processes
 Providing mechanisms for process synchronization
 Providing mechanisms for process communication
 Providing mechanisms for deadlock handling
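The first two activities can be illustrated with a short example. The following is a minimal sketch, assuming a POSIX system and the C library: the parent creates a child process with fork(), the child runs a program with execlp(), and the parent reclaims the child's resources with wait().

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();            /* create a new process */

        if (pid < 0) {                 /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {         /* child: run another program */
            execlp("ls", "ls", (char *)NULL);
            perror("execlp");          /* reached only if exec fails */
            exit(1);
        } else {                       /* parent: wait for the child */
            wait(NULL);                /* reclaim the child's resources */
            printf("child complete\n");
        }
        return 0;
    }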
Memory Management
 All data in memory before and after processing
 All instructions in memory in order to execute
 Memory management determines what is in memory when
 Optimizing CPU utilization and computer response to users
 Memory management activities
 Keeping track of which parts of memory are currently being used and by whom
 Deciding which processes (or parts thereof) and data to move into and out of memory
 Allocating and deallocating memory space as needed
Storage Management
 OS provides uniform, logical view of information storage
 Abstracts physical properties to logical storage unit - file
 Each medium is controlled by a device (e.g., disk drive, tape drive)
 Varying properties include access speed, capacity, data-transfer rate, access method
(sequential or random)
 File-System management
 Files usually organized into directories
 Access control on most systems to determine who can access what

OS activities include

 Creating and deleting files and directories


 Primitives to manipulate files and directories
 Mapping files onto secondary storage
 Backup files onto stable (non-volatile) storage media
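As a small illustration of these activities, here is a minimal sketch (POSIX calls assumed; the names demo_dir and demo.txt are only examples) that creates a directory, creates a file inside it, and then deletes both:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        if (mkdir("demo_dir", 0755) < 0) {       /* create a directory */
            perror("mkdir");
            return 1;
        }
        int fd = open("demo_dir/demo.txt",       /* create a file in it */
                      O_CREAT | O_WRONLY, 0644);
        if (fd >= 0)
            close(fd);
        unlink("demo_dir/demo.txt");             /* delete the file */
        rmdir("demo_dir");                       /* delete the directory */
        return 0;
    }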

Mass-Storage Management
 Usually disks used to store data that does not fit in main memory or data that must be kept
for a “long” period of time
 Proper management is of central importance
 Entire speed of computer operation hinges on disk subsystem and its algorithms
MASS STORAGE activities
 Free-space management
 Storage allocation
 Disk scheduling
 Some storage need not be fast
 Tertiary storage includes optical storage, magnetic tape
 Still must be managed

 Varies between WORM (write-once, read-many-times) and RW (read-write)

Operating-System Structure:

Simple Structure
Many commercial systems do not have well-defined structures. Frequently, such operating
systems started as small, simple, and limited systems and then grew beyond their original scope.
MS-DOS is an example of such a system.

It was written to provide the most functionality in the least space, so it was not divided into
modules carefully. In MS-DOS, the interfaces and levels of functionality are not well separated.
For instance, application programs are able to access the basic I/O routines to write directly to the
display and disk drives. Such freedom leaves MS-DOS vulnerable to errant (or malicious)
programs, causing entire system crashes when user programs fail. Of course, MS-DOS was also
limited by the hardware of its era. Another example of limited structuring is the original UNIX
operating system. UNIX is another system that initially was limited by hardware functionality.
It consists of two separable parts: the kernel and the system programs. The kernel is further
separated into a series of interfaces and device drivers, which have been added and expanded over
the years as UNIX has evolved.
Layered Approach:

With a modular design, the operating system can retain much greater control over the computer
and over the applications that make use of that computer. Implementers have more freedom in
changing the inner workings of the system and in creating modular operating systems. Under a
top-down approach, the overall functionality and features are determined and are separated into
components. Information hiding is also important, because it leaves programmers free to
implement the low-level routines as they see fit, provided that the external interface of each
routine stays unchanged and that the routine itself performs the advertised task.
A system can be made modular in many ways. One method is the layered approach, in
which the operating system is broken up into a number of layers (levels). The bottom layer (layer
0) is the hardware; the highest (layer N) is the user interface.

An operating-system layer is an implementation of an abstract object made up of data and the


operations that can manipulate those data. A typical operating-system layer—say, layer M—
consists of data structures and a set of routines that can be invoked by higher-level layers. Layer
M, in turn, can invoke operations on lower-level layers.
The main advantage of the layered approach is simplicity of construction and debugging.
The layers are selected so that each uses functions (operations) and services of only lower-level
layers. This approach simplifies debugging and system verification. The first layer can be
debugged without any concern for the rest of the system, because, by definition, it uses only the
basic hardware (which is assumed correct) to implement its functions. Once the first layer is
debugged, its correct functioning can be assumed while the second layer is debugged, and so on.
If an error is found during the debugging of a particular layer, the error must be on that layer,
because the layers below it are already debugged. Thus, the design and implementation of the
system is simplified.
Each layer is implemented with only those operations provided by lower level layers. A
layer does not need to know how these operations are implemented; it needs to know only what
these operations do. Hence, each layer hides the existence of certain data structures, operations,
and hardware from higher-level layers.

The major difficulty with the layered approach involves appropriately defining the various
layers. The backing-store driver would normally be above the CPU scheduler, because the driver
may need to wait for I/O and the CPU can be rescheduled during this time. A final problem with
layered implementations is that they tend to be less efficient than other types. For instance, when
a user program executes an I/O operation, it executes a system call that is trapped to the I/O layer,
which calls the memory-management layer, which in turn calls the CPU-scheduling layer, which
is then passed to the hardware.
Microkernels
As UNIX expanded, the kernel became large and difficult to manage. In the mid-1980s, researchers
at Carnegie Mellon University developed an operating system called Mach that modularized the
kernel using the microkernel approach. This method structures the operating system by removing
all nonessential components from the kernel and implementing them as system and user-level
programs. The result is a smaller kernel. Typically, microkernels provide minimal process and
memory management, in addition to a communication facility.
The main function of the microkernel is to provide a communication facility between the
client program and the various services that are also running in user space. One benefit of the
microkernel approach is ease of extending the operating system. All new services are added to
user space and consequently do not require modification of the kernel. When the kernel does have
to be modified, the changes tend to be fewer, because the microkernel is a smaller kernel.
The resulting operating system is easier to port from one hardware design to another. The
microkernel also provides more security and reliability, since most services are running as user
rather than kernel processes. If a service fails, the rest of the operating system remains untouched.

Modules
The best current methodology for operating-system design involves using object-oriented
programming techniques to create a modular kernel. Here, the kernel has a set of core components
and dynamically links in additional services either during boot time or during run time. Such a
strategy uses dynamically loadable modules and is common in modern implementations of UNIX,
such as Solaris, Linux, and Mac OS X.
A core kernel with seven types of loadable kernel modules:

1. Scheduling classes
2. File systems
3. Loadable system calls
4. Executable formats
5. STREAMS modules
6. Miscellaneous
7. Device and bus drivers

Such a design allows the kernel to provide core services yet also allows certain features to be
implemented dynamically. The overall result resembles a layered system in that each kernel
section has defined, protected interfaces; but it is more flexible than a layered system in that any
module can call any other module. The approach is like the microkernel approach in that the
primary module has only core functions and knowledge of how to load and communicate with
other modules; but it is more efficient, because modules do not need to invoke message passing in
order to communicate.
The Apple Macintosh Mac OS X operating system uses a hybrid structure. Mac OS X (also
known as Darwin) structures the operating system using a layered technique where one layer
consists of the Mach microkernel. The top layers include application environments and a set of
services providing a graphical interface to applications. Below these layers is the kernel
environment, which consists primarily of the Mach microkernel and the BSD kernel. Mach
provides memory management; support for remote procedure calls (RPCs) and interprocess
communication (IPC) facilities, including message passing; and thread scheduling. The BSD
component provides a BSD command line interface, support for networking and file systems, and
an implementation of POSIX APIs, including Pthreads.

Operating-System Operations
1. Modern operating systems are interrupt driven. If there are no processes to execute, no
I/O devices to service, and no users to whom to respond, an operating system will sit
quietly, waiting for something to happen. Events are almost always signaled by the
occurrence of an interrupt or a trap.
2. A trap (or an exception) is a software-generated interrupt caused either by an error or by
a specific request from a user program that an operating-system service be performed.

3. The interrupt-driven nature of an operating system defines that system's general structure.
For each type of interrupt, separate segments of code in the operating system determine
what action should be taken. An interrupt service routine is provided that is responsible for
dealing with the interrupt.
4. The operating system and the users share the hardware and software resources of the
computer system, so we need to make sure that an error in a user program can cause
problems only for the one program that is running. With sharing, many processes could
be adversely affected by a bug in one program. For example, if a process gets stuck in an
infinite loop, this loop could prevent the correct operation of many other processes.
5. Without protection against these sorts of errors, either the computer must execute only one
process at a time or all output must be suspect.
Dual-Mode Operation
Dual-mode operation allows the OS to protect itself and other system components:
 User mode and kernel mode
 Mode bit provided by hardware: gives the ability to distinguish when the system is running
user code or kernel code
 Some instructions are designated as privileged and are executable only in kernel mode
 A system call changes the mode to kernel; return from the call resets it to user
Transition from User to Kernel Mode
 Timer to prevent an infinite loop or a process hogging resources: set an interrupt to occur
after a specific period
 Operating system decrements a counter
 When the counter reaches zero, generate an interrupt
 Timer is set up before scheduling a process, to regain control or terminate a program that
exceeds its allotted time
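The timer idea can be sketched from user space. The following is an illustrative example, not how the kernel itself is written: alarm() asks for a SIGALRM signal after two seconds, so the infinite loop below is cut short, just as the OS timer interrupt lets the kernel regain control from a runaway process.

    #include <signal.h>
    #include <unistd.h>

    static void on_alarm(int sig)
    {
        (void)sig;
        write(STDOUT_FILENO, "time slice expired\n", 19);
        _exit(1);                      /* terminate the runaway program */
    }

    int main(void)
    {
        signal(SIGALRM, on_alarm);     /* install the handler */
        alarm(2);                      /* request an interrupt in 2 seconds */
        for (;;)                       /* simulate an infinite loop */
            ;
        return 0;
    }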

Protection and Security


If a computer system has multiple users and allows the concurrent execution of multiple
processes, then access to data must be regulated. For that purpose, mechanisms ensure that files,
memory segments, CPU, and other resources can be operated on by only those processes that
have gained proper authorization from the operating system.

Protection is any mechanism for controlling the access of processes or users to the resources
defined by a computer system. This mechanism must provide means for specification of the
controls to be imposed and means for enforcement.
Protection can improve reliability by detecting latent errors at the interfaces between
component subsystems. Early detection of interface errors can often prevent contamination of a
healthy subsystem by another subsystem that is malfunctioning. An unprotected resource cannot
defend against use (or misuse) by an unauthorized or incompetent user. A protection-oriented
system provides a means to distinguish between authorized and unauthorized usage. A system can
have adequate protection but still be prone to failure and allow inappropriate access.

It is the job of security to defend a system from external and internal attacks. Such attacks
spread across a huge range and include viruses and worms, and denial-of-service attacks.
Protection and security require the system to be able to distinguish among all its users. Most
operating systems maintain a list of user names and associated user identifiers (user IDs).

 User ID is then associated with all files and processes of that user, to determine access
control
 Group identifier (group ID) allows a set of users to be defined and controls managed; it is
then also associated with each process and file
 Privilege escalation allows a user to change to an effective ID with more rights

Kernel Data Structures


The operating system must keep a lot of information about the current state of the system.
As things happen within the system these data structures must be changed to reflect the current
reality. For example, a new process might be created when a user logs onto the system. The
kernel must create a data structure representing the new process and link it with the data structures
representing all of the other processes in the system.
Mostly these data structures exist in physical memory and are accessible only by the
kernel and its subsystems. Data structures contain data and pointers, addresses of other data
structures, or the addresses of routines. Taken all together, the data structures used by the Linux
kernel can look very confusing. Every data structure has a purpose, and although some are used
by several kernel subsystems, they are simpler than they appear at first sight.
Understanding the Linux kernel hinges on understanding its data structures and the use
that the various functions within the Linux kernel make of them. This section bases its
description of the Linux kernel on its data structures. It talks about each kernel subsystem in terms
of its algorithms, which are its methods of getting things done, and their usage of the kernel's data
structures.
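As a concrete (and much simplified) sketch of the idea, the structure below links process descriptors into a doubly linked list, loosely in the style of the Linux task list; the type and field names here are illustrative, not the kernel's actual definitions.

    #include <stdio.h>
    #include <stdlib.h>

    struct task {
        int pid;                     /* process identifier */
        struct task *next;           /* links to the other tasks... */
        struct task *prev;           /* ...known to the system */
    };

    static struct task *task_list = NULL;    /* head of the task list */

    /* Create a descriptor for a new process and link it in. */
    static struct task *task_create(int pid)
    {
        struct task *t = malloc(sizeof *t);
        if (!t)
            return NULL;
        t->pid = pid;
        t->prev = NULL;
        t->next = task_list;
        if (task_list)
            task_list->prev = t;
        task_list = t;
        return t;
    }

    int main(void)
    {
        task_create(1);              /* e.g., a user logs on... */
        task_create(2);              /* ...and starts a second process */
        for (struct task *t = task_list; t; t = t->next)
            printf("pid %d\n", t->pid);
        return 0;
    }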

Computing Environments:

Traditional Computing

As computing matures, the lines separating many of the traditional computing environments are
blurring. Consider the typical office environment of a few years ago: it consisted of PCs connected
to a network, with servers providing file and print services. Terminals attached to mainframes
were prevalent at many companies as well, with even fewer remote access and portability options.
The current trend is toward providing more ways to access these computing environments.
Web technologies are stretching the boundaries of traditional computing. Companies establish
portals, which provide web accessibility to their internal servers. Network computers are
essentially terminals that understand web-based computing. Handheld computers can synchronize
with PCs to allow very portable use of company information. Handheld PDAs can also connect to
wireless networks to use the company's web portal.
Batch systems processed jobs in bulk, with predetermined input. Interactive systems waited
for input from users. To optimize the use of the computing resources, multiple users shared time
on these systems. Time-sharing systems used a timer and scheduling algorithms to rapidly cycle
processes through the CPU, giving each user a share of the resources.

Client-Server Computing

Designers have shifted away from centralized system architecture. Terminals connected to
centralized systems are now being supplanted by PCs. Correspondingly, user interface
functionality once handled directly by the centralized systems is increasingly being handled by
the PCs. As a result, many of today’s systems act as server systems to satisfy requests generated
by client systems. Server systems can be broadly categorized as compute servers and file servers:
• The compute-server system provides an interface to which a client can send a request to
perform an action (for example, read data); in response, the server executes the action and sends
back results to the client. A server running a database that responds to client requests for data is
an example of such a system.

The file-server system provides a file-system interface where clients can create, update, read, and
delete files. An example of such a system is a web server that delivers files to clients running web
browsers.

Peer-to-Peer Computing

In this model, clients and servers are not distinguished from one another; instead, all nodes
within the system are considered peers, and each may act as either a client or a server, depending
on whether it is requesting or providing a service. Peer-to-peer systems offer an advantage over
traditional client-server systems. In a client-server system, the server is a bottleneck; but in a peer-
to-peer system, services can be provided by several nodes distributed throughout the network.
To participate in a peer-to-peer system, a node must first join the network of peers. Once a
node has joined the network, it can begin providing services to—and requesting services from—
other nodes in the network.
Determining what services are available is accomplished in one of two general ways:
• When a node joins a network, it registers its service with a centralized lookup service on
the network. Any node desiring a specific service first contacts this centralized lookup service to
determine which node provides the service. The remainder of the communication takes place
between the client and the service provider.
• A peer acting as a client must first discover what node provides a desired service by
broadcasting a request for the service to all other nodes in the network. The node (or nodes)
providing that service responds to the peer making the request. To support this approach, a
discovery protocol must be provided that allows peers to discover services provided by other
peers in the network.
Web-Based Computing

The Web has become ubiquitous, leading to more access by a wider variety of devices than was
dreamt of a few years ago. Web computing has increased the emphasis on networking. Devices
that were not previously networked now include wired or wireless access. Devices that were
networked now have faster network connectivity, provided by either improved networking
technology, optimized network implementation code, or both.
The implementation of web-based computing has given rise to new categories of devices,
such as load balancers, which distribute network connections among a pool of similar servers.

Operating systems like Windows 95, which acted as web clients, have evolved into Linux and
Windows XP, which can act as web servers as well as clients. Generally, the Web has increased
the complexity of devices, because their users require them to be web-enabled.
Open-Source Operating Systems
 Operating systems made available in source-code format rather than just binary closed-
source
 Counter to the copy protection and Digital Rights Management (DRM) movement
 Started by Free Software Foundation (FSF), which has “copyleft” GNU Public License
(GPL)
 Examples include GNU/Linux, BSD UNIX (including core of Mac OS X), and Sun
Solaris

OPERATING SYSTEM STRUCTURE


Operating System Services
 One set of operating-system services provides functions that are helpful to the user
 Communications – Processes may exchange information, on the same computer or
between computers over a network.
 Communications may be via shared memory or through message passing (packets moved
by the OS)
 Error detection – OS needs to be constantly aware of possible errors that may occur in the
CPU and memory hardware, in I/O devices, and in user programs
 For each type of error, OS should take the appropriate action to ensure correct and
consistent computing.
 Debugging facilities can greatly enhance the user’s and programmer’s abilities to
efficiently use the system.
 Another set of OS functions exists for ensuring the efficient operation of the system itself
via resource sharing
 Resource allocation – When multiple users or multiple jobs are running concurrently,
resources must be allocated to each of them
 Many types of resources – Some (such as CPU cycles, main memory, and file storage) may
have special allocation code; others (such as I/O devices) may have general request and
release code
 Accounting – To keep track of which users use how much and what kinds of computer
resources
 Protection and security – The owners of information stored in a multiuser or networked
computer system may want to control use of that information; concurrent processes should
not interfere with each other
 Protection involves ensuring that all access to system resources is controlled.
 Security of the system from outsiders requires user authentication, extends to defending
external I/O devices from invalid access attempts.
 If a system is to be protected and secure, precautions must be instituted throughout it. A
chain is only as strong as its weakest link.
User and Operating-System Interface – CLI
 Command Line Interface (CLI) or command interpreter allows direct command entry
o Sometimes implemented in the kernel, sometimes by a systems program
o Sometimes multiple flavors implemented – shells
o Primarily fetches a command from the user and executes it (see the sketch below)
 Sometimes commands are built-in, sometimes just names of programs; if the latter, adding
new features doesn’t require shell modification
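The core loop of a command interpreter is small. The following is a minimal sketch (POSIX assumed; the prompt name mysh is made up): it fetches a command, runs it in a new process, and waits for it to finish. It handles only single-word commands; real shells add argument parsing, pipes, job control, and built-ins.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        char line[256];

        for (;;) {
            printf("mysh> ");                  /* prompt */
            fflush(stdout);
            if (!fgets(line, sizeof line, stdin))
                break;                         /* end of input: exit shell */
            line[strcspn(line, "\n")] = '\0';  /* strip the newline */
            if (line[0] == '\0')
                continue;

            pid_t pid = fork();
            if (pid == 0) {                    /* child runs the command */
                execlp(line, line, (char *)NULL);
                perror(line);                  /* command not found */
                _exit(127);
            }
            waitpid(pid, NULL, 0);             /* shell waits for it */
        }
        return 0;
    }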

User Operating System Interface - GUI


 User-friendly desktop metaphor interface
 Usually mouse, keyboard, and monitor
 Icons represent files, programs, actions, etc.
 Various mouse buttons over objects in the interface cause various actions (provide
information, options, execute a function, open a directory (known as a folder))
 Invented at Xerox PARC
 Many systems now include both CLI and GUI interfaces
 Microsoft Windows is GUI with CLI “command” shell
 Apple Mac OS X has the “Aqua” GUI interface with a UNIX kernel underneath and shells
available
 Solaris is CLI with optional GUI interfaces (Java Desktop, KDE)

Bourne Shell Command Interpreter

The Mac OS X GUI

System Calls:

 Programming interface to the services provided by the OS


 Typically written in a high-level language (C or C++)
 Mostly accessed by programs via a high-level Application Program Interface (API) rather
than direct system call use
 Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based
systems (including virtually all versions of UNIX, Linux, and Mac OS X), and Java API
for the Java virtual machine (JVM)
 Why use APIs rather than system calls? (Note that the system-call names used throughout
this text are generic)
Example of System Calls

System Call Implementation


 Typically, a number associated with each system call
 System-call interface maintains a table indexed according to these numbers
 The system call interface invokes intended system call in OS kernel and returns status of
the system call and any return values
 The caller need know nothing about how the system call is implemented
 Just needs to obey the API and understand what the OS will do as a result of the call
 Most details of the OS interface are hidden from the programmer by the API
o Managed by a run-time support library (set of functions built into libraries included
with the compiler)
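For example, a C program on a POSIX system calls the read() function of the C library, which is a thin wrapper around the read system call; the caller obeys only the API and need not know how the kernel implements the call. A minimal sketch:

    #include <unistd.h>

    int main(void)
    {
        char buf[64];
        /* read up to 64 bytes from standard input (fd 0) */
        ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);  /* echo them back */
        return 0;
    }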

API – System Call – OS Relationship


System Call Parameter Passing
 Often, more information is required than simply the identity of the desired system call
 Exact type and amount of information vary according to OS and call

 Three general methods used to pass parameters to the OS


 Simplest: pass the parameters in registers
o In some cases, there may be more parameters than registers
 Parameters stored in a block, or table, in memory, and the address of the block passed as a
parameter in a register
o This approach is taken by Linux and Solaris
 Parameters placed, or pushed, onto the stack by the program and popped off the stack by
the operating system
 Block and stack methods do not limit the number or length of parameters being passed
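On Linux (assumed here), the register method can be made visible with the syscall() wrapper, which places the call number and its parameters in registers before trapping into the kernel. This sketch is equivalent to calling getpid() through the normal API and is shown only to expose the mechanism.

    #define _GNU_SOURCE           /* for syscall() on glibc */
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void)
    {
        /* call number and parameters are passed in registers */
        long pid = syscall(SYS_getpid);
        printf("my pid: %ld\n", pid);
        return 0;
    }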

Parameter Passing via Table


Types of System Calls:

 Process control
 File management
 Device management
 Information maintenance
 Communications
 Protection

Process Control
A running program needs to be able to halt its execution either normally (end) or abnormally
(abort). If a system call is made to terminate the currently running program abnormally, or if the
program runs into a problem and causes an error trap, a dump of memory is sometimes taken and
an error message generated. The dump is written to disk and may be examined by a debugger—a
system program designed to aid the programmer in finding and correcting bugs—to determine
the cause of the problem. Under either normal or abnormal circumstances, the operating system
must transfer control to the invoking command interpreter. The command interpreter then reads
the next command. In an interactive system, the command interpreter simply continues with the
next command; it is assumed that the user will issue an appropriate
command to respond to any error.
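In C on a POSIX system, the two forms of termination look like this (a trivial sketch): exit() ends the program normally, while abort() ends it abnormally, which on many systems also produces the memory dump described above.

    #include <stdlib.h>

    int main(int argc, char *argv[])
    {
        (void)argv;
        if (argc < 2)
            abort();   /* abnormal termination; may produce a core dump */
        exit(0);       /* normal termination */
    }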

File Management
We first need to be able to create and delete files. Either system call requires the name of
the file and perhaps some of the file's attributes. Once the file is created, we need to open it and
to use it. We may also read, write, or reposition (rewinding or skipping to the end of the file, for
example). Finally, we need to close the file, indicating that we are no longer using it. We may
need these same sets of operations for directories if we have a directory structure for organizing
files in the file system. In addition, for either files or directories, we need to be able to determine
the values of various attributes and perhaps to reset them if necessary. File attributes include the
file name, a file type, protection codes, accounting information, and so on.
At least two system calls, get file attribute and set file attribute, are required for this
function. Some operating systems provide many more calls, such as calls for file move and copy.
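A minimal sketch of the attribute calls in their POSIX forms (the file name notes.txt is only an example): stat() gets a file's attributes and chmod() resets its protection bits.

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        struct stat st;
        if (stat("notes.txt", &st) < 0) {      /* get file attributes */
            perror("stat");
            return 1;
        }
        printf("size: %lld bytes, mode: %o\n",
               (long long)st.st_size, st.st_mode & 0777);
        chmod("notes.txt", 0600);              /* set file attributes */
        return 0;
    }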

Device Management
A process may need several resources to execute—main memory, disk drives, access to
files, and so on. If the resources are available, they can be granted, and control can be returned to
the user process. Otherwise, the process will have to wait until sufficient resources are available.
The various resources controlled by the operating system can be thought of as devices. Some of
these devices are physical devices (for example, tapes), while others can be thought of as abstract
or virtual devices (for example, files). If there are multiple users of the system, the system may
require us to first request the device, to ensure exclusive use of it. After we are finished with the
device, we release it. These functions are similar to the open and close system calls for files.

Information Maintenance
Many system calls exist simply for the purpose of transferring information between the
user program and the operating system. For example, most systems have a system call to return
the current time and date. Other system calls may return information about the system, such as
the number of current users, the version number of the operating system, the amount of free
memory or disk space, and so on.
In addition, the operating system keeps information about all its processes, and system
calls are used to access this information. Generally, calls are also used to reset the process
information (get process attributes and set process attributes).
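For instance, in their POSIX forms (a small sketch): time() returns the current time and date, and getpid() returns one of the attributes the OS keeps for the calling process.

    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        time_t now = time(NULL);               /* current time and date */
        printf("time: %s", ctime(&now));
        printf("pid:  %d\n", (int)getpid());   /* a process attribute */
        return 0;
    }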

Communication
There are two common models of inter process communication: the message passing
model and the shared-memory model. In the message-passing model, the communicating
processes exchange messages with one another to transfer information. Messages can be
exchanged between the processes either directly or indirectly through a common mailbox. Before
communication can take place, a connection must be opened. The name of the other
communicator must be known, be it another process on the same system or a process on another
computer connected by a communications network. Each computer in a network has a host name
by which it is commonly known. A host also has a network identifier, such as an IP address.
Similarly, each process has a process name, and this name is translated into an identifier by which
the operating system can refer to the process. The get hostid and get processid system calls do
this translation. The identifiers are then passed to the general purpose open and close calls
provided by the file system or to specific open connection and close connection system calls,
depending on the system's model of communication.
In the shared-memory model, processes use shared memory create and shared memory
attach system calls to create and gain access to regions of memory owned by other processes.
Recall that, normally, the operating system tries to prevent one process from accessing another
process's memory. Shared memory requires that two or more processes agree to remove this
restriction.
They can then exchange information by reading and writing data in the shared areas. The
form of the data and the location are determined by the processes and are not under the operating
system's control. The processes are also responsible for ensuring that they are not writing to the
same location simultaneously.
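The message-passing model within a single machine can be sketched with a POSIX pipe (assumed here as the simplest message channel): the child sends a message and the parent receives it, with the kernel moving the bytes between the two processes.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) < 0)             /* fd[0]: read end, fd[1]: write end */
            return 1;

        if (fork() == 0) {            /* child: the sender */
            close(fd[0]);
            const char msg[] = "hello from child";
            write(fd[1], msg, sizeof msg);
            close(fd[1]);
            _exit(0);
        }

        close(fd[1]);                 /* parent: the receiver */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf);
        if (n > 0)
            printf("received: %s\n", buf);
        close(fd[0]);
        wait(NULL);
        return 0;
    }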

System Programs:

At the lowest level is hardware. Next is the operating system, then the system programs, and
finally the application programs. System programs provide a convenient environment for program
development and execution. Some of them are simply user interfaces to system calls; others are
considerably more complex.
They can be divided into these categories:
• File management. These programs create, delete, copy, rename, print, dump, list, and generally
manipulate files and directories.
• Status information. Some programs simply ask the system for the date, time, amount of
available memory or disk space, number of users, or similar status information. Others are more
complex, providing detailed
performance, logging, and debugging information. Typically, these programs format and print the
output to the terminal or other output devices or files or display it in a window of the GUI. Some
systems also support a registry, which is used to store and retrieve configuration information.
• File modification. Several text editors may be available to create and modify the content of
files stored on disk or other storage devices. There may also be special commands to search
contents of files or perform transformations of the text.
• Programming-language support. Compilers, assemblers, debuggers and interpreters for
common programming languages (such as C, C++, Java, Visual Basic, and PERL) are often
provided to the user with the operating system.
• Program loading and execution. Once a program is assembled or compiled, it must be loaded
into memory to be executed. The system may provide absolute loaders, relocatable loaders,
linkage editors, and overlay loaders. Debugging systems for either higher-level languages or
machine language are needed as well.
• Communications. These programs provide the mechanism for creating virtual connections
among processes, users, and computer systems. They allow users to send messages to one
another's screens, to browse web pages, to send electronic-mail messages, to log in remotely, or to
transfer files from one machine to another.

In addition to systems programs, most operating
systems are supplied with programs that are useful in solving common problems or performing
common operations. Such programs include web browsers, word processors and text formatters,
spreadsheets, database systems, compilers, plotting and statistical-analysis packages, and games.
These programs are known as system utilities or application programs.

Operating-System Structure:
Refer to the “Operating-System Structure” section on page 6 above.

Operating-System Debugging:

 Debugging is finding and fixing errors, or bugs


 OS generate log files containing error information
 Failure of an application can generate core dump file capturing memory of the process
 Operating system failure can generate crash dump file containing kernel memory
 Beyond crashes, performance tuning can optimize system performance
 Kernighan’s Law: “Debugging is twice as hard as writing the code in the first place.
Therefore, if you write the code as cleverly as possible, you are, by definition, not smart
enough to debug it.”
 DTrace tool in Solaris, FreeBSD, Mac OS X allows live instrumentation on production
systems
 Probes fire when code is executed, capturing state data and sending it to consumers of
those probes

System Boot:

The procedure of starting a computer by loading the kernel is known as booting the system. On
most computer systems, a small piece of code known as the bootstrap program or bootstrap
loader locates the kernel, loads it into main memory, and starts its execution. Some computer
systems, such as PCs, use a two-step process in which a simple bootstrap loader fetches a more
complex boot program from disk, which in turn loads the kernel.
When a CPU receives a reset event—for instance, when it is powered up or rebooted—the
instruction register is loaded with a predefined memory location, and execution starts there. At
that location is the initial bootstrap program. This program is in the form of read-only memory
(ROM), because the RAM is in an unknown state at system startup. ROM is convenient because
it needs no initialization and cannot be infected by a computer virus.
The bootstrap program can perform a variety of tasks. Usually, one task is to run diagnostics to
determine the state of the machine. If the diagnostics pass, the program can continue with the
booting steps. It can also initialize all aspects of the system, from CPU registers to device
controllers and the contents of main memory. Sooner or later, it starts the operating system.
Some systems—such as cellular phones, PDAs, and game consoles—store the entire
operating system in ROM. Storing the operating system in ROM is suitable for small operating
systems, simple supporting hardware, and rugged operation. A problem with this approach is that
changing the bootstrap code requires changing the ROM hardware chips. Some systems resolve
this problem by using erasable programmable read-only memory (EPROM), which is read
only except when explicitly given a command to become writable. All forms of ROM are also
known as firmware, since their characteristics fall somewhere between those of hardware and
those of software. A problem with firmware in general is that executing code there is slower than
executing code in RAM.
Some systems store the operating system in firmware and copy it to RAM for fast
execution. A final issue with firmware is that it is relatively expensive, so usually only small
amounts are available.
For large operating systems (including most general-purpose operating systems like
Windows, Mac OS X, and UNIX) or for systems that change frequently, the bootstrap loader is
stored in firmware, and the operating system is on disk. In this case, the bootstrap runs diagnostics
and has a bit of code that can read a single block at a fixed location (say block zero) from disk
into memory and execute the code from that boot block. The program stored in the boot block
may be sophisticated enough to load the entire operating system into memory and begin its
execution. More typically, it is simple code (as it fits in a single disk block) and only knows the
address on disk and length of the remainder of the bootstrap program. All of the disk-bound
bootstrap, and the operating system itself, can be easily changed by writing new versions to disk.
