ASSIGNMENT
NAME PRITAM
ROLL NO 2314516345
SESSION MARCH 2024
PROGRAM BACHELOR OF COMPUTER APPLICATION (BCA)
SEMESTER II
COURSE CODE & NAME DCA1201 – OPERATING SYSTEM
QUESTION 1: Explain the evolution of operating systems. Write a brief note on operating system structures.
ANSWER: Evolution of Operating Systems: Operating systems (OS) have undergone multiple eras of evolution, each distinguished by notable conceptual and technological advances:
1. Batch Processing Systems (1950s-1960s)
Early Computers: The first computers had no operating systems. Programs were entered manually, and the computer processed one job at a time.
Batch Processing: Programs were grouped into batches and executed sequentially without interaction. Efficiency rose because numerous jobs could be processed without human intervention.
2. Personal Computing Era (1970s-1990s)
Graphical User Interfaces (GUIs): Windows and Mac OS introduced GUIs, which improved usability and made computers accessible to a wider range of users.
UNIX and Linux: UNIX brought robust multitasking and multiuser features, while Linux, an open-source variant, gained popularity on desktops and in server environments.
3. Networked and Distributed Systems (1980s-1990s)
Examples include network operating systems (NOS), such as Novell NetWare, and distributed operating systems, which manage a collection of separate computers and give the impression of a single, cohesive system.
4. Modern Era (2000s-Present)
Cloud Operating Systems: Designed to administer cloud infrastructure and give users online access to scalable resources.
Virtualization and Containerization: Technologies like VMware and Docker offer isolated environments for programs to run in and have completely changed how applications are delivered and managed.
Operating System Structures
i) Monolithic Systems
Description: The entire operating system runs as a single program in kernel mode, with all components linked together.
Advantages: Efficient, since components can communicate directly.
Cons: Hard to maintain and troubleshoot; one malfunction might bring down the entire system.
ii) Layered Systems
Description: The operating system is divided into layers, each built on top of the one below it.
Advantages: Makes system verification and debugging easier; each layer uses only the services of the layers below it.
Cons: Careful planning of the layers is required, and crossing many layers can add overhead.
iii) Microkernels
Description: Only fundamental services (such as basic inter-process communication and basic scheduling) run in kernel mode; all other services run in user mode.
Advantages: Enhanced reliability and security; more easily extended and modified.
Cons: A performance penalty results from the frequent context switching between kernel mode and user mode.
Examples include Minix, QNX, and Mach.
iv) Modules
Description: Akin to monolithic systems but modular in design, enabling dynamic loading and unloading of kernel modules.
v) Virtual Machines
Description: The underlying hardware is abstracted away, allowing numerous instances of distinct operating systems to run simultaneously on the same hardware.
QUESTION 2: What is scheduling? Discuss the different CPU scheduling algorithms.
ANSWER: In the context of computers, scheduling is the process of allocating resources to carry out a specified task. It is a fundamental idea in computer multitasking, multiprocessing operating system design, and real-time operating system (RTOS) design. The primary goals of scheduling are to optimize computer system performance and to guarantee the efficient completion of all activities.
CPU Scheduling Algorithms
The process of choosing which processes in the ready queue should be assigned to the CPU for execution is known as
CPU scheduling. The jobs are scheduled using a variety of algorithms, each of which has benefits and drawbacks.
1. First-Come, First-Served (FCFS)
Processes are dispatched in the order in which they arrive in the ready queue.
Advantages:
Easy to implement.
Disadvantages:
May result in the "convoy effect," in which quick processes must wait for lengthy processes to finish (a short numeric sketch follows after this list of algorithms).
2. Shortest Job First (SJF)
The process with the shortest expected CPU burst is selected next.
Advantages:
Minimizes the average waiting time of processes.
Disadvantages:
Requires knowing how long the next CPU burst will last, which is not always possible.
3. Priority Scheduling
The CPU is allocated to the process with the highest priority.
Advantages:
Important processes are served first.
Disadvantages:
Needs a safeguard against indefinite blocking, such as aging, which gradually raises a waiting process's priority.
4. Round Robin (RR)
Each process receives the CPU for a fixed time quantum and is then moved to the back of the ready queue.
Advantages:
Strikes an excellent balance between fairness and responsiveness.
Disadvantages:
Selecting the right time quantum is essential: too short a quantum causes excessive context switching, while too long a quantum may result in slow response times.
5. Multilevel Queue Scheduling
Separate queues are created for processes according to their priorities and other attributes. Each queue may use a different scheduling algorithm.
Advantages:
Processes with different characteristics can be scheduled by policies suited to each class.
Disadvantages:
Processes are permanently assigned to one queue, which is inflexible.
6. Multilevel Feedback Queue Scheduling
Comparable to multilevel queue scheduling, but processes can move between queues according to their age and behavior.
Advantages:
Dynamically adjusts the priority of processes.
Disadvantages:
More complex to design and tune than the other algorithms.
7. Shortest Remaining Time First (SRTF)
The process with the smallest anticipated time left to finish is selected next.
Advantages:
A preemptive variant of SJF that yields a low average waiting time.
Disadvantages:
Requires accurate knowledge of the remaining time, which may not be available.
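To make the convoy effect concrete, here is a minimal sketch in C (the burst times are hypothetical, and all processes are assumed to arrive at time zero) that computes per-process and average waiting times under FCFS:

```c
/* Minimal sketch of FCFS waiting times, assuming all processes
 * arrive at time 0 with known (hypothetical) burst times. */
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* hypothetical CPU burst times */
    int n = sizeof burst / sizeof burst[0];
    int wait = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += wait;            /* process i waits for all earlier bursts */
        printf("Process %d: waiting time = %d\n", i + 1, wait);
        wait += burst[i];              /* the next process waits this much longer */
    }
    printf("Average waiting time = %.2f\n", (double)total_wait / n);
    return 0;
}
```

Running the same bursts in SJF order (3, 3, 24) drops the average waiting time from 17 to 3 time units, which is exactly the gap the convoy effect describes.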
QUESTION 3: Discuss Interprocess Communication and the critical-section problem, along with the use of semaphores.
ANSWER: Interprocess communication (IPC) refers to the methods an operating system offers to let processes manage shared data. The aim of IPC is to enable data flow between processes, whether they run on the same system or on separate machines connected by a network. Typical IPC techniques include:
I. Shared Memory: A region of memory is shared among several processes. Processes can read and write this region, facilitating quick data exchange.
II. Message Passing: The exchange of messages between processes. This can be accomplished using:
Pipes: A channel carrying a stream of data between processes, either in one direction or in both (see the sketch after this list).
Message Queues: A message queue into which processes can insert and retrieve messages.
Sockets: Endpoints for transmitting and receiving data, commonly used in network communication.
Semaphores: Used to manage access to shared resources and to signal amongst processes.
Signals: Signals are brief alerts that are issued to a process in response to certain circumstances, such as requests for
termination.
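As an illustration of message passing, the sketch below (assuming a POSIX system; the message text is arbitrary) has a parent process send a string to its child through an anonymous pipe:

```c
/* Minimal sketch of IPC via an anonymous pipe (POSIX). */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];                     /* fd[0]: read end, fd[1]: write end */
    char buf[64];

    if (pipe(fd) == -1) { perror("pipe"); return 1; }

    if (fork() == 0) {             /* child: reads from the pipe */
        close(fd[1]);
        ssize_t n = read(fd[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
        return 0;
    }
    /* parent: writes into the pipe */
    close(fd[0]);
    const char *msg = "hello from parent";
    write(fd[1], msg, strlen(msg));
    close(fd[1]);
    wait(NULL);
    return 0;
}
```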
The Critical-Section Problem: When multiple processes share data, the segment of code in which a process accesses the shared data is called its critical section. A correct solution must satisfy three requirements:
Mutual Exclusion: Only a single process can run in the critical section at any given moment.
Progress: If no process is currently in the critical section, one of the processes wishing to enter will be allowed to do so.
Bounded Waiting: After a process requests to enter its critical section and before the request is granted, there is a
limit on how many times other processes are permitted to enter their critical sections.
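Semaphores are a common way to enforce these requirements. The following minimal sketch (assuming POSIX threads and unnamed semaphores) uses a binary semaphore as a mutex so that two threads increment a shared counter without a race:

```c
/* Minimal sketch: a binary semaphore guarding a critical section (POSIX). */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

static sem_t mutex;            /* binary semaphore: 1 = free, 0 = taken */
static long counter = 0;       /* shared data */

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        sem_wait(&mutex);      /* entry section: wait() */
        counter++;             /* critical section */
        sem_post(&mutex);      /* exit section: signal() */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1);    /* initial value 1 -> mutual exclusion */
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    sem_destroy(&mutex);
    return 0;
}
```

Because sem_wait atomically decrements the semaphore and blocks while its value is zero, at most one thread is in the critical section at a time, which satisfies mutual exclusion.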
SET II
QUESTION 4:
a) What is a Process Control Block? What information does it hold and why?
ANSWER: Operating systems employ a data structure called a Process Control Block (PCB) to hold all of the information
about a process. It is necessary for an operating system that supports multitasking to manage and control processes. Every
process has a PCB, which is kept up to date and holds vital data that the operating system needs to efficiently oversee the
operation.
Details Stored in a PCB
I. Process Identification Data:
Process ID (PID): A special number assigned to the process.
Parent Process ID (PPID): The parent process's identifier.
User ID (UID) and Group ID (GID): Identifiers for the user and group that own the process.
II. Processor State Information:
Process State: Denotes the current state of the process (e.g., new, ready, running, waiting, or terminated).
Program Counter (PC): The address of the next instruction to be executed.
CPU Registers: The values of the process's CPU registers; these are saved and restored when a context switch occurs.
III. CPU Scheduling and Memory Management Information:
Scheduling Queue Pointers: Pointers to the process's position in the scheduling queues.
Base and Limit Registers: Addresses that define the address space of the process.
Page Tables or Segment Tables: Structures for address translation and memory protection.
IV. Accounting Data:
CPU Usage: The amount of time the process uses the CPU.
V. I/O Status and IPC Information:
I/O Devices Allotted: A list of the I/O devices assigned to the process.
IPC Information: Details on the process's use of signals, messages, and other IPC mechanisms.
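To make the layout concrete, here is a simplified, hypothetical PCB expressed as a C structure; the field names and sizes are illustrative only, and real kernels (for example, Linux's task_struct) hold far more fields:

```c
/* A simplified, hypothetical PCB layout in C; real kernels
 * store far more information than this sketch shows. */
#include <stdint.h>

typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

struct pcb {
    /* I. Identification */
    int32_t  pid;            /* process ID */
    int32_t  ppid;           /* parent process ID */
    int32_t  uid, gid;       /* owning user and group */

    /* II. Processor state */
    proc_state_t state;      /* current process state */
    uint64_t pc;             /* saved program counter */
    uint64_t regs[16];       /* saved general-purpose registers */

    /* III. Scheduling and memory management */
    int      priority;       /* scheduling priority */
    struct pcb *next;        /* link in a scheduling queue */
    uint64_t base, limit;    /* address-space bounds */
    void    *page_table;     /* pointer to the page table */

    /* IV. Accounting */
    uint64_t cpu_time_used;  /* CPU time consumed so far */

    /* V. I/O status */
    int      open_files[16]; /* descriptors of allotted I/O resources */
};
```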
Why the PCB Is Important
Process Management:
The PCB enables the operating system to control the creation, execution, and termination of processes.
Context Switching:
When a context switch occurs, the current process's state must be saved and the next process's state must be loaded. The PCB contains all the information required to perform this switch efficiently.
Allocation of Resources:
Information on the resources allotted to the process is included in the PCB, guaranteeing effective resource
management and release.
Scheduling:
The scheduler uses data from the PCB to determine which process should run next, based on priority and other scheduling criteria.
Protection:
Data regarding memory usage and user rights guarantees that processes run within their allotted resources and do not interfere with one another.
Accounting and Monitoring:
By tracking resource usage and execution time, the PCB aids in monitoring system performance and enforcing resource consumption policies.
b) What is thrashing? What are its causes, and how can it be resolved?
ANSWER: Thrashing occurs when the system continuously swaps pages in and out of the disk because there is not enough physical RAM to hold the working sets of all the currently running processes.
Causes of Thrashing
Excessive Multiprogramming:
Too many processes running at once raises the total memory requirement, which increases the frequency of page faults and swapping.
Large Working Sets:
Thrashing can happen when a process's working set (the set of pages it is currently using) is very large, so that the combined working sets of all processes exceed the available physical memory.
Ineffective Page Replacement Algorithms:
Poorly chosen page replacement policies can make thrashing worse. For instance, a policy that regularly evicts pages that are still in use will raise the page-fault rate.
Effects of Thrashing
Severe Slowdown: The system slows down significantly because it spends most of its time swapping pages instead of executing instructions.
Low CPU Utilization: The CPU spends more time handling page faults and memory management than executing process instructions.
Resource Contention: A significant increase in disk I/O operations creates competition for disk resources and can degrade the performance of other I/O-bound processes.
Solutions to Thrashing
Reducing the Degree of Multiprogramming: Running fewer processes concurrently ensures that the working sets of the active processes fit into the available physical memory.
Better Page Replacement Policies: Page faults can be decreased by using more effective page replacement policies, such as Least Recently Used (LRU), or other algorithms that consider the frequency and recency of page accesses (a sketch of LRU follows this list).
Improving Locality of Reference: Programs structured to take advantage of locality of reference reduce page faults by keeping actively used pages in memory.
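As referenced above, here is a minimal sketch of LRU page replacement in C; the reference string and frame count are hypothetical, and the goal is only to show how recency information drives eviction:

```c
/* Minimal sketch of LRU page replacement over a (hypothetical)
 * reference string, counting page faults. */
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3};   /* hypothetical reference string */
    int n = sizeof refs / sizeof refs[0];
    int frame[FRAMES], last_used[FRAMES];
    int faults = 0;

    /* -1 marks empty frames; empty frames look "least recently used". */
    for (int i = 0; i < FRAMES; i++) { frame[i] = -1; last_used[i] = -1; }

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[t]) hit = i;

        if (hit >= 0) {
            last_used[hit] = t;              /* page resident: refresh recency */
        } else {
            faults++;                        /* page fault: evict the LRU frame */
            int victim = 0;
            for (int i = 1; i < FRAMES; i++)
                if (last_used[i] < last_used[victim]) victim = i;
            frame[victim] = refs[t];
            last_used[victim] = t;
        }
    }
    printf("page faults: %d out of %d references\n", faults, n);
    return 0;
}
```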
QUESTION 5:
a) Discuss the different File Access Methods.
ANSWER: Methods for reading and writing data to and from a storage medium are called file access methods. The primary file access methods are:
Sequential Access Method:
Description: Records are accessed one after the other in a linear, sequential fashion, comparable to a tape drive.
Usage: Usually applied to jobs that require processing a lot of data in a particular order.
Advantages:
Easy to implement.
Effective for storing and processing continuous data.
Cons:
Retrieving a particular record is slow because all preceding data must be read first.
Less flexible for random access.
Direct Access Method (Random Access): Information can be retrieved instantly from any point in the file without
having to go through earlier information. It's similar to retrieving information from a hard drive.
Advantages:
Fast access to any record, regardless of its position in the file (a short sketch contrasting this with sequential access follows at the end of this section).
Disadvantages:
More complex to implement, and less efficient than sequential access for purely sequential workloads.
Indexed Sequential Access Method (ISAM): A combination of the sequential and direct access methods. An index is used to locate the general area of the data quickly, and sequential access is then used to find the precise record.
Usage: Frequently utilized in database management systems where sequential processing and quick access are required.
Advantages: Records can be found more quickly than with simple sequential access.
Effective for both sequential and random access patterns.
Cons: Maintaining the index adds storage and update overhead.
Hashed Access Method:
Description: Provides extremely fast access by using a hash function to determine where the needed data is located.
Application: Suited to databases and situations requiring fast inserts and lookups.
Cons: The hash function must be carefully designed to minimize collisions.
Inefficient for sequential access.
Clustered Access Method:
Description: To improve locality of reference and efficiency, data is stored in clusters based on certain criteria.
Usage: Frequently employed in database systems to enhance query performance.
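The difference between sequential and direct access is easy to see with fixed-size records. The sketch below (in C, with a hypothetical file name and record layout) first reads the file sequentially, then uses fseek to jump straight to a chosen record:

```c
/* Minimal sketch contrasting sequential and direct access on a file
 * of fixed-size records; the file name and record layout are hypothetical. */
#include <stdio.h>

struct record { int id; char name[28]; };   /* fixed-size record (typically 32 bytes) */

int main(void) {
    FILE *f = fopen("records.dat", "rb");
    if (!f) { perror("fopen"); return 1; }

    struct record r;

    /* Sequential access: read records one after another. */
    while (fread(&r, sizeof r, 1, f) == 1)
        printf("seq: id=%d\n", r.id);

    /* Direct (random) access: jump straight to record number 5. */
    if (fseek(f, (long)(5 * sizeof r), SEEK_SET) == 0 &&
        fread(&r, sizeof r, 1, f) == 1)
        printf("direct: id=%d\n", r.id);

    fclose(f);
    return 0;
}
```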
b) Describe the various techniques used for I/O control.
ANSWER: Polling Definition: The CPU repeatedly checks the status of an I/O device to determine whether it needs servicing.
Cons: It is inefficient because CPU time is used even when no I/O is needed.
Interrupt-Driven I/O Definition: When a device needs servicing, it sends an interrupt signal to the central processing unit (CPU); until then, the CPU is free to work on other tasks.
Advantages: More efficient than polling, since the CPU does not continuously monitor the device status.
Cons: A well-designed interrupt-handling mechanism is needed to keep overhead and complexity low.
Definition of Direct Memory Access (DMA): DMA controllers handle data transfers between memory and I/O devices
instead of the CPU.
Pros: Because it frees the CPU to work on other tasks, it is very efficient for large data transfers.
Cons: Complex management and additional hardware (DMA controller) are needed.
Programmed I/O (PIO) Definition: The CPU itself transfers data between I/O devices and memory, without the use of a DMA controller.
Spooling Definition: Simultaneous Peripheral Operations On-Line. It entails putting data in a buffer (typically a disk) so that an I/O device can process it later.
Advantages: Beneficial for devices (like printers) that cannot match the pace of the CPU.
Cons: To manage the spooled data, more disk space and management are needed.
Buffering Definition: Temporarily storing data in memory buffers while it is being transferred between two locations, such as a device and an application.
Advantages: Minimizes waiting times by smoothing out speed discrepancies between data producers and consumers.
Cons: Increases memory requirements and administrative overhead.
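As a simple illustration of buffering, the following sketch (with hypothetical file names) copies data through a fixed-size memory buffer, so neither the producer nor the consumer side proceeds one byte at a time:

```c
/* Minimal sketch of buffering: copying data between two files through
 * a fixed-size memory buffer; file names are hypothetical. */
#include <stdio.h>

int main(void) {
    FILE *in  = fopen("input.dat",  "rb");
    FILE *out = fopen("output.dat", "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    char buf[4096];               /* the buffer smooths producer/consumer speed gaps */
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);   /* drain the buffer to the consumer */

    fclose(in);
    fclose(out);
    return 0;
}
```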
QUESTION 6: Explain the different Multiprocessor Interconnections and types of Multiprocessor Operating
Systems.
ANSWER: In multiprocessor systems, where several processors cooperate, interconnections are necessary to enable communication and coordination among the processors. The primary types of multiprocessor interconnections are:
Shared Memory Interconnection:
Bus-Based Systems: A popular and straightforward technique in which several processors share a single bus to access shared memory. Its restricted scalability makes it appropriate only for small systems.
Crossbar Switch: Uses a grid of switches to establish direct connections between processors and memory modules. More scalable than bus-based systems, but also more sophisticated and costly.
Multistage Interconnection Networks (MIN): Provides higher scalability by connecting processors and memory
modules over a series of stages of switches. Banyan, Omega, and Butterfly networks are a few examples.
Message-Passing Interconnection:
Point-to-Point: Direct links between pairs of processors. Effective in certain setups but not scalable to larger systems.
Ring: Processors are connected in a circular network, each linked to its two neighbors. Offers modest simplicity and scalability.
Mesh/Torus: Processors are placed in a grid (mesh), optionally with wrap-around connections added (torus). Offers reasonable routing simplicity and high scalability.
Hypercube: Each processor is connected to the processors whose binary addresses differ from its own in one bit. Extremely scalable, with fault tolerance and efficient routing.
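The binary numbering makes routing simple to reason about: the minimum number of hops between two nodes is the number of bit positions in which their addresses differ. A minimal sketch in C:

```c
/* Minimal sketch of hypercube routing: the hop count between two nodes
 * equals the Hamming distance between their binary addresses. */
#include <stdio.h>

/* Count the bits in which two node addresses differ. */
static int hypercube_hops(unsigned a, unsigned b) {
    unsigned diff = a ^ b;   /* XOR marks the differing dimensions */
    int hops = 0;
    while (diff) {
        hops += diff & 1u;
        diff >>= 1;
    }
    return hops;
}

int main(void) {
    /* In a 3-dimensional hypercube (8 nodes), routing from node 0 (000)
     * to node 5 (101) takes 2 hops: flip bit 0, then flip bit 2. */
    printf("hops from 0 to 5: %d\n", hypercube_hops(0u, 5u));
    return 0;
}
```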
Types of Multiprocessor Operating Systems
Multiprocessor operating systems oversee the operations and resources of multiprocessor systems, allowing effective coordination and use. The main categories are:
Symmetric Multiprocessing (SMP): Each processor has equal access to I/O devices, and all processors share a single instance of the operating system.
Benefits: Simplifies operating system architecture and programming. Any processor can run a process, which
efficiently balances the load.
Disadvantages: Competition for shared resources limits scalability.
Asymmetric Multiprocessing (AMP): Each processor has a predetermined role and its own instance of the operating system (e.g., one processor may handle all I/O operations).
Benefits: Lessens competition for resources, which in some cases makes performance more predictable.
Network of Independent Computers: Every processor in the network runs a separate operating system instance and has its own memory, yet the machines collaborate to solve problems.
Cons: Due to the requirement for distributed algorithms and communication protocols, it is difficult to create and
administer.
Clustered Systems:
Description: A number of computers (nodes), each running its own operating system and connected by a fast network. To the end user, the cluster appears as a single system.
Benefits: Blends the simplicity of management of centralized systems with the high availability and scalability of
distributed systems.
Cons: Maintaining coherence between nodes can be difficult, and network overhead may constitute a bottleneck.