
Centre for Distance & Online Education

ASSIGNMENT

NAME PRITAM
ROLL NO 2314516345
SESSION MARCH 2024
PROGRAM BACHELOR OF COMPUTER APPLICATION (BCA)
SEMESTER II
COURSE CODE & NAME DCA1201 – OPERATING SYSTEM

QUESTION 1: Explain the evolution of operating systems. Write a brief note on operating system structures

ANSWER: Evolution of Operating Systems: Operating systems (OS) have evolved through multiple eras, each
distinguished by notable conceptual and technological advances:
1. Batch Processing Systems (1950s-1960s)
 Early Computers: The first computers had no operating system. Programs were entered manually, and the
computer processed one job at a time.
 Batch Processing: Jobs were grouped and executed sequentially without user interaction. Efficiency rose because
many jobs could be processed without the need for human intervention.

2. Multiprogramming and Time-Sharing Systems (1960s-1970s)


 Multiprogramming: Allowed multiple jobs to reside in memory at once, with the OS scheduling among them to
maximize CPU utilization.
 Time-Sharing: Created the appearance of concurrent execution by granting each user a time slice of the CPU, allowing
numerous users to interact with the system at once.

3. Real-Time Systems (1970s-Present)


 Designed for applications where fast response times and precise timing are essential. Common in time-critical
settings such as industrial control systems, embedded systems, and others.

4. Personal Computer Operating Systems (1980s-Present)


 One of the first personal computer operating systems, MS-DOS (1981), featured a command-line interface.

 Graphical user interfaces (GUIs) were introduced by Windows and Mac OS, which improved computer use and
increased accessibility for a wider range of users.

 UNIX and Linux: UNIX brought robust multitasking and multiuser features, whereas Linux, an open-source variant,
gained popularity on desktops and in server contexts.

5. Distributed and Network Operating Systems (1990s-Present)


 Facilitated the seamless sharing of resources and information by connecting machines across networks.

 Examples include distributed operating systems, which control a collection of different computers and give the
impression that they are a single, cohesive system, and network operating systems (NOS), such as Novell NetWare.

6. Modern Operating Systems (2000s-Present)


 Mobile operating systems: Android and iOS focus on touch interfaces and efficient power management.

 Cloud operating systems: Designed to administer cloud infrastructure and give users online access to scalable
resources.

 Virtualization and Containerization: By offering isolated environments for programs to function in, technologies
like VMware and Docker have completely changed how applications are delivered and managed.

7. Operating System Structures


Operating system structures specify how the various OS components are organized and how they communicate with one
another. Numerous structures have been devised, each with its own pros and cons.
i) Monolithic Systems
 Description: Every OS service runs in kernel mode within a single address space.

 Positives: Direct access to hardware and low overhead make it efficient.

 Cons: Hard to maintain and troubleshoot; one malfunction might bring down the entire system.

 Examples are MS-DOS and early UNIX.


ii) Layered Systems
 The operating system is layered, with the hardware being the lowest layer. Each layer is built upon the one below
it.

 Advantages: Makes system verification and debugging easier; each layer uses only the services of the layer below it.

 Cons: The tiered approach can add overhead, making it less efficient.

 Examples include early iterations of Windows NT and THE operating system.

iii) Microkernels
 Description: Only fundamental services (such as basic inter-process communication and
basic scheduling) run in kernel mode. All other services run in user mode.

 Advantages: Improved reliability and security; easier to extend and modify.

 Cons: Performance penalty from frequent context switches between kernel mode and user mode.
Examples are Minix, QNX, and Mach.

iv) Modules
 Description: Modular in design and akin to monolithic systems, enabling dynamic loading and unloading of kernel
modules.

 Positives: Enhanced maintainability, flexibility, and ease of extension.


 Cons: Slightly more overhead than with pure monolithic systems.

 Examples are Solaris and modern Linux.

v) Virtual Machines
 The operating system (OS) facilitates the simultaneous operation of numerous instances of distinct OSes on the
same hardware by abstracting away the underlying hardware.

 Advantages: Security and isolation; support for several OS environments at once.

 Cons: Performance overhead introduced by virtualization.


 Examples are Hyper-V, VMware, and VirtualBox.

QUESTION 2: What is Scheduling? Discuss the CPU scheduling algorithms

ANSWER: In the context of computers, scheduling is the process of allocating resources to carry out a task that has been
specified. It is a fundamental idea in real-time operating system (RTOS) design, multiprocessing operating system design,
and computer multitasking. Optimizing computer system performance and guaranteeing efficient completion of all activities
are the primary goals of scheduling.
 CPU Scheduling Algorithms
The process of choosing which processes in the ready queue should be assigned to the CPU for execution is known as
CPU scheduling. The jobs are scheduled using a variety of algorithms, each of which has benefits and drawbacks.

1) First-Come, First-Served (FCFS)


Description
 The most basic scheduling algorithm.

 The order in which processes appear in the ready queue determines how they are dispatched.

 Advantages:
 Easy to put into practice.

 Fair in that first-come, first-served policy is followed.

 Disadvantages:
 May result in the "convoy effect," in which short processes must wait for long processes to finish.

 Not well suited to time-sharing systems.
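The convoy effect can be seen with a short simulation. The sketch below is illustrative only (the burst times are made up): it computes per-process waiting times under FCFS when a long job arrives first.

```python
# FCFS: processes run in arrival order; each one waits for all earlier bursts.
def fcfs_waiting_times(burst_times):
    """Return per-process waiting times for First-Come, First-Served."""
    waits, elapsed = [], 0
    for burst in burst_times:
        waits.append(elapsed)   # time spent waiting before this process runs
        elapsed += burst
    return waits

# Three jobs arriving together: a long job first makes the short ones wait.
waits = fcfs_waiting_times([24, 3, 3])
avg = sum(waits) / len(waits)   # (0 + 24 + 27) / 3 = 17.0
```

Reordering the same jobs shortest-first would drop the average wait sharply, which is exactly the motivation for SJF below.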

2) Shortest Job Next (SJN) or Shortest Job First (SJF)


 Description:
 Chooses the process with the shortest expected execution time.

 Can be either preemptive or non-preemptive.

 Advantages:
 Minimizes the average waiting time for processes.

 Drawbacks:
 Requires knowing the length of the next CPU burst, which is not always possible.

 May cause longer processes to starve.

3) Priority Scheduling:


 A priority is given to every process.
 The process with the highest priority (smallest integer ≡ highest priority) receives the CPU.

 Benefits

 Delivers good responsiveness for high-priority jobs.

 Drawbacks:

 May cause lower-priority processes to starve.

 Needs a safeguard against indefinite blocking, such as aging, which gradually raises the priority of waiting processes.
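Priority scheduling is often implemented with a min-heap, so the smallest priority number pops first. A minimal sketch, with made-up process names, using Python's heapq:

```python
import heapq

# Priority scheduling sketch: smallest integer = highest priority.
# Each ready-queue entry is (priority, process_name); heapq pops the smallest.
ready_queue = [(3, "editor"), (1, "interrupt_handler"), (2, "compiler")]
heapq.heapify(ready_queue)

run_order = []
while ready_queue:
    priority, name = heapq.heappop(ready_queue)  # dispatch highest priority
    run_order.append(name)
# run_order == ["interrupt_handler", "compiler", "editor"]
```

A real scheduler would also apply aging (periodically decrementing the priority number of waiting entries) to prevent starvation.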

4) Round Robin (RR):

Description: Each process is assigned a fixed time slice (quantum) in cyclic order.


A process gets pushed to the back of the queue if its execution is not completed in the allotted time slice.

 Benefits
 Strikes a good balance between fairness and responsiveness.

 Ideal for time-sharing systems.


 Drawbacks:

 Selecting the right time quantum is essential: too short a quantum causes excessive context switching, while too
long a quantum leads to poor response times.
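The cyclic rotation described above can be sketched with a simple queue simulation (the process names and burst times below are hypothetical):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate RR; return the order in which processes finish.
    bursts: dict mapping process name -> total CPU time needed."""
    queue = deque(bursts.items())
    finished = []
    while queue:
        name, remaining = queue.popleft()
        if remaining > quantum:
            queue.append((name, remaining - quantum))  # back of the queue
        else:
            finished.append(name)                      # done within its slice
    return finished

order = round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2)
# order == ["P2", "P3", "P1"]: the shortest job finishes first
```

Rerunning with a larger quantum makes RR degenerate toward FCFS, which illustrates why the quantum choice matters.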

5) Multilevel Queue Scheduling:

 Description: Processes are assigned to separate queues according to their priorities or other attributes.
Each queue may use a different scheduling algorithm.

 Benefits

 Allows different scheduling strategies for different kinds of processes.

 Drawbacks:

 Complexity of managing several queues.

 Starvation risk if lower-priority queues are not sufficiently attended to.

6) Multilevel Feedback Queue Scheduling:

 Description: Similar to multilevel queue scheduling, but processes can move between queues according to their
age and behavior.

Benefits
 Adjusts the priority of processes dynamically.

 Improved performance across a range of workloads.

 Drawbacks:

 extremely difficult to manage and implement.

7) Shortest Remaining Time First (SRTF):

 A preemptive variant of SJF.

 The process with the smallest remaining execution time is chosen next.

 Benefits

 gives the lowest possible average waiting time.

 Drawbacks:

 Requires accurate knowledge of the remaining time, which may not be available.

 May cause longer processes to starve.

QUESTION 3: Discuss Interprocess Communication and critical-section problem along with use of semaphores .

ANSWER: Interprocess communication (IPC) refers to the mechanisms an operating system provides for processes to
exchange data and coordinate their actions. The goal of IPC is to enable data flow between processes, whether they run
on the same system or on separate machines connected by a network. Typical IPC techniques
include:
I. Shared Memory: A region of memory is shared among several processes. The processes can read and write this region
directly, enabling fast data exchange.

II. Message Passing: Message passing is the exchange of messages between processes. You can use the following to
accomplish this:

 Pipes: A stream of data between processes, either unidirectional or bidirectional.

 Message Queues: A queue into which processes can insert messages and from which they can retrieve them.

 Sockets : Sockets are data transmission and receiving endpoints that are commonly utilized in network
communication.

 Semaphores: Used to manage access to shared resources and to signal amongst processes.

 Signals: Signals are brief alerts that are issued to a process in response to certain circumstances, such as requests for
termination.
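As a small illustration of message passing, the sketch below sends a message through an OS pipe using Python's os module. For brevity both ends of the pipe live in one process; in real IPC the pipe is created before a fork() and each process keeps one end.

```python
import os

# Message passing through a pipe: one end writes bytes, the other reads them.
read_fd, write_fd = os.pipe()          # kernel-backed unidirectional channel

os.write(write_fd, b"hello from producer")
os.close(write_fd)                     # closing signals end-of-data to readers

message = os.read(read_fd, 1024)       # consumer reads the queued bytes
os.close(read_fd)
```

The same pattern underlies shell pipelines: the shell creates the pipe, then each child process inherits one end.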

III. The Critical-Section Problem


When several processes need to concurrently access and alter shared resources, the critical-section problem occurs.
The portion of code that accesses shared resources is called the critical section of a process. Only one process may
execute its critical section at a time, in order to prevent race conditions.
IV. Requirements for a Solution
 Any solution to the critical-section problem must satisfy three essential requirements:

 Mutual Exclusion: Only one process may execute in its critical section at any given time.

 Progress: If no process is executing in its critical section, one of the processes wishing to enter must be allowed
to do so.

 Bounded Waiting: After a process requests to enter its critical section and before the request is granted, there is a
limit on how many times other processes are permitted to enter their critical sections.
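Semaphores are the classic tool for enforcing mutual exclusion. The sketch below uses Python's threading.Semaphore as a binary semaphore (a mutex) so that concurrent increments of a shared counter are not lost; the thread count and loop size are arbitrary.

```python
import threading

counter = 0
mutex = threading.Semaphore(1)   # binary semaphore: value 1 = unlocked

def increment(times):
    global counter
    for _ in range(times):
        mutex.acquire()          # wait(): block until we may enter
        counter += 1             # critical section: one thread at a time
        mutex.release()          # signal(): let the next waiter in

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# counter == 40000: mutual exclusion prevented lost updates
```

Without the acquire/release pair, interleaved read-modify-write sequences could lose updates, which is exactly the race condition the critical-section requirements rule out.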

SET II

QUESTION 4:
a) What is a Process Control Block? What information does it hold and why?
ANSWER: Operating systems employ a data structure called a Process Control Block (PCB) to hold all of the information
about a process. It is necessary for an operating system that supports multitasking to manage and control processes. Every
process has a PCB, which is kept up to date and holds vital data that the operating system needs to efficiently oversee the
operation.
 Details Stored in a PCB
Process Identification Data:
 Process ID (PID): A special number assigned to the process.
 Parent Process ID (PPID): The parent process's identifier.

 User ID (UID) and Group ID (GID): Identifiers of the user and group that own the process.

I. Process State Information:

Process State: Denotes the current state of the process (e.g., new, ready, running, waiting, terminated).
Program Counter (PC): The address of the next instruction to be executed.

II. CPU Registers:

The values of the process's CPU registers. These are saved and restored whenever a context switch occurs.
CPU Scheduling Information:

 Priority: The process's degree of priority.

 Scheduling Queue Pointers: Pointers to the process's position in the scheduling queues.

 Other Scheduling Parameters: Additional data used by the scheduler.

III. Memory Management Information:

Base and Limit Registers: Addresses that delimit the address space of the process.
Page Tables or Segment Tables: Structures used for address translation and memory protection.
IV. Accounting Data:
 CPU Usage: The amount of time the process uses the CPU.

 Elapsed Time: Total execution time so far.

 Time Limits: Any time constraints imposed on execution.

 Information about I/O Status:

 List of Open Files: Files that the process has opened.

 I/O Devices Allotted: A list of the I/O devices assigned to the process.

 Pending I/O Operations: Requests that have not yet completed.

 Interprocess Communication Information:

 Details on the process's use of signals, messages, and other IPC protocols.
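As an illustration only, the categories above can be modeled as a simplified PCB structure. Real kernels (e.g. Linux's task_struct) hold far more, and the field names here are invented for the sketch.

```python
from dataclasses import dataclass, field

# A toy PCB covering the categories listed above; illustrative, not a real layout.
@dataclass
class ProcessControlBlock:
    pid: int                                  # process identification
    ppid: int
    state: str = "ready"                      # new, ready, running, waiting, ...
    program_counter: int = 0                  # address of the next instruction
    registers: dict = field(default_factory=dict)   # saved on context switch
    priority: int = 0                         # scheduling information
    open_files: list = field(default_factory=list)  # I/O status
    cpu_time_used: float = 0.0                # accounting data

pcb = ProcessControlBlock(pid=42, ppid=1, priority=5)
pcb.state = "running"                         # dispatcher updates the state
```

On a context switch the kernel would save the live register values into `registers` and later restore them, which is why this structure must survive while the process is not running.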

 Why the PCB Holds This Information

 Process Management:

The PCB enables the operating system to control a process from creation through execution to termination.
 Context Switching:

When a context switch occurs, the current process's state must be saved and the next process's state loaded. The
PCB contains all the information required to perform this switch efficiently.

 Resource Allocation:

Information on the resources allotted to the process is included in the PCB, guaranteeing effective resource
management and release.

 Scheduling:

The scheduler uses data from the PCB to decide which process should run next, based on priority and other
scheduling criteria.

 Protection and Security:

Information about memory bounds and user rights ensures that processes stay within their allotted resources and do
not interfere with one another.

 Accounting and Monitoring:

The PCB aids in monitoring system performance and enforcing resource consumption policies by tracking resource
usage and execution time.

b) What is Thrashing? What are its causes?


ANSWER: Thrashing is a performance problem in computer systems, especially when it comes to managing virtual memory,
where the system swaps pages in and out of memory for extended periods of time instead of carrying out real tasks. The
system performance is severely degraded as a result of this excessive paging.

 Causes of Thrashing


 Thrashing happens for a number of reasons, such as:

 Insufficient Physical Memory:

 The system continuously swaps pages to and from disk when there is not enough physical RAM to hold the
working sets of all currently running processes.
 Excessive Multiprogramming:

 Too many processes running at once raises the total memory demand, which increases the
frequency of page faults and swapping.

 Poor Locality of Reference:


 A process that does not exhibit good locality of reference frequently accesses pages that are not currently in
memory, which results in a high rate of page faults.

 Large Working Sets:

Thrashing can occur when a process's working set (the set of pages it is currently using) is very large and the
combined working sets of all processes exceed the available physical memory.
 Poor Page-Replacement Algorithms:

Ineffective or inappropriate page-replacement policies can worsen thrashing. For example, a policy that regularly
evicts pages still in active use leads to higher page-fault rates.

 Effects of Thrashing

Severe Performance Degradation: The system slows down significantly because it spends most of its time paging instead of executing instructions.

 Higher CPU Overhead:

The CPU spends more time handling page faults and memory management than executing process instructions.

 Resource Contention:

The sharp increase in disk I/O operations creates contention for disk resources and can hurt the performance of
other I/O-bound processes.

 Solutions to Thrashing

 Reducing the Multiprogramming Level:

Running fewer processes concurrently, so that the working sets of active processes fit into available physical
memory, lessens thrashing.

 Adding Physical Memory:


More RAM can hold larger working sets and reduces the need for paging.

 Using Better Page-Replacement Algorithms:

Page faults can be reduced with more effective replacement policies, such as Least Recently Used (LRU), or other
algorithms that consider the frequency and recency of page accesses.

 Locality-Aware Scheduling:

Scheduling programs to take advantage of locality of reference keeps active pages in memory and reduces page
faults.
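As an illustration of the LRU policy mentioned above, the sketch below counts page faults for a reference string with three frames; the reference string is a standard textbook example, not data from this assignment.

```python
from collections import OrderedDict

def lru_faults(reference_string, frames):
    """Count page faults under LRU replacement with a fixed frame count."""
    memory = OrderedDict()   # insertion order doubles as recency order
    faults = 0
    for page in reference_string:
        if page in memory:
            memory.move_to_end(page)          # refresh: now most recently used
        else:
            faults += 1                       # page fault: page not resident
            if len(memory) == frames:
                memory.popitem(last=False)    # evict the least recently used
            memory[page] = True
    return faults

faults = lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], frames=3)
# faults == 10 for this classic reference string with 3 frames
```

Adding a fourth frame to the same call reduces the fault count, showing how extra physical memory relieves paging pressure.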

QUESTION 5 :
a) Discuss the different File Access Methods.
ANSWER : Methods for reading and writing data to and from a storage medium are called file access methods. The
primary ways to access files are:

 Sequential Access Method

Description: Records are accessed one after another in a linear, sequential fashion, much like reading from a tape drive.

Usage: Typical for jobs that process large amounts of data in a fixed order.

Advantages:
Easy to put into practice.
Effective at handling and storing continuous data.

Cons:
Retrieving a particular record is slow because all earlier data must be read first.
Poorly suited to random access.

 Direct Access Method (Random Access): Records can be read directly from any position in the file without
reading through earlier data, much like accessing blocks on a hard drive.

Usage: Suited to situations requiring quick access to particular records.

 Advantages:

 Quick access to specific records.


 Flexible for various use cases.

 Disadvantages:

 Can be more complex to implement.


 May require additional overhead for indexing.
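Direct access is easiest with fixed-size records, where a record's byte offset is simply record_number * record_size. A minimal sketch (the file name and record format are made up for illustration):

```python
import os
import tempfile

RECORD_SIZE = 8                             # every record is exactly 8 bytes
path = os.path.join(tempfile.mkdtemp(), "records.dat")

# Write five fixed-size records sequentially.
with open(path, "wb") as f:
    for i in range(5):
        f.write(f"rec-{i:04d}".encode())    # "rec-0000" ... "rec-0004"

# Direct access: seek straight to record 3 without reading records 0-2.
with open(path, "rb") as f:
    f.seek(3 * RECORD_SIZE)                 # offset arithmetic, no scanning
    record = f.read(RECORD_SIZE).decode()   # "rec-0003"
```

With variable-length records this arithmetic breaks down, which is where an index (as in ISAM, below) earns its keep.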

 Indexed Sequential Access Method (ISAM)

The Indexed Sequential Access Method (ISAM) combines sequential and direct access: an index is used to locate the
general area of the data quickly, and sequential access is then used to find the exact record.
Usage: Frequently utilized in database management systems where sequential processing and quick access are required.

Advantages: Records can be found more quickly than with simple sequential access.
Effective for sequential and random access patterns alike.

Cons: The overhead of keeping the index up to date.


Complexity of index management.

 Hashed Access Method

Description: Provides extremely fast access by utilizing a hash function to determine where the needed data is
located.

Application: Fit for databases and situations requiring fast inserts and lookups.

Benefits: Very fast access for both insertion and retrieval.


Collisions (two records hashing to the same location) are usually handled efficiently.

Cons: The hash function must be carefully designed to minimize collisions.
Inefficient for sequential access.

 Clustered Access Method

Description: To improve locality of reference and efficiency, data is kept in clusters based on certain criteria.
Usage: Frequently employed in database systems to enhance query efficiency.

Benefits: Better query performance when accessing related records.


Minimizes I/O operations by storing related data together.

Cons: Cluster management and maintenance can be complex.


Less flexible for ad hoc queries.

b) What are I/O Control Strategies?


ANSWER : Operating systems employ I/O (Input/Output) control strategies to manage the communication
between the central processing unit (CPU) of a computer and peripheral devices, such as hard drives, printers, keyboards, and
network interfaces. Effective resource allocation, data transfer, and overall system performance depend on these strategies.
The following are important I/O control techniques:
 Polling: The CPU repeatedly checks an I/O device's status to see whether it needs service.

Pros: Easy to put into practice.

Cons: It is inefficient because CPU time is used even when no I/O is needed.

 Interrupt-Driven I/O: When a device needs service, it sends an interrupt signal to the central
processing unit (CPU), which is free to work on other tasks in the meantime.

Advantages: Since the CPU is not continuously monitoring the device status, it is more efficient than polling.

Cons: To minimize overhead and complexity, a well-designed interrupt handling mechanism is needed.
 Direct Memory Access (DMA): A DMA controller handles data transfers between memory and I/O devices
without involving the CPU.

 Pros: Very efficient for large data transfers, because it frees the CPU to work on other tasks.

 Cons: Complex management and additional hardware (DMA controller) are needed.

 Programmed I/O (PIO): The CPU itself transfers data to and from I/O devices under direct software control,
without using DMA.

 Positives: Easy to use and directly controlled by software.

 Cons: It is inefficient because it uses a lot of CPU power.

 Spooling (Simultaneous Peripheral Operations On-Line): Data is placed in a buffer, typically on disk,
so that an I/O device can process it later.

 Advantages: Beneficial for gadgets (like printers) that are unable to match the pace of the CPU.

 Cons: To manage the spooled data, more disk space and management are needed.

 Buffering: Data is stored temporarily in memory buffers while it is transferred between two endpoints,
such as a device and an application.
Advantages: Minimizes waiting by smoothing out speed mismatches between data producers and consumers.
Cons: Increases memory requirements and management overhead.
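Buffering between a fast producer and a slower consumer can be sketched with a bounded queue; the buffer size and item count below are arbitrary, and queue.Queue stands in for the kernel's buffer management.

```python
import queue
import threading

# A bounded buffer smooths the speed mismatch between producer and consumer:
# put() blocks when the buffer is full, get() blocks when it is empty.
buffer = queue.Queue(maxsize=4)
received = []

def producer():
    for i in range(10):
        buffer.put(i)        # blocks whenever the 4-slot buffer is full
    buffer.put(None)         # sentinel: no more data

def consumer():
    while True:
        item = buffer.get()  # blocks whenever the buffer is empty
        if item is None:
            break
        received.append(item)

threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# received == [0, 1, ..., 9], delivered in order despite the tiny buffer
```

The same bounded-buffer pattern underlies spooling, with the disk playing the role of a much larger queue.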

QUESTION 6: Explain the different Multiprocessor Interconnections and types of Multiprocessor Operating
Systems.

ANSWER: In multiprocessor systems, where several processors cooperate, interconnections enable communication
and coordination among the processors. The primary types of multiprocessor interconnections are:
 Shared-Memory Interconnection:

Bus-Based Systems: A popular and straightforward technique in which several processors reach shared memory over
a single shared bus. Limited scalability makes it suitable only for small systems.

Crossbar Switch: Uses a grid of switches to establish direct connections between processors and memory modules;
more scalable than a bus, but more sophisticated and costly.

Multistage Interconnection Networks (MIN): Provides higher scalability by connecting processors and memory
modules over a series of stages of switches. Banyan, Omega, and Butterfly networks are a few examples.

 Distributed-Memory Interconnection:

Point-to-Point: Direct links between pairs of processors. Effective in small configurations but not scalable to larger
systems.
Ring: Each processor is connected to two neighbors, forming a ring. Provides moderate simplicity and
scalability.

Mesh/Torus: Processors are arranged in a grid (mesh), optionally with wrap-around connections added (torus).
Provides simple routing and high scalability.

Hypercube: Each processor connects to the processors whose binary addresses differ from its own in one bit; highly
scalable, with fault tolerance and efficient routing.

 Types of Multiprocessor Operating Systems:

Multiprocessor operating systems manage the activities and resources of multiprocessor systems, allowing for
effective coordination and utilization. The main categories are:

 Tightly Coupled Systems:

Under symmetric multiprocessing (SMP), each processor has equal access to I/O devices and shares a single instance
of the operating system.

Benefits: Simplifies operating system architecture and programming. Any processor can run a process, which
efficiently balances the load.
Disadvantages: Competition for shared resources limits scalability.

 Loosely Coupled Systems:

With asymmetric multiprocessing (AMP), each processor has a predetermined role and its own instance of the
operating system (e.g., one processor may handle all I/O operations).

Benefits: Lessens competition for resources, which in some cases makes performance more predictable.

Cons: Compared to SMP, less versatile and more difficult to program.

 Distributed Operating Systems:

Network of Independent Computers: Each node runs its own operating system instance and has its own memory,
yet the nodes cooperate to solve problems.

Benefits: Very fault-tolerant and scalable.

Cons: Due to the requirement for distributed algorithms and communication protocols, it is difficult to create and
administer.

 Clustered Systems:

Array of Machines:
Description: A number of computers (nodes), each running its own operating system and connected by a fast
network. To the end user, the cluster appears as a single system.

Benefits: Blends the simplicity of management of centralized systems with the high availability and scalability of
distributed systems.

Cons: Maintaining coherence between nodes can be difficult, and network overhead may constitute a bottleneck.
