
Govt. Polytechnic College, Khirsadoh

Name - Pratham Singh Parmar
Branch - Computer Science
Roll No - 19023C04041
Subject - Operating System
Guided By - Shri H.D. Patil Sir

S.No   Experiment Name                                              Grade

1.     To study the operating system and its functions.
2.     To study the types of operating system.
3.     To study the process and its states.
4.     To study the BIOS setup.
5.     To study the following CPU scheduling algorithms:
       1. FCFS   2. SJF   3. Round Robin
6.     To study the page replacement algorithms.
7.     To study first fit, best fit, next fit and worst fit.
8.     To study paging and demand paging.
9.     To study the disk scheduling algorithms.
10.    To study segmentation.


Experiment 01

▪ Aim :- To study the operating system and its functions.

▪ Theory

What is Operating System ?

An operating system is a program on which application programs are
executed and which acts as a communication bridge (interface) between
the user and the computer hardware.
The main task an operating system carries out is the allocation of
resources and services, such as the allocation of memory, devices,
processors and information. The operating system also includes
programs to manage these resources, such as a traffic controller, a
scheduler, a memory management module, I/O programs, and a file
system.


Functions of an Operating System :-

1. Security –
The operating system uses password protection and similar other
techniques to protect user data. It also prevents unauthorized access to
programs and user data.

2. Control over system performance –
The operating system monitors overall system health to help improve
performance. It records the response time between service requests and
system response to have a complete view of the system health. This can
help improve performance by providing important information needed to
troubleshoot problems.

3. Job accounting –
The operating system keeps track of the time and resources used by
various tasks and users. This information can be used to track resource
usage for a particular user or group of users.

4. Error detecting aids –
The operating system constantly monitors the system to detect errors and
avoid the malfunctioning of the computer system.

5. Coordination between other software and users –


Operating systems also coordinate and assign interpreters, compilers,
assemblers and other software to the various users of the computer
systems.

6. Memory Management –
The operating system manages the primary memory or main memory.
Main memory is made up of a large array of bytes or words where each
byte or word is assigned a certain address. Main memory is fast
storage and it can be accessed directly by the CPU. For a program to be
executed, it must first be loaded into the main memory. An operating
system performs the following activities for memory management:
It keeps track of primary memory, i.e., which bytes of memory are
used by which user program, which memory addresses have already
been allocated and which have not yet been used. In multiprogramming,
the OS decides the order in which processes are granted access to
memory, and for how long. It allocates memory to a process when the
process requests it and deallocates the memory when the process has
terminated or is performing an I/O operation.


7. Processor Management –
In a multiprogramming environment, the OS decides the order in which
processes have access to the processor, and how much processing time
each process has. This function of the OS is called process scheduling. An
operating system performs the following activities for processor
management:
It keeps track of the status of processes; the program which performs this
task is known as the traffic controller. It allocates the CPU (processor) to a
process and de-allocates the processor when a process is no longer required.

8. Device Management –
An OS manages device communication via the respective device drivers. It
performs the following activities for device management: it keeps track of
all devices connected to the system and designates a program responsible
for every device, known as the Input/Output controller. It decides which
process gets access to a certain device and for how long, allocates devices
in an effective and efficient way, and deallocates devices when they are no
longer required.

9. File Management –
A file system is organized into directories for efficient and easy navigation
and usage. These directories may contain other directories and other files.
An operating system carries out the following file management activities:
it keeps track of where information is stored, user access settings, the
status of every file, and more. These facilities are collectively known as
the file system.


Experiment 02

Aim :- To study the types of operating system.

▪ Theory :- Types of Operating System

• Batch Operating System


• Multitasking/Time Sharing OS
• Multiprocessing OS
• Real Time OS
• Distributed OS
• Network OS
• Mobile OS

▪ 1. Batch Operating System :- Some computer processes are very lengthy and
time-consuming. To speed up processing, jobs with similar needs are batched
together and run as a group. The user of a batch operating system never
directly interacts with the computer. In this type of OS, every user prepares
his or her job on an offline device like a punch card and submits it to the
computer operator.
▪ 2. Multi-Tasking/Time-Sharing Operating System :- A time-sharing operating
system enables people located at different terminals (shells) to use a single
computer system at the same time. The processor time (CPU) which is shared
among multiple users is termed time sharing.
▪ 3. Real Time OS :- In a real time operating system, the time interval to process
and respond to inputs is very small. Examples: military software systems, space
software systems.
▪ 4. Distributed Operating System :- Distributed systems use many processors
located in different machines to provide very fast computation to their users.


▪ 5. Network Operating System :- A network operating system runs on a server.
It provides the server with the capability to manage data, users, groups,
security, applications, and other networking functions.
▪ 6. Mobile OS :- Mobile operating systems are OSes that are specially designed
to power smartphones, tablets, and wearable devices. The most famous mobile
operating systems are Android and iOS, but others include BlackBerry OS,
webOS, and watchOS.


Experiment 03
AIM :- To study the process and its states.

THEORY :- To put it in simple terms, we write our computer programs in a text file and when we
execute this program, it becomes a process which performs all the tasks mentioned in the program.
When a program is loaded into the memory and it becomes a process, it can be divided into four
sections ─ stack, heap, text and data. A simplified layout of a process inside main memory consists of
the following sections:

• Stack :- The process Stack contains the temporary data such as method/function parameters, return
address and local variables.

• Heap :- This is dynamically allocated memory to a process during its run time.

• Text :- This section contains the compiled program code. The current activity is represented by the
value of the Program Counter and the contents of the processor's registers.

• Data :- This section contains the global and static variables.

1. Start :- This is the initial state when a process is first started/created.

2. Ready :- The process is waiting to be assigned to a processor. Ready processes are waiting to have
the processor allocated to them by the operating system so that they can run. A process may come into
this state after the Start state, or while running, if it is interrupted by the scheduler so that the CPU
can be assigned to some other process.

3. Running :- Once the process has been assigned to a processor by the OS scheduler, the process state
is set to running and the processor executes its instructions.

4. Waiting :- Process moves into the waiting state if it needs to wait for a resource, such as waiting for
user input, or waiting for a file to become available.

5. Terminated or Exit :- Once the process finishes its execution, or it is terminated by the operating
system, it is moved to the terminated state where it waits to be removed from main memory.
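To make the life cycle concrete, the short Python sketch below models these five states and the legal transitions between them. The Process class and the transition table are illustrative assumptions for this sketch, not part of any real operating system.

```python
# Minimal sketch of the five-state process life cycle described above.
# The class and the transition table are illustrative only.

ALLOWED = {
    "Start":      {"Ready"},
    "Ready":      {"Running"},
    "Running":    {"Ready", "Waiting", "Terminated"},
    "Waiting":    {"Ready"},
    "Terminated": set(),
}

class Process:
    def __init__(self, pid):
        self.pid = pid
        self.state = "Start"          # initial state when the process is created

    def move_to(self, new_state):
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process(pid=1)
p.move_to("Ready")        # admitted by the OS
p.move_to("Running")      # dispatched by the scheduler
p.move_to("Waiting")      # blocked, e.g. waiting for I/O
p.move_to("Ready")        # I/O completed
p.move_to("Running")
p.move_to("Terminated")
print(p.pid, p.state)     # 1 Terminated
```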


Experiment 04

AIM :- To study the BIOS (Basic Input Output System) setup.

THEORY :- BIOS (Basic Input/Output System) is the program a computer's microprocessor uses to start
the computer system after it is powered on. It also manages data flow between the computer's
operating system (OS) and attached devices, such as the hard disk, video adapter, keyboard, mouse
and printer.

How BIOS works :- BIOS comes included with computers, as firmware on a chip on the motherboard. In
contrast, an OS like Windows or iOS can either be pre-installed by the manufacturer or vendor or
installed by the user. BIOS is a program that is made accessible to the microprocessor on an erasable
programmable read-only memory (EPROM) chip. When users turn on their computer, the
microprocessor passes control to the BIOS program, which is always located at the same place on
EPROM. When BIOS boots up a computer, it first determines whether all of the necessary attachments
are in place and operational. Any piece of hardware containing files the computer needs to start is
called a boot device. After testing and ensuring boot devices are functioning, BIOS loads the OS -- or
key parts of it -- into the computer's random access memory (RAM) from a hard disk or diskette drive
(the boot device).

The 4 functions of BIOS :- BIOS identifies, configures, tests and connects computer hardware to the OS
immediately after a computer is turned on. The combination of these steps is called the boot process.
These tasks are each carried out by BIOS' four main functions:

1. Power-on self-test (POST). This tests the hardware of the computer before loading the OS.

2. Bootstrap loader. This locates the OS.

3. Software/drivers. This locates the software and drivers that interface with the OS once running.

4. Complementary metal-oxide semiconductor (CMOS) setup. This is a configuration program that
enables users to alter hardware and system settings. CMOS is the name of BIOS' non-volatile memory.

BIOS manufacturers :- BIOS, in its beginnings, was originally owned by IBM. However, some companies,
such as Phoenix Technologies, have reverse-engineered IBM's original version to create their own.
Phoenix, in doing this, allowed other companies to create clones of the IBM PC and, more importantly,
create non-IBM computers that work with BIOS. One company that did this was Compaq. Today, many
manufacturers produce motherboards with BIOS chips in them.

Some examples are the following:


• AMI
• Asus
• Foxconn
• Hewlett Packard (HP)
• Ricoh

Knowing the motherboard manufacturer is important because users may want to update their BIOS and
chipset drivers -- the drivers that enable the OS to work with other devices in the computer, such as a
video card -- to the most recent versions. Driver updates may improve computer performance or patch
recent BIOS-level security vulnerabilities. Each manufacturer has a unique way of updating these drivers.


Experiment 05

AIM :- To study the following CPU scheduling algorithms:

(1) First come first serve (FCFS)

(2) Shortest job first (SJF)

(3) Round robin

THEORY :- 1. First Come First Serve :- First Come First Serve (FCFS) is an operating system scheduling
algorithm that executes queued requests and processes in the order of their arrival. It is the easiest
and simplest CPU scheduling algorithm. In this type of algorithm, the process which requests the CPU
first gets the CPU allocated first.

EXAMPLE :- This is an example of First Come First Serve CPU scheduling.

Process ID   Arrival Time (ms)   Burst Time (ms)
P1           0                   5
P2           2                   3
P3           6                   2
P4           7                   3

Gantt chart :- | P1 | P2 | P3 | P4 |
               0    5    8    10   13

Turnaround time = completion time - arrival time
Waiting time = turnaround time - burst time

Total turnaround time = 5 + 6 + 4 + 6 = 21 ms
Average turnaround time = 21 / 4 = 5.25 ms
Total waiting time = 0 + 3 + 2 + 3 = 8 ms
Average waiting time = 8 / 4 = 2 ms

2. SHORTEST JOB FIRST :- Shortest job first (SJF), or shortest job next (SJN), is a scheduling policy that
selects the waiting process with the smallest execution time to execute next. SJN is a non-preemptive
algorithm, although the example below schedules the jobs preemptively.

Process   Burst Time (ms)   Arrival Time (ms)
P1        21                0
P2        3                 1
P3        6                 2
P4        2                 3

Gantt chart :- | P1 | P2 | P4 | P2 | P3 | P1 |
               0    1    3    5    6    12   32

Waiting times: P1 = (32 - 0) - 21 = 11, P2 = (6 - 1) - 3 = 2, P3 = (12 - 2) - 6 = 4, P4 = (5 - 3) - 2 = 0,
so the average waiting time = (11 + 2 + 4 + 0) / 4 = 4.25 ms.
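The FCFS calculation above can be checked with a short Python sketch; it recomputes the completion, turnaround and waiting times for the four processes of the example (the function name fcfs and the tuple layout are assumptions made for this sketch).

```python
# FCFS scheduling sketch: processes are served strictly in arrival order.
# Reproduces the worked example: P1(0,5), P2(2,3), P3(6,2), P4(7,3).

def fcfs(processes):
    """processes: list of (pid, arrival_time, burst_time) sorted by arrival."""
    time, rows = 0, []
    for pid, at, bt in processes:
        time = max(time, at)          # CPU may sit idle until the process arrives
        completion = time + bt
        turnaround = completion - at  # turnaround time = completion - arrival
        waiting = turnaround - bt     # waiting time = turnaround - burst
        rows.append((pid, completion, turnaround, waiting))
        time = completion
    return rows

rows = fcfs([("P1", 0, 5), ("P2", 2, 3), ("P3", 6, 2), ("P4", 7, 3)])
for pid, ct, tat, wt in rows:
    print(pid, "CT =", ct, "TAT =", tat, "WT =", wt)
avg_tat = sum(r[2] for r in rows) / len(rows)   # 21 / 4 = 5.25
avg_wt = sum(r[3] for r in rows) / len(rows)    # 8 / 4 = 2.0
print("Average TAT =", avg_tat, "Average WT =", avg_wt)
```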

3. Round Robin :- The round robin scheduling algorithm is used to schedule processes fairly: each job
gets a fixed time slot or quantum, the job is interrupted (preempted) if it is not completed by then, and
it is placed behind the jobs that arrived during its quantum. This is what makes the scheduling fair. In
the example below the time quantum is 2 units.

Process ID   Arrival Time   Burst Time
P1           0              5
P2           1              3
P3           2              1
P4           3              2
P5           4              3

Gantt chart :- | P1 | P2 | P3 | P1 | P4 | P5 | P2 | P1 | P5 |
               0    2    4    5    7    9    11   12   13   14

• Turnaround time = exit time – arrival time
• Waiting time = turnaround time – burst time

Process ID   Exit Time   Turnaround Time   Waiting Time
P1           13          13 – 0 = 13       13 – 5 = 8
P2           12          12 – 1 = 11       11 – 3 = 8
P3           5           5 – 2 = 3         3 – 1 = 2
P4           9           9 – 3 = 6         6 – 2 = 4
P5           14          14 – 4 = 10       10 – 3 = 7

• Average turnaround time = (13 + 11 + 3 + 6 + 10) / 5 = 43 / 5 = 8.6 units
• Average waiting time = (8 + 8 + 2 + 4 + 7) / 5 = 29 / 5 = 5.8 units
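The Round Robin table can be reproduced with the small simulation below, using the time quantum of 2 units implied by the Gantt chart (the function name round_robin and the data layout are assumptions of this sketch).

```python
from collections import deque

# Round Robin sketch with a fixed time quantum (illustrative simulation).
# Reproduces the example: P1(0,5), P2(1,3), P3(2,1), P4(3,2), P5(4,3), quantum = 2.

def round_robin(processes, quantum):
    """processes: list of (pid, arrival_time, burst_time) sorted by arrival time."""
    remaining = {pid: bt for pid, at, bt in processes}
    pending = deque(processes)   # processes that have not arrived yet
    ready = deque()              # ready queue of pids
    time, exit_time = 0, {}
    while pending or ready:
        # admit everything that has arrived by the current time
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft()[0])
        if not ready:            # CPU idle: jump to the next arrival
            time = pending[0][1]
            continue
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        time += run
        remaining[pid] -= run
        # jobs that arrived while this one was running join the queue first
        while pending and pending[0][1] <= time:
            ready.append(pending.popleft()[0])
        if remaining[pid] > 0:
            ready.append(pid)    # preempted: back of the ready queue
        else:
            exit_time[pid] = time
    for pid, at, bt in processes:
        tat = exit_time[pid] - at            # turnaround = exit - arrival
        print(pid, "exit =", exit_time[pid], "TAT =", tat, "WT =", tat - bt)

round_robin([("P1", 0, 5), ("P2", 1, 3), ("P3", 2, 1),
             ("P4", 3, 2), ("P5", 4, 3)], quantum=2)
```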


Experiment 06

AIM :- To study the page replacement algorithms.

THEORY :- In a computer operating system that uses paging for virtual memory management, page
replacement algorithms decide which memory pages to page out (sometimes called swap out, or write
to disk) when a page of memory needs to be allocated.

ALGORITHMS :- Page replacement algorithms help to decide which page must be swapped out from the
main memory to create room for the incoming page. The various page replacement algorithms are:

1. FIFO Page Replacement Algorithm

2. LIFO Page Replacement Algorithm

3. LRU Page Replacement Algorithm

4. Optimal Page Replacement Algorithm

5. Random Page Replacement Algorithm

FIFO Page Replacement Algorithm -

• As the name suggests, this algorithm works on the principle of “First in First out“.

• It replaces the oldest page, the one that has been present in the main memory for the longest time.

• It is implemented by keeping track of all the pages in a queue.

LIFO Page Replacement Algorithm -

• As the name suggests, this algorithm works on the principle of “Last in First out“.

• It replaces the newest page, the one that arrived last in the main memory.

• It is implemented by keeping track of all the pages in a stack.

LRU Page Replacement Algorithm -

• As the name suggests, this algorithm works on the principle of “Least Recently Used“.

• It replaces the page that has not been referred to by the CPU for the longest time.

Optimal Page Replacement Algorithm -

• This algorithm replaces the page that will not be referred to by the CPU for the longest time in the future.

• It is practically impossible to implement this algorithm.

• This is because the pages that will not be used for the longest time in the future cannot be predicted.

• However, it is the best known algorithm and gives the least number of page faults.


• Hence, it is used as a performance measurement criterion for other algorithms.

Random Page Replacement Algorithm -

• As the name suggests, this algorithm randomly replaces any page.

• So, this algorithm may behave like any other algorithm, such as FIFO, LIFO, LRU, Optimal etc.

PRACTICE PROBLEM BASED ON PAGE REPLACEMENT ALGORITHMS

Problem-01: A system uses 3 page frames for storing process pages in main memory. It uses the First in
First out (FIFO) page replacement policy. Assume that all the page frames are initially empty. What is
the total number of page faults that will occur while processing the page reference string given below:
4, 7, 6, 1, 7, 6, 1, 2, 7, 2. Also calculate the hit ratio and miss ratio.

Solution:
Total number of references = 10
Total number of page faults occurred = 6

Calculating hit ratio:
Total number of page hits = total number of references – total number of page misses (page faults)
Total number of page hits = 10 – 6 = 4
Thus, hit ratio = total number of page hits / total number of references = 4 / 10 = 0.4 or 40%

Calculating miss ratio:
Miss ratio = total number of page misses / total number of references = 6 / 10 = 0.6 or 60%
Alternatively, miss ratio = 1 – hit ratio = 1 – 0.4 = 0.6 or 60%
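The practice problem can be verified with a few lines of Python. The sketch below simulates FIFO replacement with 3 frames for the reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2 (the function name fifo_page_faults is an assumption of this sketch).

```python
from collections import deque

# FIFO page replacement sketch: evict the page that entered memory first.

def fifo_page_faults(reference_string, frame_count):
    frames = deque()          # left end = oldest page in memory
    faults = 0
    for page in reference_string:
        if page not in frames:            # page fault
            faults += 1
            if len(frames) == frame_count:
                frames.popleft()          # evict the oldest page
            frames.append(page)
    return faults

refs = [4, 7, 6, 1, 7, 6, 1, 2, 7, 2]
faults = fifo_page_faults(refs, frame_count=3)
hits = len(refs) - faults
print("page faults =", faults)               # 6
print("hit ratio   =", hits / len(refs))     # 0.4
print("miss ratio  =", faults / len(refs))   # 0.6
```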


Experiment 07

AIM :- To study first fit, best fit, next fit and worst fit memory allocation.

THEORY :- First Fit :- The first fit approach is to allocate the first free partition or hole that is
large enough to accommodate the process. It finishes after finding the first suitable free
partition.

Best Fit :- The best fit approach allocates the smallest free partition which meets the
requirement of the requesting process. This algorithm first searches the entire list of free
partitions and considers the smallest hole that is adequate. It then tries to find a hole which
is close to the actual process size needed.

Worst Fit :- The worst fit approach is to locate the largest available free portion so that the
portion left over will be big enough to be useful. It is the reverse of best fit.

Next Fit :- Next fit is a modified version of first fit. It begins as the first fit to find a free
partition, but when called the next time it starts searching from where it left off, not from the
beginning. This policy makes use of a roving pointer. The pointer moves along the memory
chain to search for the next fit.
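The four placement strategies can be compared with the rough sketch below, which returns the index of the free block each strategy would choose. The free block sizes, the request size and the function names are made-up values for illustration only.

```python
# Memory allocation strategy sketch: each function returns the index of the
# free block chosen for a request, or None if no block is large enough.

def first_fit(blocks, size):
    for i, b in enumerate(blocks):
        if b >= size:
            return i                      # first hole that is big enough
    return None

def best_fit(blocks, size):
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return min(candidates)[1] if candidates else None   # smallest adequate hole

def worst_fit(blocks, size):
    candidates = [(b, i) for i, b in enumerate(blocks) if b >= size]
    return max(candidates)[1] if candidates else None   # largest hole

def next_fit(blocks, size, start):
    n = len(blocks)
    for k in range(n):                    # roving pointer: resume from 'start'
        i = (start + k) % n
        if blocks[i] >= size:
            return i
    return None

free_blocks = [100, 500, 200, 300, 600]    # example hole sizes (illustrative)
print(first_fit(free_blocks, 212))         # 1 -> the 500 KB block
print(best_fit(free_blocks, 212))          # 3 -> the 300 KB block
print(worst_fit(free_blocks, 212))         # 4 -> the 600 KB block
print(next_fit(free_blocks, 212, start=2)) # 3 -> first fit from the pointer
```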


Experiment 08

AIM :- To study paging and demand paging.

THEORY :-

PAGING :- Paging is a storage mechanism that allows the OS to retrieve processes from the
secondary storage into the main memory in the form of pages. In the paging method, the
main memory is divided into small fixed-size blocks of physical memory, which are called
frames. The size of a frame is kept the same as that of a page to have maximum
utilization of the main memory and to avoid external fragmentation. Paging is used for
faster access to data, and it is a logical concept.
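As a rough illustration of how paging maps a logical address to a physical one, the sketch below splits a logical address into a page number and an offset and looks the page up in a page table. The page size, the page table contents and the function name are assumed example values, not taken from any particular system.

```python
# Paging sketch: logical address -> (page number, offset) -> physical address.
# Page size and page table contents below are made-up example values.

PAGE_SIZE = 1024                 # 1 KB pages
page_table = {0: 5, 1: 2, 2: 7}  # page number -> frame number

def translate(logical_address):
    page_number = logical_address // PAGE_SIZE
    offset = logical_address % PAGE_SIZE
    frame_number = page_table[page_number]   # missing entry would mean a page fault
    return frame_number * PAGE_SIZE + offset

print(translate(2049))   # page 2, offset 1 -> frame 7 -> 7 * 1024 + 1 = 7169
```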

DEMAND PAGING :- According to the concept of virtual memory, in order to execute a
process, only a part of the process needs to be present in the main memory, which means
that only a few pages will be present in the main memory at any time. However, deciding
which pages need to be kept in the main memory and which need to be kept in the secondary
memory is difficult, because we cannot say in advance that a process will require a
particular page at a particular time. To overcome this problem, a concept called demand
paging is introduced. It suggests keeping all pages in the secondary memory until they are
required. In other words, do not load any page into the main memory until it is required.


Experiment 09

AIM :- To study the disk scheduling algorithms.

THEORY :- Disk scheduling is done by operating systems to schedule the I/O requests arriving for
the disk. Disk scheduling is also known as I/O scheduling. Multiple I/O requests may arrive from
different processes, but only one I/O request can be served at a time by the disk controller.

TYPES OF DISK SCHEDULING ALGORITHMS :- Although there are other algorithms that reduce
the seek time of all requests, we will only concentrate on the following disk scheduling
algorithms:

1. First Come-First Serve (FCFS)

2. Shortest Seek Time First (SSTF)

3. Elevator (SCAN)

4. Circular SCAN (C-SCAN)

5. LOOK

6. C-LOOK

1. FIRST COME FIRST SERVE (FCFS) :- FCFS is the simplest disk scheduling algorithm. As the name
suggests, this algorithm entertains requests in the order they arrive in the disk queue.
Example :- Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position
of the head is 50. The total seek time is
(82 - 50) + (170 - 82) + (170 - 43) + (140 - 43) + (140 - 24) + (24 - 16) + (190 - 16) = 642.

Advantages: • Every request gets a fair chance

• No indefinite postponement

2. SHORTEST SEEK TIME FIRST (SSTF) :- In SSTF, requests having the shortest seek time are executed
first. So, the seek time of every request is calculated in advance in the queue and then they are
scheduled according to their calculated seek time. As a result, the request near the disk arm gets
executed first. SSTF is certainly an improvement over FCFS as it decreases the average response time
and increases the throughput of the system. Let us understand this with the help of an example.

Example: Suppose the order of requests is (82, 170, 43, 140, 24, 16, 190) and the current position of
the Read/Write head is 50. So, the total seek time
= (50-43) + (43-24) + (24-16) + (82-16) + (140-82) + (170-140) + (190-170) = 208

Advantages: • Average response time decreases

• Throughput increases

3. SCAN :- In the SCAN algorithm the disk arm moves in a particular direction and services the requests
coming in its path, and after reaching the end of the disk it reverses its direction and again services
the requests arriving in its path. So, this algorithm works like an elevator and hence is also known as
the elevator algorithm. As a result, the requests in the mid-range are serviced more, and those arriving
behind the disk arm have to wait. Example :- Suppose the requests to be addressed are
82, 170, 43, 140, 24, 16, 190, the Read/Write arm is at 50, and it is also given that the disk arm should
move “towards the larger value”. Therefore, the seek time is calculated as: (199-50) + (199-16) = 332
Advantages:

• High throughput

• Low variance of response time

• Average response time

4. CSCAN: In SCAN algorithm, the disk arm again scans the path that has been scanned, after reversing
its direction. So, it may be possible that too many requests are waiting at the other end or there may
be zero or few requests pending at the scanned area. These situations are avoided in CSCAN algorithm
in which the disk arm instead of reversing its direction goes to the other end of the disk and starts
servicing the requests from there. So, the disk arm moves in a circular fashion and this algorithm is
also similar to SCAN algorithm and hence it is known as C-SCAN (Circular SCAN).

Example: Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is also given that the disk arm should move “towards the larger value”. The seek time
is calculated as: (199-50) + (199-0) + (43-0) = 391

Advantages: • Provides more uniform wait time compared to SCAN

5. LOOK :- It is similar to the SCAN disk scheduling algorithm except for the difference that the disk
arm, instead of going to the end of the disk, goes only to the last request to be serviced in front of
the head and then reverses its direction from there. Thus it prevents the extra delay caused by
unnecessary traversal to the end of the disk. Example: Suppose the requests to be addressed are
82, 170, 43, 140, 24, 16, 190, the Read/Write arm is at 50, and it is also given that the disk arm should
move “towards the larger value”. So, the seek time is calculated as: (190-50) + (190-16) = 314

6. C-LOOK :- As LOOK is similar to the SCAN algorithm, in a similar way, C-LOOK is similar to the C-SCAN
disk scheduling algorithm. In C-LOOK, the disk arm, instead of going to the end, goes only to the last
request to be serviced in front of the head and then from there goes to the other end’s last request.
Thus, it also prevents the extra delay caused by unnecessary traversal to the end of the disk.
Example: Suppose the requests to be addressed are 82, 170, 43, 140, 24, 16, 190, the Read/Write arm
is at 50, and it is also given that the disk arm should move “towards the larger value”. So, the seek
time is calculated as: (190-50) + (190-16) + (43-16) = 341
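The seek-time figures computed in this experiment can be reproduced with the short sketch below for FCFS and SSTF, using the same request queue (82, 170, 43, 140, 24, 16, 190) and head position 50; the function names are assumptions of this sketch.

```python
# Disk scheduling sketch: total head movement for FCFS and SSTF.

def fcfs_seek(requests, head):
    total = 0
    for r in requests:
        total += abs(r - head)       # move directly to the next queued request
        head = r
    return total

def sstf_seek(requests, head):
    pending, total = list(requests), 0
    while pending:
        nearest = min(pending, key=lambda r: abs(r - head))  # closest request
        total += abs(nearest - head)
        head = nearest
        pending.remove(nearest)
    return total

queue = [82, 170, 43, 140, 24, 16, 190]
print(fcfs_seek(queue, head=50))   # 642, as in the FCFS example
print(sstf_seek(queue, head=50))   # 208, as in the SSTF example
```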


Experiment 10

AIM :- To study segmentation.

THEORY :- A process is divided into segments. The chunks that a program is divided into, which are
not necessarily all of the same size, are called segments. Segmentation gives the user's view of the
process, which paging does not give. Here the user's view is mapped to physical memory. There are
two types of segmentation:

1. Virtual memory segmentation – Each process is divided into a number of segments, not all
of which are resident at any one point in time.

2. Simple segmentation – Each process is divided into a number of segments, all of which are
loaded into memory at run time, though not necessarily contiguously.

There is no simple relationship between logical addresses and physical addresses in segmentation. A
table stores the information about all such segments and is called the Segment Table.

Segment Table – It maps a two-dimensional logical address into a one-dimensional physical address.
Each table entry has:

• Base Address: It contains the starting physical address where the segment resides in memory.

• Limit: It specifies the length of the segment.

The address generated by the CPU is divided into:

• Segment number (s): Number of bits required to represent the segment.

• Segment offset (d): Number of bits required to represent the size of the segment.

Advantages of Segmentation –

• No internal fragmentation.

• The segment table consumes less space in comparison to the page table in paging.
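A minimal sketch of the base/limit translation performed through a segment table is given below. The segment table contents and the function name are assumed example values for illustration.

```python
# Segmentation sketch: (segment number, offset) -> physical address,
# with the limit check described above. Table values are illustrative.

segment_table = {
    0: {"base": 1400, "limit": 1000},
    1: {"base": 6300, "limit": 400},
    2: {"base": 4300, "limit": 1100},
}

def translate(segment, offset):
    entry = segment_table[segment]
    if offset >= entry["limit"]:
        raise MemoryError("segmentation fault: offset outside the segment")
    return entry["base"] + offset        # physical address = base + offset

print(translate(1, 53))    # 6300 + 53 = 6353
try:
    translate(2, 1200)     # offset 1200 exceeds the limit of 1100
except MemoryError as e:
    print(e)
```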

