EVEREST INNOVATIVE
COLLEGE
SOLTIMODE, KATHMANDU
NEPAL
LAB REPORTS ON
OPERATING SYSTEM
Date:- …………………….. Total No of Experiments:- 08
Year/Part: II/II
TRIBHUVAN UNIVERSITY
BACKGROUND THEORY:
VMware: VMware provides cloud computing and virtualization software and services. It presents a
completely virtualized set of hardware to the guest operating system: VMware software virtualizes
the hardware for a video adapter, a network adapter, and hard disk adapters, while the host
provides pass-through drivers for guest USB, serial, and parallel devices.
Linux Kernel: It is a free, open-source, monolithic, modular and multitasking operating system
kernel. It resembles the UNIX system.
RHEL: RHEL is an acronym for Red Hat Enterprise Linux. It is a Linux distribution developed
by Red Hat for the commercial market. RHEL is open source: its source code can be viewed,
downloaded, and used to build customized versions. Some notable Linux distros derived from
RHEL include CentOS, Oracle Enterprise Linux, Scientific Linux, and Pie Box Enterprise Linux.
BACKGROUND THEORY:
Thread: A thread is a single sequence stream within a process. Because threads have some
of the properties of processes, they are sometimes called lightweight processes.
Multithreading: Threads are a popular way to improve application performance through parallelism.
For example, in a browser, different tabs can run as different threads. MS Word uses multiple
threads: one thread to format the text, another to process input, and so on.
Threads operate faster than processes for the following reasons:
1) Thread creation is much faster.
2) Context switching between threads is much faster.
3) Threads can be terminated more easily.
4) Communication between threads is faster.
Pthreads: The POSIX thread libraries are a standards-based thread API for C/C++. They allow
one to spawn a new concurrent flow of control. Threads are most effective on multi-processor or
multi-core systems, where a flow can be scheduled to run on another processor, gaining speed
through parallel or distributed processing. Threads require less overhead than "forking" or
spawning a new process, because the system does not initialize a new virtual memory space and
environment for them.
CONCLUSION
Hence, we created the multithreaded process.
LAB NO: 3
TITLE: “PROGRAM TO IMPLEMENT IPC MECHANISM USING
SHARED MEMORY & MESSAGE PASSING”
OBJECTIVE: To implement IPC mechanism using shared memory and message passing
BACKGROUND THEORY:
Inter-process communication (IPC) is a mechanism that allows processes to communicate with each other
and synchronize their actions. The communication between these processes can be seen as a method of
co-operation between them. Processes can communicate with each other through:
1) Shared memory
2) Message passing
CONCLUSION
Hence, we implemented IPC mechanism using shared memory and message passing.
Subject: Operating System
LAB REPORT ON
“PROGRAM TO SIMULATE PRODUCER CONSUMER PROBLEM
USING SEMAPHORE”
Date:-………………… Experiment No:- 04
Submitted By: Submitted To:
Name: Apeksha Kafle Department of BCA
Roll No: 1 Er. Saurabh Karn (Lecturer/Supervisor)
Faculty: Humanities
Year/Part: II/II
LAB NO: 4
BACKGROUND THEORY:
The producer-consumer problem is an example of a multi-process synchronization problem. The
problem describes two processes, the producer and the consumer, which share a common fixed-
size buffer used as a queue.
The producer’s job is to generate data, put it into the buffer, and start again. At the same time,
the consumer is consuming the data (i.e., removing it from the buffer), one piece at a time.
Problem: Given the common fixed-size buffer, the task is to make sure that the producer can’t
add data into the buffer when it is full and the consumer can’t remove data from an empty buffer.
Solution: The producer is to either go to sleep or discard data if the buffer is full. The next time
the consumer removes an item from the buffer, it notifies the producer, who starts to fill the
buffer again. In the same manner, the consumer can go to sleep if it finds the buffer to be empty.
The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
DISCUSSION
We simulated the producer-consumer problem and observed that it is a multi-process
synchronization problem: the producer produces items and inserts them into the buffer, while the
consumer removes items from the buffer and consumes them. We used semaphores to solve the
producer-consumer problem.
CONCLUSION
Hence, we simulated producer consumer problem using semaphore.
OBJECTIVE: To simulate and find the average turnaround time and waiting time for preemptive
and non-preemptive scheduling algorithms: FCFS, SJF, Priority and Round Robin
BACKGROUND THEORY:
Turnaround Time: It is the time interval between the submission of a process and its
completion.
Waiting Time: It is the difference between turnaround time and burst time.
First Come, First Served (FCFS): Also known as First In, First Out (FIFO), it is the CPU
scheduling algorithm in which the CPU is allocated to processes in the order they are queued
in the ready queue.
Shortest Job First (SJF): It is an algorithm in which the process having the smallest execution
time is chosen for the next execution.
Priority Scheduling: It is a method of scheduling processes that is based on priority. In this
algorithm, the scheduler selects the tasks to work as per the priority. The processes with
higher priority should be carried out first, whereas jobs with equal priorities are carried out on a
round-robin or FCFS basis.
Round Robin: It is a CPU scheduling algorithm designed especially for time-sharing systems. It
resembles the FCFS scheduling algorithm with one change: in Round Robin, each process is
bounded by a time quantum.
OBSERVATIONS AND FINDINGS
FCFS
Source code
Output
SJF
Source code
Output
Round Robin
Source code
Output
Priority
Source code
Output
DISCUSSION
We computed the turnaround time and waiting time for each process. We also discussed the FCFS,
SJF, Priority and Round Robin methods for both preemptive and non-preemptive scheduling.
CONCLUSION
Hence, we simulated and found the average turnaround time and waiting time for preemptive and
non-preemptive scheduling algorithms: FCFS, SJF, Priority and Round Robin.
OBJECTIVES:
To simulate contiguous memory allocation techniques: Worst Fit, Best Fit and First Fit
BACKGROUND THEORY:
Contiguous memory allocation: It is a method in which a single contiguous section of memory is
allocated to a process or file that needs it. We can implement contiguous memory allocation by
dividing memory into fixed-size partitions.
Worst fit: It allocates a process to the largest sufficient partition among the free partitions
in main memory. If a large process arrives at a later stage, memory may not have space to
accommodate it.
Best fit: It allocates the process to a partition which is the smallest sufficient partition among the
free available partitions.
First fit: The first sufficient partition from the top of main memory is allocated.
Worst Fit
Source code
Output
LAB NO: 7
TITLE: “PROGRAM TO SIMULATE PAGE REPLACEMENT ALGORITHMS: FIFO,
LRU, & LFU”
OBJECTIVES:
To simulate page replacement algorithms: FIFO, LRU and LFU
BACKGROUND THEORY:
Page replacement algorithm: It is needed to decide which page should be replaced when a new
page comes in.
FIFO: In this algorithm, the operating system keeps track of all pages in memory in a queue,
with the oldest page at the front. When a page needs to be replaced, the page at the front of
the queue is selected for removal.
LRU: In LRU, whenever page replacement happens, the page which has not been used for the
longest amount of time is replaced.
LFU: It is a caching algorithm in which the least frequently used cache block is removed
whenever the cache overflows.
CONCLUSION: Hence, we simulated page replacement algorithms: FIFO, LRU and LFU.
OBJECTIVES:
To simulate disk scheduling algorithms: FCFS, SCAN and C-SCAN
BACKGROUND THEORY:
Disk Scheduling Algorithms: Disk scheduling is done by operating systems to
schedule I/O requests arriving for the disk. Disk scheduling is also known as I/O
scheduling.
FCFS: The simplest form of disk scheduling is, of course, the first-come, first-served (FCFS)
algorithm. This algorithm is intrinsically fair, but it generally does not provide the fastest service.
SCAN: In the SCAN algorithm, the disk arm starts at one end of the disk and moves toward the
other end, servicing requests as it reaches each cylinder, until it gets to the other end of the disk.
At the other end, the direction of head movement is reversed, and servicing continues. The head
continuously scans back and forth across the disk. The SCAN algorithm is sometimes called the
elevator algorithm, since the disk arm behaves just like an elevator in a building, first servicing
all the requests going up and then reversing to service requests the other way.