
Operating System

Deadlock is a situation where each process waits for a resource that is assigned to another
process. None of the processes can proceed, because the resource each one needs is held by
some other process that is itself waiting for a further resource to be released.
Let us assume that there are three processes P1, P2 and P3, and three different resources
R1, R2 and R3. R1 is assigned to P1, R2 is assigned to P2 and R3 is assigned to P3.
After some time, P1 demands R2, which is being used by P2. P1 halts its execution since it
can't complete without R2. P2 in turn demands R3, which is being used by P3, so P2 also
stops its execution because it can't continue without R3. Finally, P3 demands R1, which is
being held by P1, so P3 also stops. The three processes now wait on each other in a cycle.

Conditions
Deadlock can arise only if all four of the following conditions hold at the same time.
1. Mutual Exclusion
A resource can be used in a mutually exclusive manner only: two processes cannot use the
same resource at the same time.
2. Hold and Wait
A process holds at least one resource while waiting for additional resources that are held
by other processes.
3. No Preemption
A resource cannot be forcibly taken away from the process holding it; the process releases
it voluntarily once it has finished with it.
4. Circular Wait
The processes wait for resources in a circular chain, so that each process holds a resource
the next one needs and the last process is waiting for a resource held by the first.

Prevention:
Avoid Circular Wait: Assign a fixed global order to resources, and require processes to
request them in that order (see the sketch after this list).
No Hold and Wait: A process must request all required resources at once, or release
everything it holds and try again later.
Allow Preemption: If a process is stuck, the OS can take its resources away and give them to
others.
Deadlock Detection: Periodically check whether a deadlock has occurred and take action,
such as aborting processes or preempting resources.
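
As an illustration of the circular-wait rule, here is a minimal C sketch using POSIX threads
(compile with gcc -pthread; the two locks and the worker function are illustrative
assumptions, not anything from the text above). Every thread acquires lock_a before lock_b,
so no cycle of waiting threads can form:

    #include <pthread.h>
    #include <stdio.h>

    /* Two shared resources, always locked in the same global order. */
    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg) {
        pthread_mutex_lock(&lock_a);   /* always first  */
        pthread_mutex_lock(&lock_b);   /* always second -- never the reverse */
        printf("thread %ld holds both resources\n", (long)arg);
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1);
        pthread_create(&t2, NULL, worker, (void *)2);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }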
Starvation is a situation where a low-priority process is blocked indefinitely while
high-priority processes keep proceeding.
Prevention:
Fair Resource Allocation: Use scheduling algorithms like Round Robin that ensure every
process gets a fair share of resources.
Aging: Increase the priority of processes waiting too long, so they eventually get resources.

A race condition occurs when two or more processes or threads try to access and modify
shared data at the same time, leading to unpredictable results.
For example, if two threads update a shared counter simultaneously without proper
synchronization, the final value can be incorrect.
Race conditions are prevented using tools like mutexes or semaphores to control access to
shared resources.
Example of a Race Condition:
Imagine a ticket booking system:
1. Two users (threads) try to book the last available ticket simultaneously.
2. Both check that 1 ticket is available at the same time.
3. Both proceed to book it.
Result: The system ends up allocating the same ticket to both users, leading to incorrect
behavior.
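
A minimal C sketch of the shared-counter scenario mentioned above (compile with
gcc -pthread; the variable names and iteration counts are illustrative). Two threads
increment the same counter with no synchronization, so updates are lost and the final value
is usually below the expected 200000:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;          /* shared data, no protection */

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++)
            counter++;         /* read-modify-write: not atomic */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Expected 200000, but lost updates usually make it smaller. */
        printf("counter = %ld\n", counter);
        return 0;
    }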

Synchronization in operating systems ensures that multiple processes or threads can work
together without conflicts, especially when accessing shared resources or data. It helps
maintain consistency and prevents issues like race conditions.
Key Points:
1. It ensures orderly execution of threads or processes.
2. It avoids data inconsistency by controlling access to shared resources.
3. Tools like mutexes, semaphores, and monitors are used for synchronization.
Mutex
Mutex is a mutual exclusion object that synchronizes access to a resource. It is created
with a unique name at the start of a program. The mutex locking mechanism ensures that only
one thread at a time can acquire the mutex and enter the critical section. That thread
releases the mutex only when it exits the critical section.
A thread locks the mutex before accessing a shared resource.
While the mutex is locked, other threads trying to access the resource must wait.
Once the thread finishes, it unlocks the mutex, allowing the next thread to proceed.
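
Continuing the counter sketch from the race-condition section, here is the same program with
a pthread mutex guarding the critical section (again an illustrative sketch using standard
POSIX calls). Each thread locks before the update and unlocks afterwards, so the final value
is always 200000:

    #include <pthread.h>
    #include <stdio.h>

    long counter = 0;
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *increment(void *arg) {
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);   /* always 200000 */
        return 0;
    }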

Semaphore is a counter used to manage access to shared resources by multiple threads.
 Threads decrease the semaphore when they use a resource and increase it when
they are done.
 If the semaphore is 0, threads must wait until it's greater than 0.
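
A small POSIX-semaphore sketch of this counting behavior (Linux-style sem_init; the pool
size of 3 and thread count of 5 are illustrative assumptions). The semaphore starts at the
number of available resources; sem_wait decrements it before use and blocks at 0, and
sem_post increments it when the thread is done:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    sem_t slots;   /* counts free resources in the pool */

    void *use_resource(void *arg) {
        sem_wait(&slots);     /* take a slot; blocks while the count is 0 */
        printf("thread %ld using a resource\n", (long)arg);
        sem_post(&slots);     /* return the slot */
        return NULL;
    }

    int main(void) {
        pthread_t t[5];
        sem_init(&slots, 0, 3);   /* pool of 3 identical resources */
        for (long i = 0; i < 5; i++)
            pthread_create(&t[i], NULL, use_resource, (void *)i);
        for (int i = 0; i < 5; i++)
            pthread_join(t[i], NULL);
        sem_destroy(&slots);
        return 0;
    }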

Kernel:
The kernel is the core part of an operating system that manages system resources like
memory, CPU, and input/output devices. It acts as a bridge between hardware and software.
The other parts of the operating system run in user space, such as the shell and system
libraries.

Monolithic Kernel:
 All system services (like memory management, process scheduling, and device
drivers) run in the kernel space.
 Pros: Faster execution since all services are in one place.
 Cons: More complex and difficult to maintain, as a bug in any part can crash the
entire system.

Microkernel:
 Only essential services (like communication, basic memory management) run in the
kernel space. Other services run in user space.
 Pros: More modular and easier to maintain. If one service crashes, it doesn't affect
the entire system.
 Cons: Slower execution due to more context switching between kernel and user
space.
Key Difference:
 Monolithic Kernel: All services in kernel space.
 Microkernel: Only basic services in kernel space, others in user space.

Interrupt is a signal sent to the CPU to get its attention and stop its current task. When the
CPU receives an interrupt, it temporarily pauses its current task, saves its progress, and
starts executing a special function called an interrupt handler to address the event or
condition that caused the interrupt (like a user pressing a key or a hardware device needing
attention).
After handling the interrupt, the CPU resumes its previous task. Interrupts allow the system
to respond to important events without waiting for the current task to finish. An example of
an interrupt is when you click the mouse. The mouse sends an interrupt to the CPU, which
stops its current task to process the click and perform the appropriate action (like opening a
program or selecting something). Then, the CPU goes back to its previous task. There are two
main types of interrupts:
1. Hardware Interrupts: Generated by hardware devices (like keyboard, mouse, or timer) to
get the CPU's attention. For example, pressing a key on the keyboard or a mouse click
triggers a hardware interrupt.
2. Software Interrupts: Generated by programs or software. These are used to request
system services from the operating system, like when a program needs to read or write a
file.

Demand paging is a memory management scheme where pages of a process are only
loaded into physical memory when they are needed, or "demanded," by the process, rather
than being loaded all at once at the start.
In simple words, during demand paging, the operating system does not load the entire
program into memory at the beginning. It loads pages one by one as the program requests
them. If a page is not in memory, a page fault occurs, and the required page is loaded from
disk into memory.

Paging and segmentation


Paging is better when you want a simple, efficient memory management system with fixed-
size memory blocks that avoids external fragmentation. It works well for systems where the
size of processes and their components is unpredictable.
Segmentation is better when you want to organize memory according to the program's
logical structure (code, data, stack). It is useful for systems where logical division of memory
is important, but it requires more complex memory management.

In Practice:
Many modern operating systems use a combination of both paging and segmentation,
taking advantage of each technique's strengths. For example, a system might use paging for
physical memory management but use segmentation for organizing the virtual address
space of processes.
Combined with demand paging, this saves memory, as only the necessary pages are kept in
RAM while the rest are stored on the disk.

A system call is made by a program to request services from the operating system. When a
program needs to perform tasks like reading files, accessing hardware, or managing memory,
it cannot do it directly. Instead, it makes a system call to the OS, which then performs the
requested action on behalf of the program.
System calls provide a controlled way for programs to interact with the underlying system
resources.
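
A tiny C illustration, assuming a POSIX system: the program cannot write to the terminal
hardware directly, so it invokes the write system call (here via the standard unistd.h
wrapper) and the kernel performs the output on its behalf:

    #include <unistd.h>   /* write(): wrapper around the OS system call */

    int main(void) {
        /* Ask the kernel to write these bytes to standard output. */
        const char msg[] = "hello from a system call\n";
        write(STDOUT_FILENO, msg, sizeof(msg) - 1);
        return 0;
    }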

Multithreading is a technique where a single program is divided into smaller parts called
threads, which can run simultaneously. Each thread represents a separate task within the
program.
In multithreading, multiple threads of the same process share the same resources (like
memory), but they can be executed independently, improving the program's efficiency,
especially on multi-core CPUs.
For example, a web browser might use one thread to load a webpage and another to handle
user input, making the program more responsive.
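
A minimal pthread sketch of this idea (the two task functions are illustrative stand-ins for
the browser's page-loading and input-handling work). Both threads belong to the same process
and share its memory, but execute independently:

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-ins for two independent tasks within one program. */
    void *load_page(void *arg)    { printf("loading page...\n");   return NULL; }
    void *handle_input(void *arg) { printf("handling input...\n"); return NULL; }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, load_page, NULL);     /* task 1 */
        pthread_create(&t2, NULL, handle_input, NULL);  /* task 2 */
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }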

Multiprocessing and Multitasking are not the same.


Multitasking allows a single CPU to switch between tasks quickly, creating the illusion of
simultaneous execution, but it still runs only one process at a time on a single CPU core.
Multiprocessing, on the other hand, uses multiple CPUs or cores, where each processor
works on a separate task at the same time, enabling true parallel execution.
While multitasking can improve system efficiency on a single CPU by rapidly switching
between tasks, multiprocessing achieves true simultaneous execution, as each CPU or core
can run a different process independently.
Disk Scheduling:
Disk scheduling is done by operating systems to schedule the I/O requests arriving for the
disk. Disk scheduling is also known as I/O scheduling.
1. Seek Time: The time taken to move the disk arm to the track where the data is to be read
or written.
2. Rotational Latency: The time taken by the desired sector of the disk to rotate into
position under the read/write head.
3. Transfer Time: The time to transfer the data. It depends on the rotational speed of the
disk and the number of bytes to be transferred.
4. Disk Access Time: Seek Time + Rotational Latency + Transfer Time
5. Disk Response Time: The average time a request spends waiting to perform its I/O
operation. The average response time is the mean of the response times of all requests.
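For example, assuming an average seek time of 5 ms, rotational latency of 4 ms, and transfer
time of 1 ms (illustrative figures), the disk access time for a request would be
5 + 4 + 1 = 10 ms.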
