OS
Deadlock is a situation where each process in a set waits for a resource that is
assigned to another process in the same set. None of the processes can proceed,
since the resource each one needs is held by another process that is itself waiting
for a resource to be released.
Let us assume that there are three processes P1, P2, and P3, and three different
resources R1, R2, and R3. R1 is assigned to P1, R2 is assigned to P2, and R3 is assigned to P3.
After some time, P1 demands R2, which is held by P2. P1 halts its execution since it
can't continue without R2. P2 then demands R3, which is held by P3, so P2 also stops
its execution. Finally, P3 demands R1, which is held by P1, so P3 stops as well.
All three processes are now blocked in a cycle.
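The circular wait in this example can be modelled as a wait-for graph, where an edge A -> B means "process A waits for a resource held by process B". A minimal sketch (the `has_cycle` helper is illustrative, not a real OS API) detects the deadlock:

```python
# Model the P1/P2/P3 example as a wait-for graph and detect the cycle.
def has_cycle(graph):
    """Detect a cycle in a directed graph via depth-first search.
    Assumes every process appears as a key in `graph`."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def visit(node):
        color[node] = GRAY
        for neighbour in graph.get(node, ()):
            if color[neighbour] == GRAY:      # back edge: cycle found
                return True
            if color[neighbour] == WHITE and visit(neighbour):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and visit(n) for n in graph)

# P1 waits for P2 (needs R2), P2 waits for P3 (needs R3), P3 waits for P1 (needs R1).
wait_for = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}
deadlocked = has_cycle(wait_for)
```

A cycle in this graph is exactly the circular-wait condition described below; breaking any one edge (e.g. by preemption) breaks the deadlock.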
Conditions
1. Mutual Exclusion
A resource can only be used in a mutually exclusive manner: two processes cannot
use the same resource at the same time.
2. Hold and Wait
A process waits for some resources while holding another resource at the same time.
3. No Preemption
A resource cannot be forcibly taken away from the process that holds it; the process
must release the resource voluntarily.
4. Circular Wait
All the processes must be waiting for the resources in a cyclic manner so that the last
process is waiting for the resource which is being held by the first process.
Prevention:
Avoid Circular Wait: Assign a specific order to resources, and processes must request them
in that order.
No Hold and Wait: A process must request all required resources at once or release them
and try again later.
Allow Preemption: If a process is stuck, the OS can take away its resources and give them to
others.
Deadlock Detection: Periodically check if deadlock happens and take action, like aborting
processes or preempting resources.
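The "avoid circular wait" strategy above can be sketched with threads and locks: if every thread acquires locks in one fixed global order, no cycle of waiting can form. This is a minimal illustration (the helper names are my own, not a standard API):

```python
import threading

# Avoid circular wait: always acquire locks in a fixed global order,
# regardless of the order the caller passes them in.
lock_a = threading.Lock()
lock_b = threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}  # the global resource ordering

def acquire_in_order(*locks):
    """Acquire all locks sorted by the global order, so no cycle can form."""
    for lock in sorted(locks, key=lambda l: ORDER[id(l)]):
        lock.acquire()

def release_all(*locks):
    for lock in locks:
        lock.release()

results = []

def worker(name, first, second):
    # T1 and T2 pass the locks in opposite orders -- the classic deadlock
    # setup -- but acquire_in_order normalizes the order, so both finish.
    acquire_in_order(first, second)
    results.append(name)
    release_all(first, second)

t1 = threading.Thread(target=worker, args=("T1", lock_a, lock_b))
t2 = threading.Thread(target=worker, args=("T2", lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
```

Without the ordering, T1 holding lock_a while waiting for lock_b and T2 holding lock_b while waiting for lock_a would deadlock.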
Starvation is a situation where a low-priority process remains blocked indefinitely
because high-priority processes keep getting the resources first.
Prevention:
Fair Resource Allocation: Use scheduling algorithms like Round Robin that ensure every
process gets a fair share of resources.
Aging: Increase the priority of processes waiting too long, so they eventually get resources.
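Aging can be sketched with a toy scheduler: each tick, the highest-priority waiting job runs, and every job left waiting gains priority. The scheduler below is an illustration I made up for this note, not a real OS algorithm:

```python
def schedule(initial, arrivals, aging_step=1, ticks=8):
    """Toy aging scheduler.
    initial:  {name: priority} of jobs waiting at tick 0 (higher runs first).
    arrivals: {tick: (name, priority)} of jobs arriving later.
    Each tick: admit arrivals, run the highest-priority job once,
    then age (bump the priority of) everything still waiting."""
    waiting = dict(initial)
    order = []
    for tick in range(ticks):
        if tick in arrivals:
            name, prio = arrivals[tick]
            waiting[name] = prio
        if not waiting:
            continue
        chosen = max(waiting, key=waiting.get)   # highest priority runs
        order.append(chosen)
        del waiting[chosen]
        for name in waiting:                     # aging: waiters gain priority
            waiting[name] += aging_step
    return order

# "low" starts at priority 1 while a priority-5 job arrives every tick.
# Without aging "low" would starve forever; with aging it runs by tick 4.
arrivals = {t: (f"hi{t}", 5) for t in range(8)}
order = schedule({"low": 1}, arrivals)
```

Each tick of waiting raises "low" by one, so after four ticks it matches the newcomers' priority and finally gets scheduled.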
A race condition occurs when two or more processes or threads try to access and
modify shared data at the same time, leading to unpredictable results.
For example, if two threads update a shared counter simultaneously without proper
synchronization, the final value can be incorrect.
Race conditions are prevented using tools like mutexes or semaphores to control access to
shared resources.
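The shared-counter case can be sketched with Python's `threading.Lock` as the mutex. Without the lock, the read-modify-write in `count += 1` could interleave between threads and lose updates; with it, every increment is applied exactly once:

```python
import threading

# A shared counter protected by a mutex (threading.Lock).
count = 0
lock = threading.Lock()

def increment(times):
    global count
    for _ in range(times):
        with lock:        # only one thread may execute this block at a time
            count += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# count is now exactly 40_000: 4 threads x 10_000 increments, none lost
```

A `threading.Semaphore(1)` would work the same way here; a mutex is simply the one-permit special case.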
Example of a Race Condition:
Imagine a ticket booking system:
1. Two users (threads) try to book the last available ticket simultaneously.
2. Both check that 1 ticket is available at the same time.
3. Both proceed to book it.
Result: The system ends up allocating the same ticket to both users, leading to incorrect
behavior.
Kernel:
The kernel is the core part of an operating system that manages system resources like
memory, CPU, and input/output devices. It acts as a bridge between hardware and software.
Other parts of the operating system, such as the shell and system libraries, run in user space.
Monolithic Kernel:
All system services (like memory management, process scheduling, and device
drivers) run in the kernel space.
Pros: Faster execution since all services are in one place.
Cons: More complex and difficult to maintain, as a bug in any part can crash the
entire system.
Microkernel:
Only essential services (like communication, basic memory management) run in the
kernel space. Other services run in user space.
Pros: More modular and easier to maintain. If one service crashes, it doesn't affect
the entire system.
Cons: Slower execution due to more context switching between kernel and user
space.
Key Difference:
Monolithic Kernel: All services in kernel space.
Microkernel: Only basic services in kernel space, others in user space.
An interrupt is a signal sent to the CPU to get its attention and stop its current task. When the
CPU receives an interrupt, it temporarily pauses its current task, saves its progress, and
starts executing a special function called an interrupt handler to address the event or
condition that caused the interrupt (like a user pressing a key or a hardware device needing
attention).
After handling the interrupt, the CPU resumes its previous task. Interrupts allow the system
to respond to important events without waiting for the current task to finish. An example of
an interrupt is when you click the mouse. The mouse sends an interrupt to the CPU, which
stops its current task to process the click and perform the appropriate action (like opening a
program or selecting something). Then, the CPU goes back to its previous task. There are two
main types of interrupts:
1. Hardware Interrupts: Generated by hardware devices (like keyboard, mouse, or timer) to
get the CPU's attention. For example, pressing a key on the keyboard or a mouse click
triggers a hardware interrupt.
2. Software Interrupts: Generated by programs or software. These are used to request
system services from the operating system, like when a program needs to read or write a
file.
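The interrupt pattern — register a handler, get interrupted, handle it, resume — can be sketched in user space with Unix signals, which are the closest user-level analogue (this is an analogy, not a hardware interrupt; it assumes a Unix-like system with SIGUSR1):

```python
import signal

# A signal handler behaves like an interrupt handler: the normal flow of the
# program is paused, the handler runs, and then execution resumes.
events = []

def handler(signum, frame):
    # Invoked asynchronously when the signal is delivered, then returns.
    events.append(signum)

signal.signal(signal.SIGUSR1, handler)   # register the "interrupt handler"
events.append("before")
signal.raise_signal(signal.SIGUSR1)      # deliver the signal to this process
events.append("after")
```

After the handler returns, the program continues exactly where it left off, just as the CPU resumes its previous task after servicing an interrupt.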
Demand paging is a memory management scheme where pages of a process are only
loaded into physical memory when they are needed, or "demanded," by the process, rather
than being loaded all at once at the start.
In simple words, during demand paging, the operating system does not load the entire
program into memory at the beginning. It loads pages one by one as the program requests
them. If a page is not in memory, a page fault occurs, and the required page is loaded from
disk into memory.
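The page-fault path described above can be sketched as a toy simulation: pages stay "on disk" until first accessed, a fault loads them into a bounded set of frames, and a full memory evicts the oldest page (FIFO here, purely for illustration — real systems use smarter replacement policies):

```python
from collections import OrderedDict

class DemandPager:
    """Toy demand pager: pages are loaded only on first access (a page fault)."""
    def __init__(self, frames):
        self.frames = frames            # number of physical frames available
        self.memory = OrderedDict()     # page number -> contents, in load order
        self.page_faults = 0

    def access(self, page):
        if page not in self.memory:
            self.page_faults += 1       # page fault: fetch the page from "disk"
            if len(self.memory) >= self.frames:
                self.memory.popitem(last=False)   # evict the oldest page (FIFO)
            self.memory[page] = f"contents of page {page}"
        return self.memory[page]

pager = DemandPager(frames=2)
for page in [0, 1, 0, 2, 0, 1]:
    pager.access(page)
# With only 2 frames, this reference string causes 5 page faults.
```

Only the pages actually touched ever occupy a frame, which is exactly the memory saving demand paging provides.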
This helps in saving memory, as only the necessary pages are kept in RAM, and the rest are
stored on the disk.
In Practice:
Many modern operating systems use a combination of both paging and segmentation,
taking advantage of each technique's strengths. For example, a system might use paging for
physical memory management but use segmentation for organizing the virtual address
space of processes.
A system call is made by a program to request services from the operating system. When a
program needs to perform tasks like reading files, accessing hardware, or managing memory,
it cannot do it directly. Instead, it makes a system call to the OS, which then performs the
requested action on behalf of the program.
System calls provide a controlled way for programs to interact with the underlying system
resources.
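In Python, the `os` module exposes thin wrappers over these system calls: `os.open`, `os.write`, `os.read`, and `os.close` map onto the kernel's open, write, read, and close calls. A minimal file round-trip looks like this:

```python
import os
import tempfile

# The program never touches the disk directly; each call below asks the
# kernel to do the work on the program's behalf.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

fd = os.open(path, os.O_CREAT | os.O_WRONLY)   # open system call
os.write(fd, b"hello via syscalls")            # write system call
os.close(fd)                                   # close system call

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 100)                        # read system call
os.close(fd)
```

Higher-level APIs like Python's built-in `open()` ultimately funnel through the same system calls; they just add buffering and convenience on top.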
Multithreading is a technique where a single program is divided into smaller parts called
threads, which can run simultaneously. Each thread represents a separate task within the
program.
In multithreading, multiple threads of the same process share the same resources (like
memory), but they can be executed independently, improving the program's efficiency,
especially on multi-core CPUs.
For example, a web browser might use one thread to load a webpage and another to handle
user input, making the program more responsive.
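The browser scenario can be sketched with two threads of one process sharing memory (the `log` list below), one simulating a slow page load and the other handling input immediately (the sleep stands in for network latency; the names are illustrative):

```python
import threading
import time

# Two threads of the same process share the `log` list but run independently.
log = []
log_lock = threading.Lock()

def load_page():
    time.sleep(0.05)                  # simulate a slow network fetch
    with log_lock:
        log.append("page loaded")

def handle_input():
    with log_lock:
        log.append("click handled")   # typically finishes while the fetch is pending

loader = threading.Thread(target=load_page)
ui = threading.Thread(target=handle_input)
loader.start(); ui.start()
loader.join(); ui.join()
```

Because the input thread is not blocked behind the fetch, the "program" stays responsive — the core benefit multithreading gives interactive applications.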