Unit 3 Mid 2
1. Kernel-Level Processes
Kernel-level processes (or threads) are managed by the operating system's kernel, meaning the kernel is responsible for their scheduling, synchronization, and process management.
Advantages:
Better Blocking Handling: If one thread blocks (e.g., on I/O), the kernel can still
schedule other threads within the same process. This improves overall responsiveness
and efficiency.
Access to System Resources: Kernel-level threads have direct access to system calls
and resources, making them more capable of performing low-level tasks.
Automatic Load Balancing: The kernel can distribute threads across multiple CPUs,
providing better automatic load balancing.
Preemptive Multitasking: The kernel can preempt tasks when necessary, offering
better control over process scheduling.
Disadvantages:
Higher Overhead: The kernel needs to manage and schedule threads, which adds
overhead in context switching, kernel data structures, and kernel calls.
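A minimal sketch of the blocking advantage, assuming a Linux/POSIX system where pthreads are implemented as kernel-level threads (the name blocking_worker is illustrative): one thread blocks in sleep(), standing in for a blocking I/O call, while the kernel keeps scheduling the other thread.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Thread that blocks (stands in for a blocking I/O call). */
static void *blocking_worker(void *arg) {
    (void)arg;
    printf("worker: blocking for 2 seconds\n");
    sleep(2);                      /* the kernel parks only this thread */
    printf("worker: done\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, blocking_worker, NULL);

    /* Main thread keeps making progress while the worker is blocked. */
    for (int i = 0; i < 3; i++) {
        printf("main: still running (%d)\n", i);
        sleep(1);
    }
    pthread_join(tid, NULL);       /* wait for the worker to finish */
    return 0;
}

Compiled with cc -pthread, the main thread's messages keep appearing while the worker is blocked, which is exactly the behaviour a purely user-level scheduler cannot guarantee.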
2. User-Level Processes
User-level processes (or threads) are managed entirely by user-space libraries, without
kernel intervention in scheduling or synchronization.
Advantages:
Lower Overhead: Since user-level processes do not require kernel intervention for
context switching, the overhead is significantly reduced. Switching between threads is
faster since it does not involve kernel mode transitions.
Faster Creation and Destruction: User-level threads are generally faster to create and
destroy because they don't need kernel involvement.
Disadvantages:
Blocking Issues: If one user-level thread makes a blocking system call (e.g., for I/O), it
may block the entire process, as the kernel does not differentiate between the threads.
This can degrade performance.
No True Parallelism: In many cases, all user-level threads are executed on a single
kernel-level thread, meaning that only one thread can run at a time. This prevents true
parallelism on multicore systems.
Manual Yielding and Scheduling: User-level threads rely on the user space for
scheduling, which means the programmer has to manually ensure fairness and
efficiency in thread execution.
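For contrast, here is a sketch of purely user-level scheduling using the POSIX ucontext API (getcontext, makecontext, swapcontext), assuming a Linux/glibc system; task_a and task_b are illustrative names. The kernel sees only one thread, so only one task runs at a time and each task must yield manually.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, ctx_a, ctx_b;
static char stack_a[64 * 1024], stack_b[64 * 1024];

static void task_a(void) {
    for (int i = 0; i < 3; i++) {
        printf("task A step %d\n", i);
        swapcontext(&ctx_a, &ctx_b);   /* manual yield to task B */
    }
}

static void task_b(void) {
    for (int i = 0; i < 3; i++) {
        printf("task B step %d\n", i);
        swapcontext(&ctx_b, &ctx_a);   /* manual yield back to task A */
    }
}

int main(void) {
    getcontext(&ctx_a);
    ctx_a.uc_stack.ss_sp = stack_a;
    ctx_a.uc_stack.ss_size = sizeof stack_a;
    ctx_a.uc_link = &main_ctx;          /* return to main when task A ends */
    makecontext(&ctx_a, task_a, 0);

    getcontext(&ctx_b);
    ctx_b.uc_stack.ss_sp = stack_b;
    ctx_b.uc_stack.ss_size = sizeof stack_b;
    ctx_b.uc_link = &main_ctx;
    makecontext(&ctx_b, task_b, 0);

    swapcontext(&main_ctx, &ctx_a);     /* all switching happens in user space */
    return 0;
}

Because no kernel scheduling is involved, the switches are cheap, but if either task made a blocking system call the whole process would stop.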
2. Resource Allocation:
The operating system (or the distributed system’s resource manager) allocates the
necessary resources for the new process. These resources typically include memory,
CPU time, I/O devices, file descriptors, and communication links.
3. PCB Creation:
The OS creates a Process Control Block (PCB), which contains all the necessary information about the new process, such as its process ID, state, priority, program counter, and resource allocation.
For distributed systems, the PCB may also include information about the node on
which the process is running and inter-process communication (IPC) links to other
processes on different nodes.
4. Context Setup:
The OS or middleware sets up the new process context, initializing registers, setting up memory, and preparing the program counter (the address of the next instruction to be executed).
5. Scheduling:
The new process is added to the system's scheduling queue, awaiting execution based on the scheduling policy.
6. Execution Begins:
Once all resources are allocated and the context is set up, the process begins
execution. In distributed systems, the process could also establish communication links
with processes on other nodes.
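A small POSIX sketch of these creation steps, assuming a Unix-like system: fork() makes the kernel allocate a PCB and resources for the child, exec() loads the program it should run, and the child then waits in the ready queue until the scheduler picks it.

#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();            /* OS allocates a PCB and resources for the child */
    if (pid < 0) {
        perror("fork");
        exit(1);
    }
    if (pid == 0) {
        /* Child: replace its image with a new program; the scheduler will run it. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");          /* reached only if exec fails */
        exit(1);
    }
    /* Parent: wait until the child terminates and its resources are released. */
    int status;
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}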
Process Life Cycle in Distributed Systems
Once a process is created, it goes through a series of states during its lifetime,
commonly referred to as the process life cycle.
1. New (Creation):
The process is being created, and the OS is setting up its initial environment (memory
allocation, creating the PCB, initializing process state). At this point, the process is not
yet ready to execute.
2. Ready:
Once the process is initialized, it enters the ready state, waiting for CPU time. It is now
in the ready queue and can be scheduled to run.
3. Running:
The process moves to the running state when it is assigned CPU time by the scheduler.
In this state, the CPU executes the instructions of the process.
In distributed systems, the process may communicate with other processes across
nodes via message passing, shared memory, or other inter-process communication
(IPC) mechanisms.
Processes in a distributed system might also execute in parallel across different nodes
if they are designed to support parallelism.
4. Waiting (Blocked):
A process enters the waiting state if it requires I/O operations, external events, or
communication with other processes. It will remain in this state until the condition is
met (e.g., an I/O operation completes or a message is received).
In distributed systems, waiting may involve waiting for messages or responses from
processes running on remote nodes.
5. Terminated (Exit):
After the process has completed execution or has been explicitly terminated, it moves
to the terminated state.
In distributed systems, termination may also involve notifying other nodes that the
process has exited, releasing distributed resources, and cleaning up the process state
across multiple nodes.
The PCB is removed, and any resources (such as memory or communication links) are
released.
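On a POSIX system this final clean-up is visible to programs: a child that has exited remains in the terminated (zombie) state, with its PCB entry retained, until the parent collects its exit status. A minimal sketch, assuming a Unix-like system:

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        _exit(0);                  /* child terminates immediately */
    }
    sleep(2);                      /* child is now a zombie: terminated, PCB still held */
    int status;
    waitpid(pid, &status, 0);      /* parent reaps it; the OS can now free the PCB */
    printf("reaped child %d\n", (int)pid);
    return 0;
}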
Need for Protection
Prevent Misuse and Damage: Protection mechanisms ensure that only authorized users or processes can access or modify certain resources. This prevents accidental or intentional damage to system resources.
Data Privacy and Confidentiality: Protection mechanisms help maintain the privacy
of user data by preventing unauthorized access to sensitive information, such as
personal files or confidential data.
Enforce Resource Allocation Policies: Protection ensures that system resources (CPU,
memory, I/O devices) are allocated fairly and that one process does not interfere with
another's resources.
System Stability and Integrity: Proper protection helps maintain the stability and
integrity of the system by preventing unauthorized modifications to critical system
files, system configurations, or OS kernel code.
Memory Protection
Memory protection ensures that one process cannot access or modify another process's memory space or the memory used by the OS itself.
Base and Limit Registers: These registers define the starting and ending addresses of
a process's memory space. The OS uses them to prevent a process from accessing
memory outside of its allocated range. Any attempt to access memory outside this
range results in an error or exception.
Segmentation: Memory is divided into logical segments (code, data, stack, etc.), and
each segment is associated with protection attributes, such as read, write, or execute
permissions. Processes are restricted to operate only within their assigned segments.
Paging: Paging divides memory into fixed-size blocks (pages), and processes are
assigned pages of memory. Each page can have its protection attributes, such as read,
write, or execute, and the OS manages which pages are accessible to each process.
This helps prevent processes from accessing each other's memory.
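A short sketch of page-level protection attributes, assuming a POSIX system with mmap() and mprotect() (MAP_ANONYMOUS is a Linux/BSD extension): a page is obtained read-write, then switched to read-only, after which any write to it would be trapped by the hardware and reported as a fault (SIGSEGV).

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    size_t page = (size_t)sysconf(_SC_PAGESIZE);

    /* Ask the OS for one page that is initially readable and writable. */
    char *buf = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(buf, "hello");                   /* allowed: page is writable */

    /* Change the page's protection attribute to read-only. */
    if (mprotect(buf, page, PROT_READ) != 0) { perror("mprotect"); return 1; }

    printf("read still works: %s\n", buf);  /* reads remain legal */
    /* buf[0] = 'H'; */                     /* a write here would now raise SIGSEGV */

    munmap(buf, page);
    return 0;
}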
CPU Protection
CPU protection ensures that processes do not monopolize the CPU or cause the
system to hang indefinitely.
Timer Interrupts: A timer can be set to trigger an interrupt after a certain time has
passed. This ensures that the OS can regain control of the CPU and prevent any process
from running indefinitely (e.g., in an infinite loop).
Preemptive Scheduling: The OS can forcibly preempt processes that have exceeded
their allotted time slice, allowing other processes to use the CPU.
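As a user-space analogy to the timer interrupt, the sketch below (assuming POSIX setitimer() and SIGALRM) arms a one-second timer that interrupts a loop which would otherwise spin forever; in the same way, the kernel's hardware timer lets the scheduler regain the CPU from a runaway process.

#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

static volatile sig_atomic_t expired = 0;

static void on_timer(int sig) {
    (void)sig;
    expired = 1;                         /* the "interrupt": the loop sees this and stops */
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_timer;
    sigaction(SIGALRM, &sa, NULL);

    struct itimerval tv = { .it_value = { .tv_sec = 1, .tv_usec = 0 } };
    setitimer(ITIMER_REAL, &tv, NULL);   /* arm a one-second, one-shot timer */

    unsigned long iterations = 0;
    while (!expired)                     /* would otherwise spin forever */
        iterations++;

    printf("loop interrupted after %lu iterations\n", iterations);
    return 0;
}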
File Protection
Access Control Lists (ACLs): ACLs specify which users or processes can access certain files and what operations (read, write, execute) they are allowed to perform. Each file or directory has an associated ACL that defines its permissions.
File Permissions: In many OSs (such as Unix/Linux), files are associated with three
types of permissions: read (r), write (w), and execute (x). These permissions are typically
divided into three categories: user (owner), group, and others, allowing different access
levels based on identity.
Encryption: Files can be encrypted to prevent unauthorized access, even if someone
manages to bypass access controls. Only users or processes with the correct
decryption key can read or modify the file.
Role-Based Access Control (RBAC): Permissions are assigned based on the roles
users have within an organization (e.g., admin, user, guest). This makes it easier to
manage permissions for large numbers of users.
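A small sketch of the Unix permission model described above, assuming a Unix-like system (the filename report.txt is illustrative): chmod() sets the user/group/other permission bits and stat() reads them back.

#include <stdio.h>
#include <sys/stat.h>

int main(void) {
    const char *path = "report.txt";   /* illustrative filename */

    /* Owner may read and write, group may read, others get no access. */
    if (chmod(path, S_IRUSR | S_IWUSR | S_IRGRP) != 0) {
        perror("chmod");
        return 1;
    }

    struct stat st;
    if (stat(path, &st) == 0) {
        printf("owner can write: %s\n", (st.st_mode & S_IWUSR) ? "yes" : "no");
        printf("others can read: %s\n", (st.st_mode & S_IROTH) ? "yes" : "no");
    }
    return 0;
}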
I/O Device Protection
I/O Port Access Control: The OS restricts direct access to I/O devices by using privileged instructions. Only the OS or trusted device drivers can perform I/O operations, ensuring that unauthorized processes cannot manipulate hardware directly.
Spooling and Queuing: For devices like printers, the OS uses spooling (Simultaneous
Peripheral Operations On-Line), which queues I/O requests and ensures that only one
process at a time is allowed to use the device. This prevents conflicts when multiple
processes request access simultaneously.
Protection in Inter-Process Communication (IPC)
Protection in IPC ensures that only authorized processes can communicate with each other and prevents processes from reading or writing messages they are not permitted to access.
Message Passing: The OS controls which processes can send or receive messages
from each other, ensuring that only authorized processes participate in
communication.
Shared Memory: If shared memory is used for IPC, the OS ensures that processes only
access the regions of memory that they are authorized to use. Memory protection
mechanisms like paging or segmentation can also be applied here.
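A sketch of access control on shared-memory IPC, assuming POSIX shared memory (shm_open); the object name /demo_region is illustrative. The mode bits supplied at creation (0600 here) determine which processes are allowed to open and map the region.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    /* Create a shared-memory object only this user's processes can open (mode 0600). */
    int fd = shm_open("/demo_region", O_CREAT | O_RDWR, 0600);   /* link with -lrt on older glibc */
    if (fd < 0) { perror("shm_open"); return 1; }

    ftruncate(fd, 4096);                               /* size the region to one page */
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (region == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(region, "message for an authorized peer");  /* a cooperating process reads this */

    munmap(region, 4096);
    close(fd);
    shm_unlink("/demo_region");                        /* remove the object when done */
    return 0;
}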
Kernel Protection
Kernel protection ensures that user processes cannot interfere with the functioning of
the OS kernel.
System Calls: User processes interact with the kernel using system calls. The OS
provides specific system call interfaces for accessing hardware, files, and memory,
which helps ensure that user processes can only interact with the kernel in a controlled
and secure manner.
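One way to see this boundary, assuming a Unix-like system: a process can only request a privileged change through the system-call interface, and the kernel checks authorization before acting. Here setuid(0) is used purely as an example of such a check; run without privilege, the kernel refuses the request with EPERM.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    /* A user process can only ask the kernel, via a system call, to change its
       identity; the kernel verifies privilege before agreeing. */
    if (setuid(0) != 0)
        printf("setuid(0) rejected by the kernel: %s\n", strerror(errno));  /* typically EPERM */
    else
        printf("already privileged; the kernel allowed the request\n");
    return 0;
}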
Network Protection
Encryption Protocols: Protocols such as SSL/TLS, VPNs, and IPSec ensure that data
transmitted over the network is encrypted and secure from eavesdropping or
tampering.
Summary (Protection | Purpose | Mechanism):
I/O Device Protection | Prevent conflicts in device access and ensure safe interaction with devices. | Privileged I/O instructions; spooling and queuing.
Effective protection mechanisms are vital for ensuring the security, stability, and proper
functioning of an operating system and the resources it manages.
Layer 2: CPU Scheduling is part of the OS's core functions. This layer ensures
that the CPU is allocated to processes efficiently, determining which process should be
executed and when. CPU scheduling is critical for ensuring system performance and
fairness, preventing any single process from monopolizing CPU time.
Moving outward, Layer 5: I/O Buffer Management handles the interaction between
processes and input/output devices such as printers, disk drives, and network
interfaces. It manages data buffering, spooling, and caching to ensure efficient data
transfer and prevent bottlenecks in I/O operations.
Finally, Layer 6: User Programs represents the applications and processes that users
interact with directly, such as web browsers, text editors, and games. This layer
abstracts the complexities of the lower layers, providing users with a simple and
intuitive interface to interact with the system. These programs rely on system calls to
request services from the OS, such as file manipulation or process control.
The layered architecture of the OS provides a modular and organized way of handling
the system’s complexity. Each layer builds upon the one below it, providing a clear
separation of responsibilities while maintaining abstraction. This structure enhances
security, maintainability, and system performance by allowing each layer to focus on
specific tasks while interacting with adjacent layers in a controlled manner.
5.) Demonstrate the architecture for multi-threaded servers.
Client Requests:
The diagram shows multiple users (User 1, User 2, ... User n) sending requests to the
server. These users could be web browsers, mobile apps, or any other clients
interacting with the server. In a typical server architecture, each user request needs to
be processed independently.
Multi-threaded Server:
The server in the diagram is multi-threaded, meaning it can create multiple threads to
handle incoming requests. When a user sends a request, the server spawns a separate
thread (e.g., Thread 1 for User 1, Thread 2 for User 2, and so on) to handle each user’s
request. This parallel processing ensures that multiple users can be served
simultaneously without waiting for others to complete their operations.
Threads: Each thread runs independently within the server’s process space, meaning
that the server can manage several tasks concurrently. These threads share the same
resources such as memory and file handles but can execute their specific operations
in parallel. This avoids the need to create separate processes, which are more resource-
intensive compared to threads.
Interaction with Database:
The diagram shows that each thread interacts with a shared resource, in this case, a
database. Each thread independently handles database operations, such as querying,
updating, or retrieving data for the respective client request. Since the database is
shared among all threads, synchronization mechanisms (like locks or semaphores) are
often necessary to prevent data corruption or conflicts when multiple threads attempt
to access the same data.
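A minimal sketch of this architecture in C, assuming POSIX sockets and pthreads; the port number 8080, the buffer sizes, and the requests_served counter (standing in for the shared database) are illustrative. The accept loop spawns one thread per client, and a mutex serialises access to the shared state exactly as the synchronization note above describes.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <unistd.h>

static pthread_mutex_t db_lock = PTHREAD_MUTEX_INITIALIZER;
static long requests_served = 0;           /* stands in for the shared database */

static void *handle_client(void *arg) {
    int client = *(int *)arg;
    free(arg);

    char buf[256];
    ssize_t n = read(client, buf, sizeof buf - 1);
    if (n > 0) {
        buf[n] = '\0';

        /* Critical section: every thread updates the shared state under one lock. */
        pthread_mutex_lock(&db_lock);
        long id = ++requests_served;
        pthread_mutex_unlock(&db_lock);

        char reply[300];
        int len = snprintf(reply, sizeof reply, "request %ld: %s", id, buf);
        write(client, reply, (size_t)len);
    }
    close(client);
    return NULL;
}

int main(void) {
    int listener = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);            /* illustrative port */

    bind(listener, (struct sockaddr *)&addr, sizeof addr);
    listen(listener, 16);

    for (;;) {
        int *client = malloc(sizeof *client);
        *client = accept(listener, NULL, NULL);            /* one connection per client */
        if (*client < 0) { free(client); continue; }

        pthread_t tid;
        pthread_create(&tid, NULL, handle_client, client);  /* one thread per request */
        pthread_detach(tid);                                /* no join needed */
    }
}

Spawning a thread per request keeps clients independent while letting them share memory; a production server would typically reuse threads from a pool instead of creating one per connection.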
Challenges in Multi-threaded Servers
Concurrency: Ensuring that multiple threads can execute in parallel without causing
conflicts or errors in the program's logic. This can lead to complexities such as race
conditions, where the outcome of operations depends on the order of execution,
leading to unpredictable behavior.
Race Conditions: Occur when two or more threads modify shared data concurrently, resulting in inconsistent or incorrect data. Careful management of shared resources is required, using synchronization mechanisms like locks, semaphores, or atomic operations (a short sketch follows this list).
Deadlocks: Occur when two or more threads are waiting on each other to release
resources, leading to a situation where none of the threads can proceed. Deadlock
detection and prevention strategies (like resource ordering or timeouts) must be
implemented to avoid this.
Livelocks: Similar to deadlocks, except that the threads keep changing their state in response to one another without making progress. This is usually caused by threads reacting too strongly to each other's state changes.
Zombie Threads: If a parent thread does not properly handle the termination of child
threads, it can result in zombie threads, which occupy system resources despite having
completed execution.
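The race-condition point above can be made concrete with the following sketch, assuming pthreads: two threads increment a shared counter a million times each, first without a lock (the read-modify-write races and updates are lost) and then under a mutex (the result is always exact).

#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *add_unsafe(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                     /* read-modify-write: not atomic, so updates race */
    return NULL;
}

static void *add_safe(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);     /* serialise the read-modify-write */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t a, b;

    counter = 0;
    pthread_create(&a, NULL, add_unsafe, NULL);
    pthread_create(&b, NULL, add_unsafe, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("without lock: %ld (often less than 2000000)\n", counter);

    counter = 0;
    pthread_create(&a, NULL, add_safe, NULL);
    pthread_create(&b, NULL, add_safe, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("with lock:    %ld (always 2000000)\n", counter);
    return 0;
}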
Mutual Exclusion: Threads that access shared resources must do so in a way that
prevents race conditions. Mechanisms like locks, mutexes, or semaphores are used,
but they can introduce overhead or potential deadlock if not managed properly.
Deadlocks: Improper lock ordering or dependency among multiple locks can cause deadlocks, where two or more threads are waiting indefinitely for resources held by each other (see the lock-ordering sketch below).
Priority Inversion: This occurs when a lower-priority thread holds a resource needed
by a higher-priority thread, causing the higher-priority thread to wait unnecessarily.
Priority inheritance protocols can be used to mitigate this issue.
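The lock-ordering rule mentioned above can be sketched as follows, assuming pthreads: the deadlock-prone pattern is described in a comment, and the code itself applies the fix by making every thread acquire the two locks in the same global order.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Deadlock-prone pattern (not used below): thread 1 takes A then B while
   thread 2 takes B then A; each ends up waiting for the other forever. */

/* Fix: every thread acquires the locks in the same global order (A before B). */
static void *worker(void *name) {
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("%s holds both locks\n", (const char *)name);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, "thread 1");
    pthread_create(&t2, NULL, worker, "thread 2");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}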
Fairness: The scheduler must ensure that all threads get a fair share of CPU time. Poor
scheduling algorithms can cause thread starvation, where low-priority threads never
get executed.
Thread Prioritization: If threads are assigned different priorities, there may be issues
related to priority inversion, where a high-priority thread is waiting for a low-priority
thread, or priority starvation, where low-priority threads may never get CPU time.
Load Balancing: In multi-core systems, the thread scheduler needs to balance the CPU
load across all available cores efficiently. Poor load balancing can lead to some CPU
cores being underutilized while others are overloaded.
Portability: Different operating systems have varying levels of support and APIs for
threads (e.g., POSIX threads in UNIX, Windows threads). Writing portable multi-
threaded code that works across different platforms can be challenging.
Thread Pools: Efficiently managing a pool of threads that can be reused saves the overhead of repeated thread creation and destruction. However, managing the size of the thread pool and striking the right balance between performance and resource usage is critical (a minimal pool sketch appears at the end of this section).
Resource Usage: Threads share memory space, so managing memory and other
resources (like file descriptors) is important. Poorly implemented thread programs can
lead to memory leaks, file handle exhaustion, or resource contention.
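A minimal thread-pool sketch, assuming pthreads; the pool size, queue size, and job numbers are illustrative. A fixed set of worker threads is created once and reused, a mutex and condition variable protect the shared job queue, and the pool shuts down via a flag plus a broadcast.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define POOL_SIZE  4
#define QUEUE_SIZE 16

static int queue[QUEUE_SIZE];
static int head = 0, tail = 0, count = 0, shutting_down = 0;
static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;

/* Worker threads are created once and reused for every job. */
static void *worker(void *arg) {
    long id = (long)arg;
    for (;;) {
        pthread_mutex_lock(&qlock);
        while (count == 0 && !shutting_down)
            pthread_cond_wait(&qcond, &qlock);   /* sleep until work arrives */
        if (count == 0 && shutting_down) {
            pthread_mutex_unlock(&qlock);
            return NULL;
        }
        int job = queue[head];
        head = (head + 1) % QUEUE_SIZE;
        count--;
        pthread_mutex_unlock(&qlock);

        printf("worker %ld handling job %d\n", id, job);
    }
}

static void submit(int job) {
    pthread_mutex_lock(&qlock);
    if (count < QUEUE_SIZE) {                    /* drop jobs if the queue is full */
        queue[tail] = job;
        tail = (tail + 1) % QUEUE_SIZE;
        count++;
        pthread_cond_signal(&qcond);             /* wake one idle worker */
    }
    pthread_mutex_unlock(&qlock);
}

int main(void) {
    pthread_t pool[POOL_SIZE];
    for (long i = 0; i < POOL_SIZE; i++)
        pthread_create(&pool[i], NULL, worker, (void *)i);

    for (int job = 1; job <= 10; job++)
        submit(job);

    sleep(1);                                    /* simplification: let workers drain the queue */
    pthread_mutex_lock(&qlock);
    shutting_down = 1;
    pthread_cond_broadcast(&qcond);              /* wake every worker so it can exit */
    pthread_mutex_unlock(&qlock);

    for (int i = 0; i < POOL_SIZE; i++)
        pthread_join(pool[i], NULL);
    return 0;
}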