
Operating System Theory

Assignment #02 Part II


Name : Muhammad Rohaan Khan
Roll No : BSSE / SP-22 / 037
Section : B
Teacher : Mam Kiran Amjad
• Question no : 2
Ans). a) Multiprogramming Systems:-
Multiprogramming is a technique in operating systems where multiple programs are loaded into main memory
at the same time, and the CPU is switched between them in order to keep it busy. The primary goal of multiprogramming is to maximize CPU
utilization.
 Architecture:
 Batch Processing: In a multiprogramming system, a set of programs is loaded into main memory. The operating system scheduler selects a
program to execute based on certain criteria (e.g., priority, arrival time). Once a program is scheduled, the CPU executes it until an interrupt
occurs (such as an I/O request or program completion), and then another program is scheduled.
 Overlapping I/O and CPU Operations: Multiprogramming allows the CPU to switch to another program while one program is waiting for I/O
operations to complete. This overlap of I/O and CPU operations increases overall system efficiency.
 Efficient Utilization of Resources: The goal of multiprogramming is to keep the CPU and other system resources busy, reducing idle time and
improving the overall efficiency of the system.

b) Multitasking Systems:-
Multitasking is an extension of multiprogramming that provides multiple tasks (processes or threads) the appearance of
running concurrently. It involves switching the CPU between tasks rapidly to give the illusion of parallel execution.
 Architecture:
 Task Scheduling: Multitasking involves scheduling tasks in a way that each task gets a share of CPU time. This can be achieved through techniques
like time slicing, where each task is given a small time quantum to execute before switching to the next task.
 Context Switching: The operating system must save the state of a task when switching to another task and then restore that state when switching
back. This process is called context switching. Efficient context switching is crucial for the smooth operation of multitasking systems.
 User Interface Responsiveness: Multitasking is often associated with graphical user interfaces (GUIs), where multiple applications or processes
can run concurrently, and users can switch between them seamlessly.
c) Multiprocessor Systems:-
A multiprocessor system is a system architecture in which multiple processors (CPUs) work together to
execute tasks, enabling parallel processing.
 Architecture:
 Symmetric Multiprocessing (SMP): In SMP systems, all processors are treated equally and have equal access to the main memory. The operating
system divides the tasks among the processors, and each processor executes its portion of the task concurrently.
 Asymmetric Multiprocessing (AMP): In AMP systems, one processor (the master) controls the system, and the other processors (slaves) perform
specific tasks assigned by the master. The master processor is responsible for overall system management.
 Load Balancing: In multiprocessor systems, load balancing is crucial to ensure that tasks are distributed evenly among processors. This helps in
achieving better performance and utilization of system resources.
 Fault Tolerance: Multiprocessor systems can be designed to be fault-tolerant. If one processor fails, the tasks assigned to that processor can be
rerouted to other processors, ensuring continued operation.

• Question no : 3
Ans). Process Control Block (PCB):-
A Process Control Block (PCB), also known as a Task Control Block or Task Descriptor, is a data structure in the operating system kernel that contains
information about a specific process. The PCB is created and maintained by the operating system to manage each individual process. It serves as a
central repository of information related to the state of a process and facilitates the operating system's ability to control and manage processes
effectively.
Usage: The PCB is used in the context of process management within an operating system. It is created when a process is first initiated and is updated
throughout the lifetime of the process. The operating system uses the PCB to keep track of various attributes and states of a process, allowing for
efficient scheduling, resource management, and context switching.
Contents of PCB: The contents of a PCB may vary slightly based on the operating system, but typically, it includes the following information:
1. Process ID (PID): A unique identifier for the process.
2. Program Counter (PC): The address of the next instruction to be executed.
3. Processor Registers: Contents of various processor registers.
4. CPU Scheduling Information: Information about the process's priority, scheduling state, etc.
5. Memory Management Information: Information about the process's memory allocation, such as page tables or segment tables.
6. I/O Status Information: The status of open files, I/O devices in use, etc.
7. Accounting Information: Resource usage statistics like CPU time, clock time, etc.
8. Process State: The current state of the process (running, ready, blocked, etc.).

 Process State Diagram:-


• New : The process is being created.
• Ready : The process is ready to execute but is waiting for the CPU.
• Running : The process is currently being executed by the CPU.
• Waiting : The process is waiting for some event or resource (e.g., I/O operation).
• Terminated : The process has completed its execution.
 Processes transition between these states based on various events. For example:
A newly created process moves to the ready state once the operating system has admitted it.
A running process may move to the waiting state if it needs to wait for an I/O operation to complete.
A process in the ready state moves to the running state when the CPU is allocated to it.

• Question no : 4
Ans). System Calls:-
A system call is a programming interface provided by the operating system to allow applications to request services from the operating system's kernel.
These services include low-level operations such as input/output operations, process control, file management, memory allocation, and communication between processes.
System calls provide a bridge between user-level programs and the kernel, enabling user programs to interact with hardware and other system resources.
 Categories of System Calls:-
o Process Control:
• fork(): Create a new process.
• exit(): Terminate the calling process.
• wait(): Wait for a child process to terminate.
o File Management:
• open(): Open a file.
• read(): Read data from a file.
• write(): Write data to a file.
o Device Management:
• ioctl(): Control device parameters.
• read(): Read data from a device.
• write(): Write data to a device.

o Information Maintenance:
• getpid(): Get the process ID of the calling process.
• time(): Get the current time.

o Communication:
• pipe(): Create an inter-process communication channel.
• shmget(): Get a shared memory segment.

o Memory Management:
• brk(): Change the data segment size.
• mmap(): Map files or devices into memory.

 Handling System Calls in the OS:


When a program makes a system call, it transitions from user mode to kernel mode. The operating system provides a specific mechanism for handling
system calls, typically involving the following steps:
 System Call Invocation:
The user program makes a system call by invoking a specific function or instruction, often provided by a system call library.
 Trap/Interrupt:
The system call instruction triggers a trap or interrupt, causing the CPU to switch from user mode to kernel mode.
 Kernel Mode Execution:
The control is transferred to a predefined location in the kernel, and the kernel performs the requested operation on behalf of the user program.
 Parameter Passing:
Parameters for the system call are passed from the user space to the kernel space. This can be done through registers or a specific memory location.
 System Call Execution:
The kernel executes the requested operation, whether it's reading from a file, allocating memory, or performing any other service.
 Return to User Mode:
Once the system call is executed, control is returned to the user program in user mode, and the results of the system call are made available.

• Question no : 5
Ans). Dual-Mode Operation in Operating Systems:-
• Concept: Dual-mode operation, also known as dual privilege or dual-ring architecture, is a feature in modern operating systems designed to
enhance the security and stability of the system. It involves two distinct privilege levels or modes of operation: User Mode and Kernel Mode (also
known as Supervisor Mode, System Mode, or Privileged Mode).
o Modes of Operation:-
 User Mode:
In this mode, applications and user-level processes run with restricted access to system resources. User mode provides a protective barrier to prevent
user programs from directly accessing critical hardware and system resources.
 Kernel Mode:
Kernel mode is a privileged mode that allows the operating system's kernel to have unrestricted access to the entire system. In this mode, the
operating system can execute privileged instructions and access sensitive hardware resources.
o Enhancing Security and Stability:-
 Security:
User programs run in User Mode, limiting their ability to directly manipulate or access system resources. Unauthorized access to critical system
components is restricted. Only the operating system's kernel, running in Kernel Mode, has the authority to execute privileged instructions and control
hardware resources.
 Stability:
Separating user-level processes from the kernel helps prevent unintentional or malicious interference with critical system components. In the event of
a fault or error in a user program, the impact is confined to the user mode, minimizing the risk of crashing the entire system.
 Example: Consider a scenario where a user program attempts to write data directly to a specific memory location.
In a Dual-Mode architecture:
User Mode Operation: The user program runs in User Mode, and any attempt to write directly to a protected memory location will result in a privilege
violation or exception.
Kernel Mode Operation: The operating system's kernel, running in Kernel Mode, is responsible for handling memory management and ensuring the
integrity of the system. When a user program needs to write data to a specific memory location, it makes a system call. The transition to Kernel Mode
occurs to execute the privileged operation.
o Transition between Modes:-
 Mode Switching:
A mode switch occurs when transitioning from User Mode to Kernel Mode. System calls or exceptions trigger a mode switch. For example, when a
user program invokes a system call, it transitions to Kernel Mode to allow the operating system to perform the requested operation.
 Controlled Transition:
The transition between modes is controlled by hardware mechanisms, such as the processor's interrupt and trap mechanisms. The operating system
decides when to allow user programs to execute privileged operations, ensuring controlled access to sensitive resources.
