Operating-System.Theory.Aporna

1. How I/O operations are managed via interrupts.

Answer:
● Interrupt Generation: When an I/O device completes a task (like data transfer), it
sends an interrupt signal to the CPU.
● Handling the Interrupt:
a. The CPU pauses its current task and acknowledges the interrupt.
b. It saves the state of the current process (context switching) and determines
which interrupt handler to execute using an interrupt vector table.
● Executing the ISR: The Interrupt Service Routine (ISR) runs, performing necessary
actions (e.g., reading data).
● Restoring Context: After the ISR finishes, the CPU restores the previous process's
state and resumes its execution.
Benefits:
● Efficiency: Reduces CPU idle time since it doesn’t need to poll devices constantly.
● Responsiveness: Quickly handles events, improving performance.
● Simplicity: Allows easier program design without complex polling logic.
2. DMA and its usefulness.
Answer:
Definition: DMA lets devices transfer data directly to/from memory without CPU involvement.
Process:
a. Setup: CPU configures the DMA controller with source/destination addresses and data
size.
b. Transfer: DMA controller takes control of data transfer.
c. Completion: Sends an interrupt to the CPU when done.
Usefulness:
● Efficiency: Reduces CPU workload.
● Speed: Faster data transfers.
● Performance: Improves system performance for data-heavy tasks.
● Low Latency: Reduces delays in data access.
● Multitasking: Frees up CPU for other tasks.
3. Dual mode of Operation.
Answer:
Definition: Dual mode of operation allows an operating system to run in two modes: user mode
and kernel mode.
User Mode:
● Limited access to system resources.
● Prevents direct hardware access to protect the system.
● Programs run with restrictions to ensure stability and security.
Kernel Mode:
● Full access to all system resources.
● Allows execution of critical system functions and hardware control.
● Can execute any CPU instruction and access any memory address.
Purpose:
● Protection: Keeps user applications from interfering with the operating system and each
other.
● Security: Prevents unauthorized access to system resources.
● Stability: Ensures that errors in user programs do not crash the entire system.
4. Timer and its role.
Answer:
Definition: A timer is a hardware or software component that counts down or measures time
intervals.
Role:
● Process Scheduling: Helps the OS manage time slices for processes, allowing
multitasking.
● Time Sharing: Ensures fair CPU time allocation among multiple processes.
● Timeouts: Detects unresponsive processes by setting time limits for operations.
● Event Scheduling: Triggers periodic tasks or events, such as updating system clocks or
managing resource allocation.
Types:
● Interval Timer: Generates interrupts at regular intervals for scheduling.
● Real-Time Clock (RTC): Keeps track of the current time and date.
5. Multiprogramming and Multitasking.
Answer:
Multiprogramming
● Definition: A technique that keeps multiple programs resident in memory while the
CPU switches among them, overlapping computation with I/O so execution appears simultaneous.
● Goal: Maximize CPU utilization by keeping the CPU busy with one program while others
wait for I/O operations.
● Key Feature: Switching between programs when one is blocked (e.g., waiting for I/O).
Multitasking
● Definition: An extension of multiprogramming that allows multiple tasks (processes or
threads) to run concurrently.
● Types:
○ Preemptive Multitasking: The OS can forcibly take control from a running task to
allocate CPU time to another.
○ Cooperative Multitasking: Tasks voluntarily yield control to allow others to run.
● Goal: Improve responsiveness and user experience by allowing several applications to
run simultaneously.
6. Multiprocessing ( symmetric and asymmetric).
Answer:
Definition
● Multiprocessing: Using two or more CPUs to execute processes simultaneously.
Types
1. Symmetric Multiprocessing (SMP):
○ Equal access to shared memory and resources.
○ Any CPU can perform any task.
○ Better load balancing.
2. Asymmetric Multiprocessing (AMP):
○ Master/slave configuration with dedicated tasks.
○ One CPU controls the system; others handle specific jobs.
○ Simpler but less flexible.
7. Learn about PCBs and in which cases they are useful.
Answer:
PCB: A Process Control Block (PCB) is a data structure used by the operating system to store
all the information about a specific process.
1. Process ID (PID): A unique identifier for the process.
2. Process State:
○ New: The process is being created.
○ Ready: The process is waiting to be assigned to a CPU; it's ready to run.
○ Running: The process is currently being executed by the CPU.
○ Waiting: The process is waiting for an event to occur (e.g., I/O completion).
○ Terminated: The process has finished execution and is being removed from the system.
3. CPU Registers: Values of the CPU registers when the process is not running, allowing it to
resume later.
4. Memory Management Information: Details about memory allocation (e.g., base and limit
registers, page tables).
5. I/O Status Information: List of I/O devices allocated to the process and their statuses.
6. Scheduling Information: Priority level, scheduling algorithm info, and other relevant data for
process scheduling.
8. Context switch.
Answer:
Definition: A context switch saves the state of the running process and loads the saved state of another.
Purpose: Enables multitasking by sharing the CPU among processes.
Steps:
1. Save current process state.
2. Update its PCB.
3. Load next process state.
Overhead: Saving and restoring state does no useful work itself, so frequent switches reduce throughput.
9. Different types of schedulers (long-term, mid-term and short-term)
Answer:
Long-Term Scheduler:
● Definition: Decides which processes are admitted to the ready queue.
● Frequency: Runs less frequently.
● Purpose: Controls the degree of multiprogramming.
Mid-Term Scheduler:
● Definition: Temporarily removes processes from memory to reduce load.
● Frequency: Runs occasionally.
● Purpose: Swaps processes in and out of memory (e.g., for balancing).
Short-Term Scheduler (or CPU Scheduler):
● Definition: Selects which process from the ready queue to execute next.
● Frequency: Runs frequently (milliseconds).
● Purpose: Manages CPU allocation for efficiency and responsiveness.
10. How processes are created and terminated (By which system calls)
Answer:
Process Creation
● System Call: fork()
○ Creates a new process (child).
● System Call: exec()
○ Replaces the calling process's image with a new program (typically called in the child).
Process Termination
● System Call: exit()
○ Ends the calling process.
● System Call: wait()
○ Parents wait for child processes to finish.
11. Zombie and Orphan processes and how OS handles them
Answer:
Zombie Process
● Definition: A finished process still listed in the process table.
● Cause: Parent hasn’t called wait().
● Handling: OS cleans up when the parent calls wait() or exits.
Orphan Process
● Definition: A process with a terminated parent.
● Handling: Reassigned to the init process (PID 1), which cleans it up.
12. Message passing and Shared memory communication
Answer:
Message Passing
● Definition: Processes communicate by sending and receiving messages.
● Mechanism: Uses system calls like send() and receive().
● Advantages:
○ Simple and easy to use.
○ No need for synchronization; messages are managed by the OS.
● Disadvantages:
○ Can be slower due to overhead in message handling.
Shared Memory
● Definition: Processes communicate by accessing a common memory space.
● Mechanism: Uses shared memory segments created by system calls like shmget().
● Advantages:
○ Fast communication since data is accessed directly.
○ Efficient for large data transfer.
● Disadvantages:
○ Requires synchronization to prevent data inconsistency (e.g., using
semaphores).
13. In message passing learn about direct and indirect communication
Answer:
Direct Communication
● Definition: Processes send messages straight to each other.
● Addressing: Both sender and receiver must be named explicitly.
● Pros: Simple and quick.
● Cons: Needs coordination between sender and receiver.
Indirect Communication
● Definition: Messages go through a shared queue or mailbox.
● Addressing: Processes don't need to know each other.
● Pros: Flexible and works without direct coordination.
● Cons: Can have extra delays due to the queue.
14. Sockets.
Answer:
Definition: Endpoints for network communication.
Types:
● Stream Sockets (TCP): Reliable, connection-oriented.
● Datagram Sockets (UDP): Fast, connectionless.
Components:
● IP Address: Device identifier.
● Port Number: Process identifier.
Usage: Facilitate client-server communication in applications like web browsers and chat
programs.

15. How RPC is performed and the individual role of each module in RPC
Answer:
Definition
● RPC: Allows a program to execute a procedure on a remote server as if local.
Steps in RPC
1. Client Call: Client calls a local stub.
2. Client Stub: Packs parameters and sends a message to the server.
3. Network: Transmits the message to the server.
4. Server Stub: Unpacks parameters and calls the server procedure.
5. Server Execution: Server processes the request and generates a result.
6. Server Stub: Sends the result back to the client.
7. Client Stub: Unpacks the result and returns it to the client.
Role of Each Module
● Client Application: Initiates the call.
● Client Stub: Manages communication.
● Network: Transfers messages.
● Server Stub: Invokes server procedures.
● Server Application: Executes the procedure.
