Unit 1 Operating System
2. I/O Operations:
- The OS provides services to manage input and output operations to and from
devices.
- It abstracts device-specific details, making it easier for programs to perform
I/O operations.
3. File System:
- Users and applications need a way to store, organize, and retrieve files from
storage devices.
- The OS provides a file system that manages data storage, access permissions,
and retrieval.
4. Communication Services:
- The OS enables inter-process communication (IPC) through mechanisms like
pipes, sockets, and message queues.
- It allows processes to exchange data and synchronize their actions (a minimal
pipe sketch appears after the summary below).
7. Memory Management:
- The OS allocates and deallocates memory space for processes.
- It manages memory protection to prevent processes from interfering with
each other.
8. Process Management:
- The OS creates, suspends, resumes, and terminates processes.
- It manages process synchronization and communication.
9. Networking Services:
- The OS provides networking functionalities for communication between
computers over networks.
- It manages network connections, protocols, and data transmission.
10. Timekeeping Services:
- The OS maintains system time and provides timing services to applications.
- It enables programs to measure time intervals and perform time-sensitive
operations.
These services collectively ensure that applications can run smoothly, efficiently
utilize resources, and interact with hardware devices in a standardized and
controlled manner. By providing these services, the operating system simplifies
software development, enhances system stability, and offers a consistent user
experience across different hardware platforms.
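As a concrete illustration of the communication services mentioned above, the
following Python sketch shows two processes exchanging data through an
anonymous pipe. It is a minimal example, assuming a Unix-like system where
os.fork() and os.pipe() are available.

import os

read_fd, write_fd = os.pipe()            # the OS creates a unidirectional channel
pid = os.fork()                          # the OS creates a child process

if pid == 0:                             # child: writes a message into the pipe
    os.close(read_fd)
    os.write(write_fd, b"hello from the child process")
    os.close(write_fd)
    os._exit(0)
else:                                    # parent: reads the message from the pipe
    os.close(write_fd)
    message = os.read(read_fd, 1024)
    os.close(read_fd)
    os.waitpid(pid, 0)                   # reap the child so no zombie process remains
    print(message.decode())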
2. Multiprogrammed System:
- A multiprogrammed operating system allows multiple programs to reside in
memory simultaneously.
- While one program is waiting for I/O or another event, the CPU can be
allocated to another program, increasing CPU utilization.
- This structure enhances system efficiency by overlapping CPU and I/O
operations.
3. Time-Sharing System:
- A time-sharing (or multitasking) operating system enables multiple users to
interact with a single computer simultaneously.
- The CPU time is divided into small time slices, and each user or task is given
a fair share of CPU time during these slices.
- Users experience the illusion of having the computer to themselves, even
though they are sharing resources with others.
4. Personal Systems:
- Personal operating systems are designed for single users and their personal
computers.
- They provide a user-friendly graphical user interface (GUI) and support
applications for common tasks like word processing, browsing, and multimedia.
- Examples include Microsoft Windows, macOS, and Linux distributions with
desktop environments.
Process Management
Process Concept:
The process concept is a core abstraction in operating systems: a process
represents the fundamental unit of work in a computer system. This abstraction
allows an operating system to manage and execute multiple tasks concurrently
and efficiently. A process typically consists of the following components (a
small Python sketch after the list loosely illustrates them):
1. Program Code: The executable instructions of the program, often referred to
as the text section.
2. Data: The data that the program manipulates while it's running, including
variables, data structures, and so on.
3. Stack: Each process has its own stack, used for managing function calls and
local variables. The stack keeps track of the execution context and is used for
passing arguments and returning values from functions.
4. Heap: The heap is used for dynamic memory allocation during the runtime of
a process. It's where objects and data can be allocated and deallocated at runtime.
5. Execution Context: This includes the program counter (the address of the
next instruction to be executed), registers (used for holding intermediate values
and important data), and other information that defines the state of the process.
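The sketch below loosely maps these components onto a short Python program. The
mapping is conceptual: in CPython most objects actually live on the heap, but
the roles of code, data, stack frames, dynamically allocated objects, and the
execution context are analogous.

import sys

counter = 0                        # data: a module-level variable the program manipulates

def scale(value, factor):          # code: the function body is part of the program's instructions
    result = value * factor        # stack: parameters and locals live in this call's frame
    return result

numbers = [scale(i, 10) for i in range(3)]   # heap: the list object is allocated at runtime

frame = sys._getframe()            # execution context: the current frame tracks what runs next
print(numbers, "currently executing line", frame.f_lineno)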
Operations on Process
Processes in an operating system undergo various operations that are essential for
their creation, management, execution, and termination. These operations
collectively enable the operating system to effectively multitask and provide a
seamless user experience. Here are the key operations on processes (a minimal
lifecycle sketch follows the list):
1. Process Creation:
- Create Process: This operation involves the creation of a new process by the
operating system. This includes allocating a unique Process ID (PID), setting up
memory space for the process's code, data, stack, and heap, and initializing the
process's context.
2. Process Execution:
- Load Program: The operating system loads the program code and data from
the executable file into memory for execution.
- Set Up Context: The process's context is set up, including initializing
registers, program counter, and stack pointer.
- Execute Instructions: The CPU executes the instructions of the process, one
by one, as dictated by the program counter.
3. Process Scheduling:
- Scheduling: The operating system determines which process should run next
on the CPU using scheduling algorithms. These algorithms take into account
factors like priority, time-sharing, and process state.
4. Process Termination:
- Exit Process: When a process completes its execution or needs to be
terminated for some reason, it performs an exit sequence. This involves
releasing resources, closing files, and notifying the operating system of its
termination.
- Release Resources: The process releases all allocated resources, including
memory, open files, and other system resources.
- Clean Up: The operating system performs cleanup tasks related to the
terminated process, such as updating accounting information and deallocating
memory.
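The following sketch walks through this lifecycle on a Unix-like system: the
parent creates a child process, the child loads and executes a separate
program, and the parent waits for its termination. The echo command is just a
stand-in for any program image.

import os

pid = os.fork()                              # creation: the OS allocates a new PID and context
if pid == 0:
    # execution: replace the child's memory image with the 'echo' program and run it
    os.execvp("echo", ["echo", "child process finished its work"])
else:
    child_pid, status = os.waitpid(pid, 0)   # termination: the parent reaps the child
    print("child", child_pid, "exited with status", os.WEXITSTATUS(status))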
Cooperating Process
Cooperating processes in computer science refer to multiple independent
processes or programs that collaborate to accomplish a common objective. These
processes might need to exchange information, share resources, or synchronize
their actions to ensure smooth execution. Efficient cooperation is achieved
through interprocess communication (IPC) mechanisms such as shared memory
or message passing, which enable data sharing between processes. To avoid
conflicts and maintain data consistency, synchronization techniques like
semaphores or mutexes are employed, ensuring that processes access shared
resources in an organized manner. Managing cooperating processes is essential
for creating efficient and robust software systems, allowing them to work
together harmoniously and accomplish complex tasks through coordinated
efforts.
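As a small illustration of such synchronization, the sketch below has two
cooperating processes increment a shared counter, using a lock in the role of
the mutex mentioned above. Without the lock, the concurrent updates could
interleave and corrupt the final count.

from multiprocessing import Process, Value, Lock

def add_many(counter, lock, amount):
    for _ in range(amount):
        with lock:                   # critical section: only one process updates at a time
            counter.value += 1

if __name__ == "__main__":
    counter = Value("i", 0)          # an integer placed in memory shared by both processes
    lock = Lock()
    workers = [Process(target=add_many, args=(counter, lock, 10000)) for _ in range(2)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    print("final count:", counter.value)   # 20000 when access is properly synchronized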
Two methods used by cooperating processes
1. Shared Memory:
Shared memory is an interprocess communication (IPC) method that allows
cooperating processes to communicate by accessing a shared portion of memory.
In this approach, a region of memory is designated as shared, and multiple
processes can read from or write to this memory region. This facilitates data
exchange and communication between processes in a relatively fast and efficient
manner (a short sketch appears after the advantages and disadvantages below).
How It Works:
- Processes create a shared memory segment and attach to it.
- They can then read and write data directly to this shared memory space.
- Since all processes access the same memory region, care must be taken to
ensure proper synchronization using synchronization primitives like semaphores
or mutexes.
- Shared memory is particularly useful for scenarios where processes need to
exchange large amounts of data, as it avoids the overhead of copying data
between processes.
Advantages:
- Efficient: Direct memory access makes it faster than some other IPC methods.
- Suitable for Large Data: Ideal for sharing extensive data between processes.
- Low Overhead: Avoids copying data between processes, reducing overhead.
Disadvantages:
- Synchronization: Requires careful synchronization to prevent race conditions
and data corruption.
- Complexity: Managing shared memory and synchronization can be complex.
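The sketch below illustrates the shared-memory approach using Python's
multiprocessing.shared_memory module (available since Python 3.8): a child
process attaches to a named memory block created by the parent and writes into
it directly, so no data is copied between address spaces. The block size and
message are arbitrary.

from multiprocessing import Process
from multiprocessing import shared_memory

def writer(block_name):
    block = shared_memory.SharedMemory(name=block_name)   # attach to the existing segment
    block.buf[:5] = b"hello"                              # write directly into shared memory
    block.close()

if __name__ == "__main__":
    segment = shared_memory.SharedMemory(create=True, size=64)   # create the shared region
    child = Process(target=writer, args=(segment.name,))
    child.start()
    child.join()
    print(bytes(segment.buf[:5]).decode())                # the parent reads what the child wrote
    segment.close()
    segment.unlink()                                      # release the segment when done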
2. Message Passing:
Message passing is another IPC method where cooperating processes
communicate by sending and receiving messages. In this approach, processes
explicitly send messages containing data to each other. The underlying system
takes care of delivering messages to the appropriate destination process (a
short sketch appears after the advantages and disadvantages below).
How It Works:
- Processes use system-provided functions or APIs to send and receive messages.
- Messages can be synchronous (blocking until the message is received) or
asynchronous (sending and moving on).
- The system handles routing and delivery of messages between processes.
- Message passing typically ensures better isolation between processes, as they
interact through well-defined interfaces.
Advantages:
- Simplicity: Provides a structured way for processes to communicate.
- Isolation: Processes are shielded from each other's internal details.
- Controlled Communication: Explicit messages make it easier to control
communication patterns.
Disadvantages:
- Overhead: Involves overhead in terms of message copying and system calls.
- Limited Data Sharing: Not as efficient as shared memory for large data
exchanges.
- Coordination: Requires synchronization for proper message delivery and order.
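The sketch below illustrates message passing with a multiprocessing queue: a
producer process sends messages and a consumer process receives them, without
either touching the other's memory. The message contents and the None sentinel
are just illustrative choices.

from multiprocessing import Process, Queue

def producer(mailbox):
    for i in range(3):
        mailbox.put(f"message {i}")     # send a message into the queue
    mailbox.put(None)                   # sentinel telling the consumer to stop

def consumer(mailbox):
    while True:
        message = mailbox.get()         # receive: blocks until a message arrives
        if message is None:
            break
        print("received:", message)

if __name__ == "__main__":
    mailbox = Queue()
    processes = [Process(target=producer, args=(mailbox,)),
                 Process(target=consumer, args=(mailbox,))]
    for p in processes:
        p.start()
    for p in processes:
        p.join()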
Both shared memory and message passing are valuable tools for facilitating
cooperation between processes. The choice between them depends on factors
such as the amount of data being exchanged, the level of isolation needed
between processes, and the complexity of synchronization required.
In summary, threads are suitable for concurrent tasks within a single process due
to their shared memory space, while IPC methods are vital for communication
and collaboration between separate processes. The choice between threads and
IPC depends on the nature of the tasks, the degree of isolation required, and the
efficiency considerations of data sharing and coordination.