Unit 1 Operating System

Operating System

An operating system (OS) is a crucial software component that acts as an
intermediary between computer hardware and user applications. It provides a
platform for software to interact with hardware resources while offering users a
convenient interface. In essence, an operating system enables the efficient
execution of programs and the effective utilization of computer resources. It
manages tasks such as process allocation, memory management, and file
organization, creating a stable and user-friendly environment for computing.
Different types of operating systems cater to diverse computing needs, ranging
from personal computers to mobile devices and servers.

Need for an Operating System


We need an operating system (OS) for several essential reasons:

1. Resource Management: Computers have hardware resources like the CPU,
memory, storage, and input/output devices. The OS manages these resources
efficiently, allocating them to different software applications and ensuring they
are used optimally.

2. Abstraction: Computer hardware is complex and varies among different
devices. The OS provides a layer of abstraction, allowing software developers to
write applications without needing to understand intricate hardware details. This
simplifies software development.

3. File System: Users and applications need a way to store, organize, and retrieve
files from storage devices. The OS provides a file system that manages data
storage, access permissions, and retrieval.

4. Device Management: Computers have various input/output devices like
printers, displays, and network interfaces. The OS manages the communication
between software applications and these devices, ensuring data exchange is
smooth.

5. User Interface: The OS offers user interfaces that allow users to interact with
the computer system. These interfaces can be graphical or command-line based,
providing a means to run programs, manage files, and configure system settings.

6. Security and Protection: The OS enforces security measures to protect the
system from unauthorized access and malicious software. It controls user
authentication, authorization, and implements measures to safeguard data.

7. Error Handling: Computers can encounter errors and exceptions during
operation. The OS handles these situations, preventing complete system failures
and providing error messages to users and developers for troubleshooting.

8. Efficient Resource Utilization: By managing resources effectively and
preventing resource conflicts, an OS ensures that the computer's hardware is used
efficiently, maximizing overall system performance.

Operating System Services


1. Program Execution:
- The OS loads programs into memory and manages their execution.
- It schedules processes on the CPU, ensuring fair access and efficient
utilization.

2. I/O Operations:
- The OS provides services to manage input and output operations to and from
devices.
- It abstracts device-specific details, making it easier for programs to perform
I/O operations.

3. File System Manipulation:
- The OS offers services to create, read, write, and delete files.
- It manages file organization, storage, and access permissions.
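
The file-system services above can be sketched with Python's standard library; this is a minimal illustration of create, write, read, and delete, not any particular OS's API:

```python
import os
import tempfile

# Create a new file via the OS (mkstemp asks the OS for a fresh, unique file).
fd, path = tempfile.mkstemp(suffix=".txt")
with os.fdopen(fd, "w") as f:
    f.write("hello file system")        # write
with open(path) as f:
    contents = f.read()                 # read
os.remove(path)                         # delete

print(contents)  # hello file system
```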

4. Communication Services:
- The OS enables inter-process communication (IPC) through mechanisms like
pipes, sockets, and message queues.
- It allows processes to exchange data and synchronize their actions.
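
As a small sketch of one such mechanism, here is a pipe created through the OS. For brevity both ends live in one program; in practice the write end would belong to a producer process and the read end to a consumer:

```python
import os

# A pipe is a one-way channel: data written to write_fd can be read from read_fd.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the writer")   # producer side
os.close(write_fd)                             # signal end-of-data

message = os.read(read_fd, 1024)               # consumer side
os.close(read_fd)

print(message.decode())  # hello from the writer
```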

5. Error Detection and Handling:
- The OS detects errors, exceptions, and hardware failures.
- It provides error messages to users and developers for troubleshooting.

6. Security and Access Control:
- The OS enforces user authentication and authorization.
- It controls access to resources, ensuring data security and privacy.

7. Memory Management:
- The OS allocates and deallocates memory space for processes.
- It manages memory protection to prevent processes from interfering with
each other.

8. Process Management:
- The OS creates, suspends, resumes, and terminates processes.
- It manages process synchronization and communication.

9. Networking Services:
- The OS provides networking functionalities for communication between
computers over networks.
- It manages network connections, protocols, and data transmission.

10. Timekeeping Services:
- The OS maintains system time and provides timing services to applications.
- It enables programs to measure time intervals and perform time-sensitive
operations.
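
A minimal sketch of measuring a time interval, using a monotonic clock (one that is unaffected by changes to the wall-clock time):

```python
import time

start = time.monotonic()
time.sleep(0.05)                    # stand-in for a time-sensitive operation
elapsed = time.monotonic() - start  # interval measured by the OS clock

print(f"elapsed: {elapsed:.3f} s")
```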

11. User Interface Services:
- The OS offers user interfaces, such as graphical user interfaces (GUI) or
command-line interfaces (CLI), for user interaction.
- It provides a way for users to run programs, manage files, and configure
system settings.

These services collectively ensure that applications can run smoothly, efficiently
utilize resources, and interact with hardware devices in a standardized and
controlled manner. By providing these services, the operating system simplifies
software development, enhances system stability, and offers a consistent user
experience across different hardware platforms.

Operating System Structures


Definitions of the main operating system structures are given below:

1. Simple Batch System:
- A simple batch operating system processes a sequence of user-submitted jobs
without user interaction during job execution.
- Users submit jobs with associated data, and the operating system executes
them in sequence without manual intervention.
- This structure is suitable for early computers where jobs were submitted in
batches on punched cards or tapes.

2. Multiprogrammed System:
- A multiprogrammed operating system allows multiple programs to reside in
memory simultaneously.
- While one program is waiting for I/O or another event, the CPU can be
allocated to another program, increasing CPU utilization.
- This structure enhances system efficiency by overlapping CPU and I/O
operations.

3. Time-Sharing System:
- A time-sharing (or multitasking) operating system enables multiple users to
interact with a single computer simultaneously.
- The CPU time is divided into small time slices, and each user or task is given
a fair share of CPU time during these slices.
- Users experience the illusion of having the computer to themselves, even
though they are sharing resources with others.

4. Personal Systems:
- Personal operating systems are designed for single users and their personal
computers.
- They provide a user-friendly graphical user interface (GUI) and support
applications for common tasks like word processing, browsing, and multimedia.
- Examples include Microsoft Windows, macOS, and Linux distributions with
desktop environments.

5. Parallel Distributed Systems:
- Parallel distributed operating systems harness the power of multiple
interconnected computers to solve complex tasks.
- Programs are divided into smaller tasks that can run concurrently on different
machines.
- These systems are used for high-performance computing, scientific
simulations, and data processing.

6. Real-Time Systems:
- Real-time operating systems prioritize tasks based on timing requirements.
- They guarantee that critical tasks meet their deadlines, making them suitable
for time-sensitive applications like robotics, aerospace, and industrial
automation.
- Real-time systems are classified as hard real-time (strict deadlines) or soft
real-time (approximate deadlines).

Process Management

Process Concept
The process is a fundamental abstraction in operating systems, representing
the basic unit of work in a computer system. This abstraction allows an
operating system to manage and execute multiple tasks concurrently and
efficiently.

A process can be thought of as an independent program in execution. It consists
of several components, including:

1. Program Code: The actual instructions of the program to be executed, which
are stored in the executable file on disk.

2. Data: The data that the program manipulates while it's running, including
variables, data structures, and so on.

3. Stack: Each process has its own stack, used for managing function calls and
local variables. The stack keeps track of the execution context and is used for
passing arguments and returning values from functions.

4. Heap: The heap is used for dynamic memory allocation during the runtime of
a process. It's where objects and data can be allocated and deallocated at runtime.

5. Execution Context: This includes the program counter (the address of the
next instruction to be executed), registers (used for holding intermediate values
and important data), and other information that defines the state of the process.

The concept of processes allows an operating system to provide the illusion
of concurrency to users and applications. Even though a single CPU core can
execute only one instruction stream at a time, the operating system switches
rapidly between processes, giving the appearance of simultaneous execution.

Key aspects of the process concept include:

1. Process Control Block (PCB): Each process is associated with a data
structure called the Process Control Block. This data structure holds all the
necessary information about a process, including its current state, program
counter, registers, memory allocation, and more. The operating system uses the
PCB to manage and control processes.
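
A toy PCB can be sketched as a small record type. The field names below are illustrative, not taken from any real kernel, and real PCBs hold far more state (open files, accounting data, saved register sets, and so on):

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int                              # unique process identifier
    state: str = "new"                    # new / ready / running / waiting / terminated
    program_counter: int = 0              # address of the next instruction
    registers: dict = field(default_factory=dict)  # saved register values
    memory_base: int = 0                  # start of the process's memory region
    memory_limit: int = 0                 # size of that region

pcb = PCB(pid=42)
pcb.state = "ready"   # the OS updates the PCB as the process changes state
print(pcb.pid, pcb.state)
```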

2. Process States: Processes can be in different states, such as "running,"
"waiting," "ready," "terminated," etc. These states reflect what the process is
currently doing and how the operating system should manage it.

3. Context Switching: The operating system performs context switching to
switch the CPU from one process to another. During context switching, the
current process's execution state is saved, and the state of the new process is
restored so that it can resume execution.

4. Scheduling: The operating system schedules processes to run on the CPU
based on scheduling algorithms. These algorithms determine which process
should run next, considering factors like priority, time-sharing, fairness, and
more.
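
As an illustration, here is a minimal sketch of one classic scheduling algorithm, round-robin: each process runs for a fixed time quantum, then rejoins the back of the ready queue if it still has work left. The process names, burst times, and quantum are made up for the example:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Return the order in which processes get the CPU."""
    queue = deque(burst_times.items())   # ready queue of (pid, remaining time)
    order = []
    while queue:
        pid, remaining = queue.popleft()
        order.append(pid)                # pid runs for one quantum
        remaining -= quantum
        if remaining > 0:                # not finished: back of the queue
            queue.append((pid, remaining))
    return order

schedule = round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2)
print(schedule)  # ['P1', 'P2', 'P3', 'P1', 'P2', 'P1']
```
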
5. Inter-Process Communication (IPC): Processes often need to communicate
and share data with each other. Various mechanisms for IPC, such as pipes,
sockets, shared memory, and message queues, enable processes to exchange
information.

Operations on Process
Processes in an operating system undergo various operations that are essential for
their creation, management, execution, and termination. These operations
collectively enable the operating system to effectively multitask and provide a
seamless user experience. Here are the key operations on processes:

1. Process Creation:
- Create Process: This operation involves the creation of a new process by the
operating system. This includes allocating a unique Process ID (PID), setting up
memory space for the process's code, data, stack, and heap, and initializing the
process's context.

2. Process Execution:
- Load Program: The operating system loads the program code and data from
the executable file into memory for execution.
- Set Up Context: The process's context is set up, including initializing
registers, program counter, and stack pointer.
- Execute Instructions: The CPU executes the instructions of the process, one
by one, as dictated by the program counter.

3. Process Scheduling:
- Scheduling: The operating system determines which process should run next
on the CPU using scheduling algorithms. These algorithms take into account
factors like priority, time-sharing, and process state.

4. Process Termination:
- Exit Process: When a process completes its execution or needs to be
terminated for some reason, it goes through an exit process. This involves
releasing resources, closing files, and notifying the operating system of its
termination.
- Release Resources: The process releases all allocated resources, including
memory, open files, and other system resources.
- Clean Up: The operating system performs cleanup tasks related to the
terminated process, such as updating accounting information and deallocating
memory.
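
The lifecycle above can be sketched with Python's subprocess module: the OS creates a child process, the child runs and exits, and the parent collects its output and exit status. The child's exit code of 7 is an arbitrary example value:

```python
import subprocess
import sys

# Create a child process running a tiny Python program.
child = subprocess.Popen(
    [sys.executable, "-c", "print('child running'); raise SystemExit(7)"],
    stdout=subprocess.PIPE, text=True,
)
output, _ = child.communicate()   # wait for termination, collect stdout
exit_code = child.returncode      # status the child reported to the OS

print(output.strip(), exit_code)  # child running 7
```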

5. Process Communication and Synchronization:
- Inter-Process Communication (IPC): Processes often need to communicate
with each other. IPC mechanisms, like pipes, sockets, and shared memory, enable
processes to exchange data and information.
- Synchronization: Processes may need to synchronize their actions to avoid
issues like race conditions. Semaphores, mutexes, and other synchronization
primitives help ensure orderly execution.
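
A minimal sketch of such synchronization, using a mutex to make a read-modify-write on a shared counter atomic (threads stand in for processes here for simplicity):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        with lock:        # without the lock, concurrent updates can be lost
            counter += 1

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000 every time, because the lock serializes the updates
```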

6. Process State Transitions:
- State Changes: Processes transition between different states (e.g., running,
waiting, ready) as they execute and interact with the operating system.
- Context Switching: When the operating system switches between processes, it
performs a context switch, which involves saving the current process's context
and loading the context of the next process to run.

7. Process Suspension and Resumption:
- Suspend: A running process can be temporarily suspended, which means it's
put on hold and its state is saved. This is often done to prioritize another process.
- Resume: A suspended process can be resumed from where it was suspended.
Its state is restored, and it continues execution.

8. Process Monitoring and Management:
- Process Control Block (PCB) Updates: The operating system updates the
process's PCB as the process's state changes, resources are allocated, and
execution progresses.
- Resource Allocation and Deallocation: The OS manages resources like
memory and I/O devices, allocating them to processes as needed and releasing
them when processes are done using them.

These operations collectively enable the operating system to effectively manage
multiple processes, allocate resources, maintain process states, and provide
multitasking capabilities to users and applications.

Cooperating Processes
Cooperating processes in computer science refer to multiple independent
processes or programs that collaborate to accomplish a common objective. These
processes might need to exchange information, share resources, or synchronize
their actions to ensure smooth execution. Efficient cooperation is achieved
through interprocess communication (IPC) mechanisms such as shared memory
or message passing, which enable data sharing between processes. To avoid
conflicts and maintain data consistency, synchronization techniques like
semaphores or mutexes are employed, ensuring that processes access shared
resources in an organized manner. Managing cooperating processes is essential
for creating efficient and robust software systems, allowing them to work
together harmoniously and accomplish complex tasks through coordinated
efforts.
Two methods used by cooperating processes:
1. Shared Memory:
Shared memory is an interprocess communication (IPC) method that allows
cooperating processes to communicate by accessing a shared portion of memory.
In this approach, a region of memory is designated as shared, and multiple
processes can read from or write to this memory region. This facilitates data
exchange and communication between processes in a relatively fast and efficient
manner.
How It Works:
- Processes create a shared memory segment and attach to it.
- They can then read and write data directly to this shared memory space.
- Since all processes access the same memory region, care must be taken to
ensure proper synchronization using synchronization primitives like semaphores
or mutexes.
- Shared memory is particularly useful for scenarios where processes need to
exchange large amounts of data, as it avoids the overhead of copying data
between processes.

Advantages:
- Efficient: Direct memory access makes it faster than some other IPC methods.
- Suitable for Large Data: Ideal for sharing extensive data between processes.
- Low Overhead: Avoids copying data between processes, reducing overhead.

Disadvantages:
- Synchronization: Requires careful synchronization to prevent race conditions
and data corruption.
- Complexity: Managing shared memory and synchronization can be complex.

2. Message Passing:
Message passing is another IPC method where cooperating processes
communicate by sending and receiving messages. In this approach, processes
explicitly send messages containing data to each other. The underlying system
takes care of delivering messages to the appropriate destination process.

How It Works:
- Processes use system-provided functions or APIs to send and receive messages.
- Messages can be synchronous (blocking until the message is received) or
asynchronous (sending and moving on).
- The system handles routing and delivery of messages between processes.
- Message passing typically ensures better isolation between processes, as they
interact through well-defined interfaces.

Advantages:
- Simplicity: Provides a structured way for processes to communicate.
- Isolation: Processes are shielded from each other's internal details.
- Controlled Communication: Explicit messages make it easier to control
communication patterns.

Disadvantages:
- Overhead: Involves overhead in terms of message copying and system calls.
- Limited Data Sharing: Not as efficient as shared memory for large data
exchanges.
- Coordination: Requires synchronization for proper message delivery and order.

Both shared memory and message passing are valuable tools for facilitating
cooperation between processes. The choice between them depends on factors
such as the amount of data being exchanged, the level of isolation needed
between processes, and the complexity of synchronization required.

Threads and Interprocess Communication

Threads and interprocess communication (IPC) are fundamental concepts in
concurrent programming and operating systems, enabling efficient collaboration
between different parts of a program or between separate programs. Threads are
smaller units of execution within a single process, while IPC mechanisms
facilitate communication and data exchange between independent processes.
Threads:
Threads allow a program to perform multiple tasks concurrently within a single
process. Unlike processes, threads share the same memory space, making it
easier for them to communicate and share data. Threads are lighter weight than
processes and can be more efficient for tasks that require coordination or
parallelism. Threads within a process can directly access shared variables and
data structures, making communication between them relatively straightforward.
However, care must be taken to ensure proper synchronization to prevent data
corruption or race conditions.
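
A small sketch of this shared address space: the worker threads below all append to the same list object, with a lock guarding the shared structure against interleaved updates:

```python
import threading

results = []              # shared: every thread sees the same list object
lock = threading.Lock()

def worker(name):
    with lock:            # protect the shared list
        results.append(name)

threads = [threading.Thread(target=worker, args=(f"t{i}",)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results))  # ['t0', 't1', 't2']
```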

Interprocess Communications (IPC):

IPC mechanisms are essential when cooperation is needed between independent
processes running in separate memory spaces. These mechanisms facilitate
communication, synchronization, and data sharing. Shared memory is a common
IPC method where processes access the same memory region for data exchange,
though synchronization becomes crucial. Message passing is another approach,
involving processes sending and receiving messages through system-provided
mechanisms. IPC allows different processes to collaborate, exchange
information, and work on tasks collectively, enabling the creation of complex and
distributed applications.

In summary, threads are suitable for concurrent tasks within a single process due
to their shared memory space, while IPC methods are vital for communication
and collaboration between separate processes. The choice between threads and
IPC depends on the nature of the tasks, the degree of isolation required, and the
efficiency considerations of data sharing and coordination.
