1. Global Table
A single table contains all access rights, listing the subject, object, and the permissions.
Example:
Subject   Object   Access Rights
User1     File1    Read, Write
User2     File2    Read
Pros: Simple to implement.
Cons: The single table becomes very large, and searching it on every access can make it a bottleneck in large systems.
2. Access Control Lists (ACLs)
Each object maintains a list of the subjects that may access it and their corresponding rights.
Example for File1: File1: User1 - Read, Write; User2 - Read
Pros: Easy to see (and revoke) all access rights for an object.
Cons: Difficult to determine all the objects a given subject can access.
3. Capability Lists
Each subject maintains a list of objects it can access and the corresponding rights.
Example for User1: User1: File1 - Read, Write; File3 - Read
Pros: Easy to check permissions for a subject.
Cons: Difficult to view all access rights for an object.
4. Hybrid Approach
Combines ACLs and Capability Lists to balance the trade-offs.
Example: ACLs for frequently accessed objects and Capability Lists for active subjects.
Key Considerations:
Efficiency: Choose data structures that minimize lookup and storage overhead.
Scalability: Ensure the system can handle a growing number of users and resources.
Security: Protect access control data from unauthorized modifications.
These methods ensure the Access Matrix is implemented efficiently while maintaining security
and usability.
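As an illustration, here is a minimal C sketch of the global-table approach: one array of (subject, object, rights) rows that is scanned on every access check. The names, rights bitmask, and table contents are invented for the example; a real system would index this table rather than scan it, which is exactly the bottleneck noted above.
#include <stdio.h>
#include <string.h>

#define READ  0x1
#define WRITE 0x2

struct entry {                         /* one row of the global table */
    const char *subject;
    const char *object;
    int rights;                        /* bitmask of READ/WRITE */
};

static struct entry table[] = {
    { "User1", "File1", READ | WRITE },
    { "User2", "File2", READ },
};

/* Return nonzero if the subject holds the requested right on the object. */
static int check_access(const char *subject, const char *object, int right) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].subject, subject) == 0 &&
            strcmp(table[i].object, object) == 0)
            return (table[i].rights & right) != 0;
    return 0;                          /* fail-safe default: no entry, no access */
}

int main(void) {
    printf("User1 write File1: %d\n", check_access("User1", "File1", WRITE)); /* 1 */
    printf("User2 write File2: %d\n", check_access("User2", "File2", WRITE)); /* 0 */
    return 0;
}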
Both types of virtual machines have unique advantages and are implemented in scenarios that
optimize hardware usage or enhance application compatibility.
8. Goals of Security
1. Confidentiality
o Ensures that information is accessible only to authorized users.
o Protects sensitive data from being disclosed to unauthorized entities.
o Example: Encrypting messages to prevent interception.
2. Integrity
o Ensures data is accurate and unaltered.
o Protects against unauthorized modifications, whether accidental or malicious.
o Example: Checksums and cryptographic hash functions ensure file integrity (a small illustrative sketch follows this list).
3. Availability
o Ensures that resources and services are available to authorized users when needed.
o Protects against disruptions caused by attacks like denial-of-service (DoS).
o Example: Backup systems and redundancy mechanisms.
4. Authentication
o Verifies the identity of users or systems accessing resources.
o Ensures that only legitimate entities can gain access.
o Example: Login credentials, biometric scans.
5. Non-repudiation
o Ensures that parties in a communication cannot deny their actions.
o Provides proof of data origin and receipt.
o Example: Digital signatures in emails.
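As a small illustration of the integrity goal, the sketch below hashes a message before and after a simulated modification. FNV-1a is used here only because it is short; it is not cryptographic, and a real system would use SHA-256 or a similar cryptographic hash.
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* FNV-1a: a simple, non-cryptographic hash used only to show the idea of
   detecting modification. */
static uint64_t fnv1a(const unsigned char *data, size_t len) {
    uint64_t h = 0xcbf29ce484222325ULL;      /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;               /* FNV prime */
    }
    return h;
}

int main(void) {
    unsigned char msg[] = "transfer 100 to account 42";
    uint64_t original = fnv1a(msg, strlen((char *)msg));

    msg[9] = '9';                            /* simulate tampering: 100 -> 900 */
    uint64_t tampered = fnv1a(msg, strlen((char *)msg));

    printf("match: %s\n", original == tampered ? "yes" : "no (integrity violated)");
    return 0;
}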
Principles of Security
1. Least Privilege
o Users and processes should have the minimum level of access necessary to perform their
tasks.
o Reduces the impact of potential breaches.
o Example: A cashier in a bank cannot access the financial database.
2. Defence in Depth
o Employs multiple layers of security controls to protect systems.
o Ensures that if one defense fails, others will still protect the system.
o Example: Using firewalls, intrusion detection systems, and encryption together.
3. Separation of Duties
o Divides responsibilities among multiple people or systems to prevent fraud or misuse.
o Example: In financial systems, the person approving a transaction is not the one who
initiates it.
4. Fail-Safe Defaults
o Systems should deny access by default and grant it only when explicitly authorized.
o Example: File permissions default to "no access" unless specified otherwise.
5. Open Design
o Security mechanisms should not depend on the secrecy of their design; they should rely on the
strength of the mechanism itself (for example, the secrecy of keys).
o Example: Public cryptographic algorithms are preferred over secret ones.
6. Accountability
o Actions performed within a system should be traceable to the responsible entity.
o Example: Logging user activities in audit trails.
7. Economy of Mechanism
o Security mechanisms should be as simple and small as possible to reduce complexity and
errors.
o Example: Minimalistic design in firewalls.
These goals and principles form the foundation of designing secure systems and policies.
Simplified Example:
Single-Level: Like one big map for all addresses.
Multi-Level: Like a folder structure with subfolders.
Inverted: A list for each physical memory block instead of each virtual page.
Hashed: Uses a formula to quickly find where a page is stored.
TLB: Keeps frequently used addresses handy, like a shortcut.
Segmented Paging: Combines folders (segments) with maps (pages).
Each method balances memory efficiency and speed depending on the system's needs!
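For the single-level ("one big map") case, here is a minimal C sketch of how a logical address is split into a page number and an offset and translated through the page table; the page size and table contents are invented for illustration.
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE 4096u                            /* 4 KB pages */

/* page_table[p] gives the frame holding page p (values invented). */
static uint32_t page_table[8] = { 5, 9, 7, 2, 0, 1, 3, 4 };

static uint32_t translate(uint32_t logical) {
    uint32_t page   = logical / PAGE_SIZE;         /* page number */
    uint32_t offset = logical % PAGE_SIZE;         /* page offset */
    return page_table[page] * PAGE_SIZE + offset;  /* frame * page size + offset */
}

int main(void) {
    uint32_t logical = 2 * PAGE_SIZE + 100;        /* page 2, offset 100 */
    printf("logical %u -> physical %u\n", logical, translate(logical)); /* frame 7 -> 28772 */
    return 0;
}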
10. Explain FIFO page replacement algorithm with an example
FIFO Page Replacement Algorithm
The First-In-First-Out (FIFO) page replacement algorithm replaces the page that has been in
memory the longest when a page fault occurs. It uses a queue to manage the pages, where:
The oldest page is at the front of the queue (to be removed first).
The new page is added to the rear of the queue.
Steps of FIFO Page Replacement
1. Maintain a queue to store pages currently in memory.
2. On a page fault (when a required page is not in memory):
o If there is space in the memory, load the page.
o If memory is full, remove the oldest page (front of the queue) and load the new page.
3. Update the queue accordingly.
Example: Consider the following sequence of page requests:
Pages: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
Assume the memory can hold 3 pages at a time.
Step  Page Request  Memory State  Page Fault?  Explanation
1     1             [1]           Yes          Page 1 loaded into memory.
2     2             [1, 2]        Yes          Page 2 added, memory not full yet.
3     3             [1, 2, 3]     Yes          Page 3 added, memory now full.
4     4             [4, 2, 3]     Yes          Page 1 (oldest) replaced by page 4.
5     1             [4, 1, 3]     Yes          Page 2 replaced by page 1.
6     2             [4, 1, 2]     Yes          Page 3 replaced by page 2.
7     5             [5, 1, 2]     Yes          Page 4 replaced by page 5.
8     1             [5, 1, 2]     No           Page 1 already in memory.
9     2             [5, 1, 2]     No           Page 2 already in memory.
10    3             [5, 3, 2]     Yes          Page 1 (oldest) replaced by page 3.
11    4             [5, 3, 4]     Yes          Page 2 replaced by page 4.
12    5             [5, 3, 4]     No           Page 5 already in memory.
Result
Total Page Requests: 12
Total Page Faults: 9
Page Hits: 3 (requests for pages 1, 2, and 5 that were already in memory at steps 8, 9, and 12).
Key Points
1. Simple but not always efficient.
2. May replace a page that will be used soon (no future consideration).
3. Suffers from Belady's Anomaly, where increasing memory size can increase page faults.
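The trace above can be checked with a short C simulation of FIFO replacement (3 frames, the same 12-request reference string); it reports 9 faults and 3 hits, matching the result.
#include <stdio.h>

#define FRAMES 3

int main(void) {
    int refs[] = { 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 };
    int n = sizeof refs / sizeof refs[0];
    int memory[FRAMES] = { -1, -1, -1 };   /* -1 means an empty frame */
    int next = 0;                          /* index of the oldest frame (FIFO pointer) */
    int faults = 0;

    for (int i = 0; i < n; i++) {
        int page = refs[i], hit = 0;
        for (int f = 0; f < FRAMES; f++)
            if (memory[f] == page) { hit = 1; break; }
        if (!hit) {
            memory[next] = page;           /* replace the oldest page */
            next = (next + 1) % FRAMES;    /* advance the FIFO pointer */
            faults++;
        }
        printf("request %d -> %s\n", page, hit ? "hit" : "fault");
    }
    printf("total faults: %d, hits: %d\n", faults, n - faults);
    return 0;
}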
Each process has a section of code, called a critical section, in which the process may be
changing common variables, updating a table, writing a file, and so on. The important feature of
the system is that when one process is executing its critical section, no other process is
allowed to execute in its critical section. Thus, the execution of critical sections by processes is
mutually exclusive in time.
Requirements to Solve the Problem:
1. Mutual Exclusion: Only one process can access the critical section at a time.
2. Progress: If no process is in the critical section, any process wanting to enter should eventually
get a chance.
3. Bounded Waiting: A process should not wait indefinitely to enter the critical section.
Solution to the Two-Process Critical Section Problem
Peterson’s Algorithm
Peterson’s Algorithm provides a software-based solution for two processes using:
Two flags (flag[i] and flag[j]) to indicate whether a process wants to enter the critical section.
A turn variable to decide which process has priority.
Process i's pseudocode:
flag[i] = true; // Indicate process i wants to enter
turn = j; // Let process j go first if it also wants to enter
while (flag[j] && turn == j) {
// Wait until process j finishes
}
// Critical Section
...
flag[i] = false; // Indicate process i is leaving
Issues with this Solution:
1. Limited to Two Processes: It doesn’t scale to more than two processes.
2. Busy Waiting: Wastes CPU cycles while waiting for the other process.
3. Hardware Dependency: Inefficient on modern architectures with relaxed memory consistency
models.
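For reference, here is a minimal runnable C11 sketch of Peterson's algorithm with two POSIX threads. The thread bodies and the shared counter are invented for the example; sequentially consistent atomics stand in for the memory fences that the hardware-dependency issue above refers to.
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static atomic_bool flag[2];      /* flag[i]: thread i wants to enter */
static atomic_int  turn;         /* which thread defers to the other */
static int shared_counter = 0;   /* protected by the critical section */

static void lock(int i) {
    int j = 1 - i;
    atomic_store(&flag[i], true);                    /* announce intent */
    atomic_store(&turn, j);                          /* give the other thread priority */
    while (atomic_load(&flag[j]) && atomic_load(&turn) == j)
        ;                                            /* busy-wait */
}

static void unlock(int i) {
    atomic_store(&flag[i], false);                   /* leave the critical section */
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int k = 0; k < 100000; k++) {
        lock(id);
        shared_counter++;                            /* critical section */
        unlock(id);
    }
    return NULL;
}

int main(void) {
    pthread_t t0, t1;
    int id0 = 0, id1 = 1;
    pthread_create(&t0, NULL, worker, &id0);
    pthread_create(&t1, NULL, worker, &id1);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %d (expected 200000)\n", shared_counter);
    return 0;
}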
Producer-Consumer Problem
The Producer-Consumer Problem involves two types of processes:
Producer: Produces data and places it in a buffer.
Consumer: Consumes data from the buffer.
Problem:
If the buffer is full, the producer must wait.
If the buffer is empty, the consumer must wait.
Solution Using Semaphores:
1. Shared Variables:
o mutex (binary semaphore): Ensures mutual exclusion during buffer access.
o empty (counting semaphore): Counts the number of empty slots in the buffer.
o full (counting semaphore): Counts the number of filled slots in the buffer.
2. Producer Code:
wait(empty); // Wait if no empty slots
wait(mutex); // Enter critical section
// Add item to buffer
signal(mutex); // Exit critical section
signal(full); // Increment count of filled slots
3. Consumer Code:
wait(full); // Wait if buffer is empty
wait(mutex); // Enter critical section
// Remove item from buffer
signal(mutex); // Exit critical section
signal(empty); // Increment count of empty slots
This ensures mutual exclusion and synchronization between producer and consumer.
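A runnable C sketch of this solution using POSIX semaphores and pthreads follows; the buffer size, item count, and names (empty_slots, full_slots) are illustrative choices, not fixed by the problem.
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUFFER_SIZE 5
#define NUM_ITEMS   10

static int buffer[BUFFER_SIZE];
static int in = 0, out = 0;          /* next write / read positions */

static sem_t empty_slots;            /* counts empty slots, starts at BUFFER_SIZE */
static sem_t full_slots;             /* counts filled slots, starts at 0 */
static sem_t mutex;                  /* binary semaphore for buffer access */

static void *producer(void *arg) {
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&empty_slots);      /* wait(empty): block if no empty slot */
        sem_wait(&mutex);            /* wait(mutex): enter critical section */
        buffer[in] = i;              /* add item to buffer */
        in = (in + 1) % BUFFER_SIZE;
        sem_post(&mutex);            /* signal(mutex): exit critical section */
        sem_post(&full_slots);       /* signal(full): one more filled slot */
    }
    return NULL;
}

static void *consumer(void *arg) {
    for (int i = 0; i < NUM_ITEMS; i++) {
        sem_wait(&full_slots);       /* wait(full): block if buffer empty */
        sem_wait(&mutex);
        int item = buffer[out];      /* remove item from buffer */
        out = (out + 1) % BUFFER_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);      /* signal(empty): one more empty slot */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFFER_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}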
Summary
The critical section problem is about managing shared resources safely.
Requirements include mutual exclusion, progress, and bounded waiting.
Peterson’s Algorithm solves the problem for two processes but has limitations.
The producer-consumer problem demonstrates practical synchronization challenges and
solutions using semaphores.
Mutexes and semaphores are effective software tools for solving critical section problems in modern systems.
Paging vs. Segmentation
Paging                                             Segmentation
Divides a program into fixed-size pages.           Divides a program into variable-size segments.
Logical address = page number + page offset.       Logical address = segment number + segment offset.
The page table maintains the page information.     The segment table maintains the segment information.
Page table entry = frame number + flag bits.       Segment table entry = base address + protection bits.
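To make the address formats concrete, here is a small C sketch of the segmentation side: a (segment number, offset) pair is translated through a segment table holding a base address and a limit used for protection. The table values are invented for the example.
#include <stdint.h>
#include <stdio.h>

struct segment {                 /* one segment-table entry */
    uint32_t base;               /* starting physical address */
    uint32_t limit;              /* segment length in bytes */
};

static struct segment seg_table[] = {
    { 1000, 400 },               /* segment 0 */
    { 6300, 1200 },              /* segment 1 */
};

/* Translate (segment, offset); returns -1 on a protection (limit) violation. */
static long translate(uint32_t seg, uint32_t offset) {
    if (offset >= seg_table[seg].limit)
        return -1;                               /* trap: offset out of bounds */
    return (long)seg_table[seg].base + offset;   /* base + offset */
}

int main(void) {
    printf("seg 1, offset 53  -> %ld\n", translate(1, 53));   /* 6353 */
    printf("seg 0, offset 500 -> %ld\n", translate(0, 500));  /* -1 (violation) */
    return 0;
}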
14. Methods of File Allocation
There are three methods:
1. Contiguous Allocation
2. Linked Allocation
3. Indexed Allocation
1. Contiguous Allocation
Files are stored in continuous blocks on the disk.
Access time is fast because blocks are located together.
Directory stores the starting address and length of the file.
Advantages:
Easy to implement.
Minimal seek time and better I/O performance.
Disadvantages:
Difficult to find continuous free space for large files.
File size needs to be known in advance.
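A one-function C sketch of contiguous allocation: with the directory storing only the start block and length, logical block b maps to disk block start + b (values invented).
#include <stdio.h>

struct file_entry {              /* directory entry for contiguous allocation */
    int start;                   /* first disk block of the file */
    int length;                  /* number of blocks */
};

/* Map logical block b of the file to its disk block; -1 if out of range. */
static int disk_block(struct file_entry f, int b) {
    if (b < 0 || b >= f.length)
        return -1;
    return f.start + b;          /* blocks are contiguous, so simple addition */
}

int main(void) {
    struct file_entry f = { 14, 3 };                            /* blocks 14, 15, 16 */
    printf("logical block 2 -> disk block %d\n", disk_block(f, 2));  /* 16 */
    return 0;
}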
2. Linked Allocation
Files are stored as a linked list of disk blocks scattered anywhere on the disk.
Each block has a pointer to the next block.
Directory stores pointers to the first and last blocks of the file.
Advantages:
No external fragmentation.
File size can grow dynamically as long as free blocks are available.
Disadvantages:
Works well for sequential access only.
Additional space is needed for pointers in each block.
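A matching sketch for linked allocation: finding logical block b means following b pointers from the first block, which is why only sequential access works well (block numbers invented).
#include <stdio.h>

#define NUM_BLOCKS 32
#define END (-1)

/* next[b] holds the pointer stored in block b: the next block of the file,
   or END for the last block. */
static int next[NUM_BLOCKS];

/* Follow the chain from the first block to reach logical block b. */
static int disk_block(int start, int b) {
    int cur = start;
    while (b-- > 0 && cur != END)
        cur = next[cur];         /* one pointer hop per block: no direct access */
    return cur;
}

int main(void) {
    /* the file occupies blocks 9 -> 16 -> 1 -> 25 */
    next[9] = 16; next[16] = 1; next[1] = 25; next[25] = END;
    printf("logical block 2 -> disk block %d\n", disk_block(9, 2));  /* 1 */
    return 0;
}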
3. Indexed Allocation
Each file has an index block that contains pointers to all the file's disk blocks.
Directory stores pointers to the index blocks of files.
Advantages:
Solves problems of contiguous and linked allocation.
Allows direct access to file blocks.
Disadvantages:
Requires extra space for index blocks.
Overhead increases with large files.
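And for indexed allocation: the index block is an array of block pointers, so any logical block is reached with a single lookup (block numbers invented).
#include <stdio.h>

/* The index block holds one pointer per data block of the file. */
static int index_block[] = { 9, 16, 1, 10, 25 };

/* Direct access: logical block b is found with a single lookup. */
static int disk_block(int b) {
    int n = sizeof index_block / sizeof index_block[0];
    return (b >= 0 && b < n) ? index_block[b] : -1;
}

int main(void) {
    printf("logical block 3 -> disk block %d\n", disk_block(3));  /* 10 */
    return 0;
}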