Operating System Security 2
Uploaded by noorax2003

Operating system security

Lecture 1.
What is an Operating System?
 A program that manages the computer hardware and acts as an intermediary
between the computer user and that hardware.

Its goals:
 Make the user's interaction with the computer convenient.
 Use the computer hardware in an efficient manner.

The four main components of a computer system:


1. Users
2. System and application programs
3. Operating system
4. Hardware

Computer System Structure:


a. Four Components of a Computer System. (Diagram)

What do Operating Systems do?


a) User View:
 Mouse/keyboard/monitor
 These are the devices through which the user interacts with the computer's
software.
 Users don't care about resource utilization.

b) Computer Systems View:


 It manages system resources.
 Manages the execution of programs.

Computer System Organization and Operation:


I. Organization:
 One or more CPUs and device controllers connect through a common bus
that provides access to shared memory.
 Each controller is in charge of a particular type of device.

II. Operation:
 I/O devices and CPU execute in parallel.
 Each device has a buffer.

 CPU moves data from buffer to main memory.

Storage:
Storage Definitions:
 Devices that provide space for the computer to save data.

Structure:
1. Main memory: the only large storage medium that the CPU can access directly
(volatile: RAM)
2. Secondary storage: an extension of main memory that is non-volatile and provides
large storage capacity (HDD, SSD)
a. HDD: hard disk drive; contains rigid platters divided into tracks and
sectors.
b. SSD: solid-state drive; faster than an HDD, uses newer technology, and is
increasingly popular.

Hierarchy:
 There are three points to keep in mind:
1. speed
2. cost
3. volatility

Caching:

 Caching is the process of copying information from slower storage into a faster
storage (the cache).
 This speeds up tasks by reducing the need to fetch data from slower storage.
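As a rough illustration, a look-aside cache can be sketched in a few lines. The `slow_fetch` function and the dictionary cache are hypothetical stand-ins for slow storage and fast storage, not anything from the lecture:

```python
# Hypothetical look-aside cache: check fast storage first,
# fall back to slow storage on a miss.

cache = {}

def slow_fetch(key):
    # Stand-in for slow storage (disk, network, ...).
    return key * 2

def cached_fetch(key):
    if key not in cache:
        cache[key] = slow_fetch(key)  # copy into the faster storage
    return cache[key]
```

A second call with the same key is then served from the cache without touching the slow path.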

I/O Structure:
 Device controllers move data between devices and storage.
 The OS uses device drivers to control these controllers.
 When you start an operation (like copying a file), the device controller handles the
transfer and notifies the OS when it's done.

Direct Memory Access (DMA) Structure:


 DMA is used for fast devices that can transfer data directly to memory without involving
the CPU.
 This reduces the number of interruptions and speeds up data transfers.

Lecture 2.
Single Processor Computer-System Architecture vs. Multi-Processor Computer-System
Architecture:

Single processor:
 A system with one main CPU that executes a general-purpose instruction
set, including user processes (mouse and keyboard input).
 It may also contain special-purpose processors (such as device controllers), which
do not run user processes.
 If they are managed by the OS, the OS sends them their next task and monitors
their status.

Multi-processor:
 Multi-processor systems, also called parallel or multicore systems, have two or more
processors working together.
 They share resources like memory and the computer's bus.
 These systems offer advantages like faster performance, cost-effectiveness, and better
reliability.

Multi-processor Systems. (Diagram)


Symmetric Vs. Asymmetric Multiprocessing:
 Symmetric multiprocessing assigns tasks dynamically to each processor.
 Asymmetric multiprocessing dedicates specific tasks to each processor.

Symmetric Multiprocessing Architecture:


A Dual-Core Design:

Operating System Structure:


Multiprogramming:
 In multiprogramming, the OS keeps several programs in memory simultaneously.
 This ensures the CPU always has something to do, increasing efficiency.

Timesharing (Multitasking):
 Timesharing lets the CPU switch between multiple tasks quickly.
 This creates interactive computing, allowing users to interact with programs as they run.

Operating-System Operations:
Transition from User to Kernel Mode. (Diagram)
 Think of the computer like a house with two doors: one for regular people (users) and
one for special people (the operating system).
 The operating system is like the boss of the house, with special powers.
 When a regular person wants something special done, they knock on the door of the
operating system.
 The timer is like a clock that prevents someone from staying too long inside, like a guest
who can't overstay their welcome.
 It's set to remind everyone to take turns using the special door, so no one hogs all the
resources.
 The operating system sets this timer and keeps track of when it's time for someone else
to come in.
Process Management:
 A process is a program in execution
 Program is a passive entity
 Process is an active entity
 It needs resources to accomplish its task, e.g. memory, I/O, files.
 Process termination: the OS reclaims the process's reusable resources.
 Single-threaded process: has one program counter specifying location of next instruction
to execute.
 Multi-threaded process has one program counter per thread

Memory Management:
 Memory management ensures programs have the memory they need to run.
 It involves keeping track of which parts of memory are in use and allocating space as
needed.

Caching(Migration of data “A” from Disk to Register):


 Caching stores frequently used data in faster storage for quick access.
 It helps speed up tasks by reducing the need to fetch data from slower storage.

Protection and Security:


 Protection controls access to resources like files or devices.
 Security defends the system against attacks, both internal and external.

Computing Environments:
a. Distributed:
 In distributed computing, separate systems work together over a network.
 This increases computation speed, functionality, and data availability.

b. Client-Server:
 Client-server computing involves servers responding to requests from clients.
 Servers can be compute servers (providing services like databases) or file servers
(for storing and retrieving files).

c. Virtualization:
 Virtualization lets operating systems run applications within other OSes.
 It's used for tasks like running multiple OSes on a single machine or testing
software compatibility.

d. Cloud Computing:
 Cloud computing delivers computing services over a network.
 It includes different models like Software as a Service (SaaS) or Infrastructure as a
Service (IaaS).

Open-Source Operating Systems. (Example)


 Open-source OSes provide their source code, allowing users to modify and distribute it
freely.
 Examples include GNU/Linux and BSD UNIX, offering alternatives to closed-source
systems.

Lecture 3.
Process Concept and Process in Memory:

Process Concept:
 An operating system runs various programs, which can be jobs in batch systems or user
tasks in time-shared systems.
 A process is a program in execution, progressing through instructions sequentially.
 It includes components like the program counter, stack, data section, and heap.

Process in Memory:
 A program becomes a process when loaded into memory, usually by double-clicking an
icon or entering a command.
 Even if multiple processes come from the same program, they're separate and have their
own resources.
 Processes can also spawn other processes as they run.

Process State. (Diagram)


 Processes have different states like new, ready, running, waiting, or terminated.
Process Control Block (PCB):
 A Process Control Block (PCB) holds important information about each process, including
its state, program counter, CPU registers, and more.

Process Scheduling Queues:

 Job queue – set of all processes in the system.


 Ready queue – set of all processes in main memory, ready and waiting to execute.
 Device queues – set of processes waiting for an I/O device.
 Processes migrate among the various queues.

Schedulers:
 Long-term schedulers load processes from storage into memory.
 Short-term schedulers select processes ready for execution and allocate the CPU to
them.
 Medium-term schedulers manage swapping processes in and out of memory as needed.

Context Switch:
 Interrupts trigger a context switch, where the CPU switches from one process to
another.
 This involves saving the current state of the old process and loading the state of the new
process.

Process Creation and Termination:


 Processes can create children processes, forming a process tree.
 They share resources based on different models like parent and child executing
concurrently or the parent waiting for the child to terminate.
 Process termination occurs when a process finishes execution, deallocating resources.

Interprocess Communication:
a. Communications Models: (Diagram)
 Processes communicate and synchronize their actions through message passing
or shared memory.
 Message passing involves sending and receiving messages between processes,
either synchronously or asynchronously.
 Shared memory allows processes to share data by accessing the same memory
locations.
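The message-passing model above can be sketched with a thread-safe queue acting as the mailbox. Threads stand in for processes here, since real interprocess message passing goes through OS facilities such as pipes or sockets:

```python
import queue
import threading

# Sketch of message passing: the mailbox is a thread-safe queue,
# and two threads play the roles of the communicating processes.

mailbox = queue.Queue()
result = []

def sender():
    mailbox.put("hello")          # send(message)

def receiver():
    result.append(mailbox.get())  # receive() blocks until a message arrives

t_recv = threading.Thread(target=receiver)
t_send = threading.Thread(target=sender)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
```

The blocking `get()` gives the synchronous (rendezvous-like) flavor; an asynchronous receive would poll with `get_nowait()` instead.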

Lecture 4.
CPU Scheduling and CPU Scheduler:
 CPU scheduling is crucial for multi-programmed operating systems, enhancing
computer productivity by switching the CPU among processes.
 Multiprogramming maximizes CPU utilization.
 Process execution involves alternating CPU execution and I/O wait cycles.

 Short-Term Scheduler:

o The short-term scheduler selects processes from the ready queue and
assigns the CPU to one of them.
o Decisions occur when processes switch states or terminate.
o Scheduling can be preemptive or nonpreemptive.

Dispatcher:
 The dispatcher module transfers CPU control to the selected process, involving context
switching and mode switching.

Scheduling Criteria:
i. Scheduling Algorithm Optimization Criteria:
o Optimization criteria include maximizing CPU utilization, throughput,
minimizing turnaround time, waiting time, and response time.
o Average measures are typically optimized.

CPU-Scheduling algorithms:
a. First-Come, First-Served (FCFS) Scheduling:

      Process   Burst Time
      P1        24
      P2        3
      P3        3

 Suppose that the processes arrive in the order: P1, P2, P3.
 The Gantt chart for the schedule is:

      | P1                     | P2 | P3 |
      0                        24   27   30

 Waiting time for P1 = 0; P2 = 24; P3 = 27
 Average waiting time: (0 + 24 + 27)/3 = 17

 Suppose that the processes arrive in the order: P2, P3, P1.
 The Gantt chart for the schedule is:

      | P2 | P3 | P1                     |
      0    3    6                        30

 Waiting time for P1 = 6; P2 = 0; P3 = 3
 Average waiting time: (6 + 0 + 3)/3 = 3
 Much better than the previous case.
 Convoy effect: short processes stuck behind a long process.
   Consider one CPU-bound process and many I/O-bound processes.
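The waiting-time arithmetic above can be checked with a short sketch (assuming, as in the example, that all processes arrive at time 0):

```python
# FCFS: each process waits until all earlier arrivals have finished.

def fcfs_waiting_times(bursts):
    waits, clock = [], 0
    for burst in bursts:
        waits.append(clock)   # waiting time = total burst of earlier processes
        clock += burst
    return waits

# Order P1, P2, P3:
waits = fcfs_waiting_times([24, 3, 3])   # [0, 24, 27]
avg = sum(waits) / len(waits)            # 17.0

# Order P2, P3, P1:
waits2 = fcfs_waiting_times([3, 3, 24])  # [0, 3, 6]
avg2 = sum(waits2) / len(waits2)         # 3.0
```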

b. Shortest-Job-First (SJF) Scheduling:


 Selects the process with the shortest CPU burst.
 Optimal for minimizing average waiting time.
 Difficulties arise in predicting burst times.

      Process   Arrival Time   Burst Time
      P1        0.0            6
      P2        2.0            8
      P3        4.0            7
      P4        5.0            3

 SJF scheduling chart:

      | P4 | P1     | P3      | P2      |
      0    3        9         16        24

 Average waiting time = (3 + 16 + 9 + 0) / 4 = 7
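A minimal sketch of the non-preemptive SJF computation above (treating all four processes as available at time 0, which is what the chart does despite the listed arrival times):

```python
# Non-preemptive SJF with all processes available at time 0:
# simply run the jobs in order of increasing burst time.

def sjf_waiting_times(procs):
    # procs: list of (name, burst_time) pairs
    waits, clock = {}, 0
    for name, burst in sorted(procs, key=lambda p: p[1]):
        waits[name] = clock
        clock += burst
    return waits

waits = sjf_waiting_times([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
avg = sum(waits.values()) / len(waits)  # (3 + 16 + 9 + 0) / 4 = 7.0
```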

c. Shortest-Remaining-Time-First:
 Now we add the concepts of varying arrival times and preemption to the analysis.

      Process   Arrival Time   Burst Time
      P1        0              8
      P2        1              4
      P3        2              9
      P4        3              5

 Preemptive SJF Gantt chart:

      | P1 | P2    | P4     | P1      | P3       |
      0    1       5        10        17         26

 Average waiting time = [(10-1) + (1-1) + (17-2) + (5-3)]/4 = 26/4 = 6.5 msec
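The preemptive schedule above can be reproduced by simulating one time unit at a time and always running the ready process with the least remaining work (a sketch, not an efficient implementation):

```python
# Shortest-remaining-time-first, simulated one time unit at a time.

def srtf_waiting_times(procs):
    # procs: list of (name, arrival_time, burst_time)
    remaining = {n: b for n, a, b in procs}
    arrival = {n: a for n, a, b in procs}
    finish, clock = {}, 0
    while remaining:
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:
            clock += 1
            continue
        n = min(ready, key=lambda p: remaining[p])  # least remaining time
        remaining[n] -= 1
        clock += 1
        if remaining[n] == 0:
            del remaining[n]
            finish[n] = clock
    bursts = {n: b for n, a, b in procs}
    # waiting time = finish - arrival - burst
    return {n: finish[n] - arrival[n] - bursts[n] for n in finish}

waits = srtf_waiting_times(
    [("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)]
)
avg = sum(waits.values()) / len(waits)  # 26 / 4 = 6.5
```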

d. Priority Scheduling:

 Each process is assigned a priority.
 The CPU is allocated to the process with the highest priority (here, the smallest
priority number).
 Problem → Starvation: low-priority processes may never execute.
 Solution → Aging: as time progresses, increase the priority of waiting processes.

      Process   Burst Time   Priority
      P1        10           3
      P2        1            1
      P3        2            4
      P4        1            5
      P5        5            2

 Priority scheduling Gantt chart:

      | P2 | P5   | P1       | P3 | P4 |
      0    1      6          16   18   19

 Average waiting time = (6 + 0 + 16 + 18 + 1)/5 = 8.2 msec

e. Round Robin (RR):


 Each process gets a small time quantum (q) of CPU time.
 Preemptive, with a timer interrupt every quantum to schedule the next process.
 Performance depends on the quantum size and context switch time.
a. q large → behaves like FIFO
b. q small → q must still be large relative to the context switch time, otherwise the
overhead is too high
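Round Robin can be sketched with a ready queue that each process rejoins until its burst is used up. Applying it to the FCFS bursts above with q = 4 (the quantum value is an illustrative choice, all arrivals assumed at time 0):

```python
from collections import deque

# Round Robin: run each process for at most q time units,
# then move it to the back of the ready queue.

def rr_waiting_times(procs, q):
    # procs: list of (name, burst_time); all arrive at time 0
    remaining = dict(procs)
    finish, clock = {}, 0
    ready = deque(name for name, _ in procs)
    while ready:
        name = ready.popleft()
        run = min(q, remaining[name])
        clock += run
        remaining[name] -= run
        if remaining[name] > 0:
            ready.append(name)      # quantum expired, requeue
        else:
            finish[name] = clock
    bursts = dict(procs)
    # waiting time = finish - burst (arrival is 0)
    return {n: finish[n] - bursts[n] for n in finish}

waits = rr_waiting_times([("P1", 24), ("P2", 3), ("P3", 3)], q=4)
```

The short jobs P2 and P3 finish early (waits of 4 and 7) instead of P2 waiting 24 units behind P1 as under FCFS.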

i. Time Quantum and Context Switch Time:

Lecture 5.
Resources Overview:
 A resource is something a process uses, often limited.
 Examples include printers, semaphores, tables in databases.
 Processes need access to resources in a reasonable order.
 Resources can be preemptable (taken away without issues) or nonpreemptable (removal
causes failure).

System Model:
 System comprises various resource types (R1, R2, ..., Rm) like CPU cycles, memory, I/O
devices.
 Each resource type has a fixed number of instances (Wi).
 Sequence to use a resource: request, use, release.
Resource Usage:
 Denied resource requests lead to options:
 Block and wait.
 Continue without it.
 Process fails with an error code.

Understanding Deadlocks:
 Deadlocks occur when processes are mutually blocked due to exclusive access to
resources.
 Each deadlocked process waits for a resource held by another.

Characteristics of Deadlocks:
 Mutual exclusion: Only one process can use a resource at a time.
 Hold and wait: Processes hold resources while waiting for others.
 No preemption: Resources can't be forcefully taken.
 Circular wait: Processes form a circular chain, each waiting for the next's resource.
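The circular-wait condition can be checked mechanically by looking for a cycle in a wait-for graph. A minimal sketch (the graph shapes and process names are illustrative):

```python
# Wait-for graph: an edge A -> B means "process A waits for a
# resource held by process B". A cycle means circular wait.

def has_cycle(graph):
    # Depth-first search with a recursion stack (standard cycle detection).
    visited, on_stack = set(), set()

    def dfs(node):
        visited.add(node)
        on_stack.add(node)
        for nxt in graph.get(node, []):
            if nxt in on_stack or (nxt not in visited and dfs(nxt)):
                return True
        on_stack.discard(node)
        return False

    return any(dfs(n) for n in graph if n not in visited)

deadlocked = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]})  # True
ok = has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []})              # False
```

This is the core of the "detect and recover" option mentioned below: build the wait-for graph from current allocations and requests, then search it for cycles.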

Dealing with Deadlocks:


 Options for the OS include:
 Ignoring the problem (risky).
 Detecting and recovering from deadlocks.
 Dynamically avoiding deadlock via careful resource allocation.
 Preventing deadlocks by removing one of the four necessary conditions.

Lecture 6.
What is Swapping?
 Swapping involves temporarily moving a process out of memory to a disk and then
bringing it back when needed.
 It allows the total memory usage to exceed physical memory, increasing the degree of
multiprogramming.

Backing Store:
 A fast disk serves as the backing store, large enough to hold copies of memory images
for all users.
 It provides direct access to these memory images.
How Swapping Works:
 The system maintains a ready queue of processes with memory images on disk.
 Swap out, swap in: Lower-priority processes are swapped out to make room for higher-
priority ones.
 Transfer time is a significant part of swapping, directly proportional to the memory size
being swapped.

Schematic View of Swapping:

Context Switch Time with Swapping:


 If the next process isn't in memory, a swap out and a swap in are needed.
 This can make the context switch time very high.
 For example, swapping a 100MB process at a 50MB/sec transfer rate takes 2
seconds each way, so about 4 seconds for the swap out plus the swap in.
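The arithmetic can be captured in a small helper (assuming, as above, that the full memory image is transferred in both directions):

```python
# Estimate of context switch cost with swapping: one full transfer
# out plus one full transfer in.

def swap_time(image_mb, rate_mb_per_s):
    one_transfer = image_mb / rate_mb_per_s
    return 2 * one_transfer  # swap out + swap in

total = swap_time(100, 50)  # 2 s out + 2 s in = 4.0 seconds
```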

Optimizing Swapping:
 Reduce memory size being swapped by knowing actual memory usage.
 Constraints like pending I/O prevent swapping.
 Modern OSes use modified swapping techniques, swapping only when free memory is
critically low.
 They may also swap portions of processes instead of entire processes to decrease swap
time.
