Operating System Security 2
Lecture 1.
What is an Operating System?
An operating system is a program that manages computer hardware and connects the computer user to the computer hardware.
Its goals:
to make the user's interaction with the computer convenient
to use computer hardware in an efficient manner.
Operation:
I/O devices and CPU execute in parallel.
Each device has a buffer.
Storage:
Storage Definitions:
A storage device provides space for the computer to save data.
Structure:
1. Main memory: large storage medium that the CPU can access directly (volatile—RAM)
2. Secondary storage: an extension of main memory that is non-volatile and used for large storage capacity (HDD, SSD)
a. HDD: hard disk drive which has discs that are divided into different
sectors.
b. SSD: solid-state drive, faster than an HDD with more advanced technology; more popular these days.
Hierarchy:
There are three points to keep in mind:
1. speed
2. cost
3. volatility
Caching:
Caching is the process of moving information from slower storage into a faster storage cache.
This helps speed up tasks by reducing the need to fetch data from slower storage.
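The caching idea above can be sketched in a few lines. This is a toy illustration, not an OS implementation: a plain dict stands in for slow storage, another for the fast cache, and the names are made up for the example.

```python
# Toy sketch of caching: before fetching from "slow storage"
# (a dict standing in for disk), check the fast cache first.
slow_storage = {"a": 1, "b": 2, "c": 3}   # stands in for disk
cache = {}                                 # stands in for fast cache memory

def read(key):
    if key in cache:                       # cache hit: fast path
        return cache[key]
    value = slow_storage[key]              # cache miss: go to slow storage
    cache[key] = value                     # keep a copy for next time
    return value

print(read("a"))   # first access: miss, fetched from slow storage
print(read("a"))   # second access: hit, served from the cache
```

Real caches also bound their size and evict old entries; that policy is omitted here to keep the hit/miss logic visible.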
I/O Structure:
Device controllers move data between devices and storage.
The OS uses device drivers to control these controllers.
When you start an operation (like copying a file), the device controller handles the
transfer and notifies the OS when it's done.
Lecture 2.
Single Processor Computer-System Architecture vs. Multi-Processor Computer-System
Architecture:
Single processor:
A single-processor system has one main CPU that executes general-purpose instruction sets, including user processes (e.g. handling mouse and keyboard input).
Such systems may also contain special-purpose processors (such as device controllers), which do not run user processes.
If these are managed by the OS, the OS sends them their next task and monitors their status.
Multi-processor:
Multi-processor systems, also called parallel or multicore systems, have two or more
processors working together.
They share resources like memory and the computer's bus.
These systems offer advantages like faster performance, cost-effectiveness, and better
reliability.
Timesharing (Multitasking):
Timesharing lets the CPU switch between multiple tasks quickly.
This creates interactive computing, allowing users to interact with programs as they run.
Operating-System Operations:
Transition from User to Kernel Mode. (Diagram)
Think of the computer like a house with two doors: one for regular people (users) and
one for special people (the operating system).
The operating system is like the boss of the house, with special powers.
When a regular person wants something special done, they knock on the door of the
operating system.
The timer is like a clock that prevents someone from staying too long inside, like a guest
who can't overstay their welcome.
It's set to remind everyone to take turns using the special door, so no one hogs all the
resources.
The operating system sets this timer and keeps track of when it's time for someone else
to come in.
Process Management:
A process is a program in execution.
A program is a passive entity; a process is an active entity.
A process needs resources to accomplish its task, e.g. memory, I/O devices, and files.
Process termination: the OS reclaims the process's reusable resources.
A single-threaded process has one program counter specifying the location of the next instruction to execute.
A multi-threaded process has one program counter per thread.
Memory Management:
Memory management ensures programs have the memory they need to run.
It involves keeping track of which parts of memory are in use and allocating space as
needed.
Computing Environments:
a. Distributed:
In distributed computing, separate systems work together over a network.
This increases computation speed, functionality, and data availability.
b. Client-Server:
Client-server computing involves servers responding to requests from clients.
Servers can be compute servers (providing services like databases) or file servers
(for storing and retrieving files).
c. Virtualization:
Virtualization lets operating systems run applications within other OSes.
It's used for tasks like running multiple OSes on a single machine or testing
software compatibility.
d. Cloud Computing:
Cloud computing delivers computing services over a network.
It includes different models like Software as a Service (SaaS) or Infrastructure as a
Service (IaaS).
Lecture 3.
Process Concept and Process in Memory:
Process Concept:
An operating system runs various programs, which can be jobs in batch systems or user
tasks in time-shared systems.
A process is a program in execution, progressing through instructions sequentially.
It includes components like the program counter, stack, data section, and heap.
Process in Memory:
A program becomes a process when loaded into memory, usually by double-clicking an
icon or entering a command.
Even if multiple processes come from the same program, they're separate and have their
own resources.
Processes can also spawn other processes as they run.
Schedulers:
Long-term schedulers load processes from storage into memory.
Short-term schedulers select processes ready for execution and allocate the CPU to
them.
Medium-term schedulers manage swapping processes in and out of memory as needed.
Context Switch:
Interrupts trigger a context switch, where the CPU switches from one process to
another.
This involves saving the current state of the old process and loading the state of the new
process.
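The save/load step can be pictured with a toy model. Here the CPU state and the process control blocks are plain dicts, and the fields (program counter, registers) are illustrative, not an exhaustive PCB.

```python
# Toy sketch of a context switch: save the old process's CPU state into
# its process control block (PCB), then load the new process's saved state.
def context_switch(cpu, old_pcb, new_pcb):
    old_pcb.update(cpu)        # save state of the old process
    cpu.update(new_pcb)        # load state of the new process

cpu = {"pc": 104, "registers": [1, 2]}       # CPU currently running P1
pcb_p1 = {}                                  # P1's PCB, about to be filled
pcb_p2 = {"pc": 500, "registers": [9, 9]}    # P2's previously saved state

context_switch(cpu, pcb_p1, pcb_p2)
print(cpu)     # CPU now holds P2's state
print(pcb_p1)  # P1's state is saved for later
```

The time spent doing this save/load is pure overhead: the system does no useful work for either process during the switch.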
Interprocess Communication:
a. Communications Models: (Diagram)
Processes communicate and synchronize their actions through message passing
or shared memory.
Message passing involves sending and receiving messages between processes,
either synchronously or asynchronously.
Shared memory allows processes to share data by accessing the same memory
locations.
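The message-passing model can be sketched with two threads and a queue: instead of touching shared variables, one side sends and the other receives. The names here are illustrative; `queue.Queue` stands in for the OS message channel.

```python
# Sketch of message passing between two "processes" (threads here):
# the producer sends a message, the consumer blocks until it arrives.
import queue
import threading

mailbox = queue.Queue()            # the message channel

def producer():
    mailbox.put("hello")           # send a message

def consumer(result):
    result.append(mailbox.get())   # receive: blocks until a message arrives

result = []
t_recv = threading.Thread(target=consumer, args=(result,))
t_send = threading.Thread(target=producer)
t_recv.start()
t_send.start()
t_send.join()
t_recv.join()
print(result)  # ['hello']
```

In the shared-memory model the two sides would instead read and write the same memory region directly, which is faster but requires explicit synchronization.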
Lecture 4.
CPU Scheduling and CPU Scheduler:
CPU scheduling is crucial for multi-programmed operating systems, enhancing
computer productivity by switching the CPU among processes.
Multiprogramming maximizes CPU utilization.
Process execution involves alternating CPU execution and I/O wait cycles.
Short-Term Scheduler:
o The short-term scheduler selects processes from the ready queue and
assigns the CPU to one of them.
o Decisions occur when processes switch states or terminate.
o Scheduling can be preemptive or nonpreemptive.
Dispatcher:
The dispatcher module transfers CPU control to the selected process, involving context
switching and mode switching.
Scheduling Criteria:
i. Scheduling Algorithm Optimization Criteria:
o Optimization criteria include maximizing CPU utilization, throughput,
minimizing turnaround time, waiting time, and response time.
o Average measures are typically optimized.
CPU-Scheduling algorithms:
a. First-Come, First-Served (FCFS) Scheduling:
Process    Burst Time
P1         24
P2         3
P3         3
Suppose that the processes arrive in the order: P1, P2, P3.
The Gantt chart for the schedule is:
| P1 | P2 | P3 |
0    24   27   30
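Under FCFS each process waits for the total burst time of everything ahead of it. A short sketch computing the waiting times for this example (assuming all three processes arrive at time 0 in the given order):

```python
# FCFS waiting times: each process waits for the sum of earlier bursts.
bursts = {"P1": 24, "P2": 3, "P3": 3}   # in arrival order

start = 0
waiting = {}
for name, burst in bursts.items():      # dicts preserve insertion order
    waiting[name] = start               # wait = when the CPU becomes free
    start += burst

print(waiting)                              # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waiting.values()) / len(waiting)) # average waiting time: 17.0
```

Note how the long P1 burst inflates the average; arriving in the order P2, P3, P1 would give a much lower average waiting time (the "convoy effect").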
c. Shortest-Remaining-Time-First (SRTF) Scheduling:
Now we add the concepts of varying arrival times and preemption to the analysis.
Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5
Preemptive SJF Gantt chart:
| P1 | P2 | P4 | P1 | P3 |
0    1    5    10    17    26
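The preemptive schedule above can be reproduced with a one-tick-at-a-time simulation: at every time unit, the arrived process with the least remaining burst gets the CPU. This is a sketch for this specific example, not a general scheduler.

```python
# Tick-by-tick simulation of shortest-remaining-time-first (preemptive SJF).
procs = {"P1": (0, 8), "P2": (1, 4), "P3": (2, 9), "P4": (3, 5)}  # (arrival, burst)

remaining = {p: burst for p, (arrival, burst) in procs.items()}
finish = {}
t = 0
while remaining:
    ready = [p for p in remaining if procs[p][0] <= t]
    current = min(ready, key=lambda p: remaining[p])  # shortest remaining time
    remaining[current] -= 1                           # run it for one time unit
    t += 1
    if remaining[current] == 0:
        del remaining[current]
        finish[current] = t

print(finish)  # {'P2': 5, 'P4': 10, 'P1': 17, 'P3': 26}
```

The completion times match the Gantt chart: P1 is preempted at t=1 when the shorter P2 arrives, and resumes only after P2 and P4 finish.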
d. Priority Scheduling:
Lecture 5.
Resources Overview:
A resource is something a process uses, often limited.
Examples include printers, semaphores, tables in databases.
Processes need access to resources in a reasonable order.
Resources can be preemptable (taken away without issues) or nonpreemptable (removal
causes failure).
System Model:
System comprises various resource types (R1, R2, ..., Rm) like CPU cycles, memory, I/O
devices.
Each resource type has a fixed number of instances (Wi).
Sequence to use a resource: request, use, release.
Resource Usage:
If a resource request is denied, the requesting process has options:
Block and wait for the resource.
Continue without it.
Fail with an error code.
Understanding Deadlocks:
Deadlocks occur when processes are mutually blocked due to exclusive access to
resources.
Each deadlocked process waits for a resource held by another.
Characteristics of Deadlocks:
Mutual exclusion: Only one process can use a resource at a time.
Hold and wait: Processes hold resources while waiting for others.
No preemption: Resources can't be forcefully taken.
Circular wait: Processes form a circular chain, each waiting for the next's resource.
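The circular-wait condition can be checked mechanically on a wait-for graph, where an edge A -> B means "process A is waiting for a resource held by B". A sketch using depth-first search (the graphs below are made up for illustration):

```python
# Detect circular wait: does the wait-for graph contain a cycle?
def has_cycle(graph):
    visited, on_path = set(), set()

    def dfs(node):
        if node in on_path:        # returned to a node on the current path
            return True
        if node in visited:
            return False
        visited.add(node)
        on_path.add(node)
        if any(dfs(nxt) for nxt in graph.get(node, [])):
            return True
        on_path.discard(node)
        return False

    return any(dfs(n) for n in graph)

deadlocked = {"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}  # P1 -> P2 -> P3 -> P1
fine = {"P1": ["P2"], "P2": ["P3"], "P3": []}            # chain, no cycle

print(has_cycle(deadlocked))  # True
print(has_cycle(fine))        # False
```

With single-instance resources, a cycle in this graph is exactly a deadlock; with multiple instances per resource type, a cycle is necessary but not sufficient.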
Lecture 6.
What is Swapping?
Swapping involves temporarily moving a process out of memory to a disk and then
bringing it back when needed.
It allows the total memory usage to exceed physical memory, increasing the degree of
multiprogramming.
Backing Store:
A fast disk serves as the backing store, large enough to hold copies of memory images
for all users.
It provides direct access to these memory images.
How Swapping Works:
The system maintains a ready queue of processes with memory images on disk.
Swap out, swap in: Lower-priority processes are swapped out to make room for higher-
priority ones.
Transfer time is a significant part of swapping, directly proportional to the memory size
being swapped.
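A back-of-the-envelope calculation shows why transfer time dominates. The numbers below are assumed for illustration, not from the lecture:

```python
# Swap transfer time is roughly memory_size / transfer_rate.
memory_mb = 100          # assumed size of the process image to swap
rate_mb_per_s = 50       # assumed disk transfer rate

transfer_s = memory_mb / rate_mb_per_s
print(transfer_s)        # 2.0 seconds one way
print(2 * transfer_s)    # 4.0 seconds: a full swap out + swap in
```

This is why reducing the amount of memory actually transferred, as discussed next, pays off directly.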
Optimizing Swapping:
Reduce memory size being swapped by knowing actual memory usage.
Constraints like pending I/O prevent swapping.
Modern OSes use modified swapping techniques, swapping only when free memory is
critically low.
They may also swap portions of processes instead of entire processes to decrease swap
time.