
UNIT 1 OPERATING SYSTEMS

INTRODUCTION TO COMPUTER SYSTEM

A computer is an electronic device that can be programmed to accept data (input), process it and generate results (output). A computer along with additional hardware and software together is called a computer system.

A computer system primarily comprises a central processing unit (CPU), memory, input/output devices and storage devices.

"A computer would deserve to be called intelligent if it could deceive a human into believing that it was human." – Alan Turing

Central Processing Unit (CPU): It is the electronic circuitry of a computer that carries out the actual processing and is usually referred to as the brain of the computer. It is also commonly called the processor. Physically, a CPU can be placed on one or more microchips called integrated circuits (ICs). The ICs comprise semiconductor materials.

The CPU is given instructions and data through programs. The CPU then
fetches the program and data from the memory and performs arithmetic
and logic operations as per the given instructions and stores the result back
to memory.

While processing, the CPU stores the data as well as instructions in its local
memory called registers. Registers are part of the CPU chip and they are
limited in size and number. Different registers are used for storing data,
instructions or intermediate results.

Other than the registers, the CPU has two main components: the Arithmetic Logic Unit (ALU) and the Control Unit (CU). The ALU performs all the arithmetic and logic operations that need to be done as per the instructions in a program. The CU controls sequential instruction execution, interprets instructions and guides data flow through the computer's memory, ALU and input or output devices. The CPU is also popularly known as the microprocessor.

Control Unit: It is the circuitry that uses electrical signals to direct the computer system to execute already stored instructions. It takes instructions from memory and then decodes and executes these instructions. In this way, it controls and coordinates the functioning of all parts of the computer. The Control Unit's main task is to maintain and regulate the flow of information across the processor. It does not itself take part in processing and storing data.

ALU: It is the arithmetic logic unit, which performs arithmetic and logical functions. Arithmetic functions include addition, subtraction, multiplication, division, and comparison. Logical functions mainly include selecting, comparing, and merging data.

What is CPU Clock Speed?

The clock speed of a CPU or a processor refers to the number of instructions it can process in a second. It is measured in gigahertz (GHz). For example, a CPU with a clock speed of 4.0 GHz can process 4 billion instructions in a second.
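As a rough worked illustration (assuming, as above, one instruction per clock cycle, which is a simplification), a 4.0 GHz clock completes 4 × 10^9 cycles per second, so each cycle takes 1 / (4 × 10^9) s = 0.25 nanoseconds.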

Software

Software is a set of programs that enables the hardware to perform a specific task. All the programs that run on the computer are software. Software can be of three types: system software, application software, and programming software.

1) System Software

The system software is the main software that runs the computer. When you turn on the computer, it activates the hardware components and controls and coordinates their functioning. The application programs are also controlled by system software. An operating system is an example of system software.

i) Operating System:

An operating system is the system software that works as an interface to enable the user to communicate with the computer. It manages and coordinates the functioning of the hardware and software of the computer.

BIOS: It stands for Basic Input Output System. It is a type of system software, which is stored in Read Only Memory (ROM) located on the motherboard. However, in advanced computer systems, it is stored in flash memory. BIOS is the first software that gets activated when you turn on your computer system. It loads the drivers of the hard disk into memory as well as assists the operating system to load itself into the memory.

o Boot Program: Boot refers to starting up a computer. When you switch on the computer, the commands in the ROM are executed automatically to load the boot program into memory and execute its instructions. The BIOS program has a basic set of commands that enables the computer to perform the basic input/output instructions to start the computer.
o An assembler: It plays the role of a converter as it receives basic
computer instructions and converts them into a pattern of bits. The
processor uses these bits to perform basic operations.
o A device driver: This system software controls hardware devices connected to a computer. It enables the computer to use the hardware by providing an appropriate interface. The operating system's kernel communicates with different hardware through this software. Operating systems generally come with most of the device drivers. If the operating system does not have a device driver for a hardware device, you have to install the device driver before using that hardware device.

2) Application Software:

Application software is a set of programs designed to perform a specific task. It does not control the working of a computer as it is designed for end-users. A computer can run without application software.

Computer Memory

The computer memory holds the data and instructions needed to process raw data and produce output. The computer memory is divided into a large number of small parts known as cells. Each cell has a unique address, which varies from 0 to the memory size minus one.

Computer memory is of two types: volatile (RAM) and non-volatile (ROM). The secondary memory (hard disk) is referred to as storage, not memory.

However, if we categorize memory on the basis of space or location, it is of four types:

o Register memory
o Cache memory
o Primary memory
o Secondary memory

Register Memory

Register memory is the smallest and fastest memory in a computer. It is not a part of the main memory and is located in the CPU in the form of registers, which are the smallest data-holding elements. A register temporarily holds frequently used data, instructions, and memory addresses that are to be used by the CPU. Registers hold the instructions that are currently being processed by the CPU. All data is required to pass through the registers before it can be processed, so they are used by the CPU to process the data entered by the users.

Registers hold a small amount of data, around 32 to 64 bits each. The speed of a CPU depends on the number and size (number of bits) of the registers that are built into the CPU. Registers can be of different types based on their uses. Some of the widely used registers include the Accumulator (AC), Data Register (DR), Address Register (AR), Program Counter (PC), I/O Address Register, and more.

Cache Memory

Cache memory is a high-speed memory, which is small in size but faster than the main memory (RAM). The CPU can access it more quickly than the primary memory, so it is used to keep pace with the high-speed CPU and to improve its performance.

Cache memory can only be accessed by the CPU. It can be a reserved part of the main memory or a storage device outside the CPU. It holds the data and programs which are frequently used by the CPU, so it makes sure that the data is instantly available to the CPU whenever the CPU needs it. In other words, if the CPU finds the required data or instructions in the cache memory, it doesn't need to access the primary memory (RAM). Thus, by acting as a buffer between RAM and the CPU, it speeds up the system performance.

Types of Cache Memory:

L1: It is the first level of cache memory, called Level 1 cache or L1 cache. In this type of cache memory, a small amount of memory is present inside the CPU itself. If a CPU has four cores (a quad-core CPU), then each core will have its own Level 1 cache. As this memory is present inside the CPU, it can work at the same speed as the CPU. The size of this memory ranges from 2 KB to 64 KB. The L1 cache further has two parts: the instruction cache, which stores instructions required by the CPU, and the data cache, which stores the data required by the CPU.
L2: This cache is known as Level 2 cache or L2 cache. This Level 2 cache may be inside the CPU or outside the CPU. All the cores of a CPU can have their own separate Level 2 cache, or they can share one L2 cache among themselves. In case it is outside the CPU, it is connected to the CPU by a very high-speed bus. The memory size of this cache is in the range of 256 KB to 512 KB. In terms of speed, it is slower than the L1 cache.

L3: It is known as Level 3 cache or L3 cache. This cache is not present in all
the processors; some high-end processors may have this type of cache. This
cache is used to enhance the performance of Level 1 and Level 2 cache. It is
located outside the CPU and is shared by all the cores of a CPU. Its memory
size ranges from 1 MB to 8 MB. Although it is slower than L1 and L2 cache,
it is faster than Random Access Memory (RAM).

How does cache memory work with CPU?

When the CPU needs data, first of all, it looks inside the L1 cache. If it does not find anything in L1, it looks inside the L2 cache. If, again, it does not find the data in the L2 cache, it looks into the L3 cache. If the data is found in the cache memory, it is known as a cache hit. On the contrary, if the data is not found inside the cache, it is called a cache miss.

If the data is not available in any of the cache memories, the CPU looks inside the Random Access Memory (RAM). If the RAM also does not have the data, then it will get that data from the hard disk drive.

So, when a computer is started for the first time, or an application is opened for the first time, the data is not available in the cache memory or in RAM. In this case, the CPU gets the data directly from the hard disk drive. Thereafter, when you start your computer or open that application again, the CPU can get the data from the cache memory or RAM.
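The lookup order described above can be sketched in C. This is only a conceptual illustration, not how hardware works: the resident[] table, the addresses and the locate() function are invented for the example, and real caches match tags on cache lines instead of searching arrays.

#include <stdio.h>

/* Conceptual sketch of the search order: L1 -> L2 -> L3 -> RAM -> hard disk. */
enum { L1, L2, L3, RAM, DISK, NUM_LEVELS };
static const char *level_name[] = { "L1 cache", "L2 cache", "L3 cache", "RAM", "hard disk" };

/* A few addresses pretended to be resident at each level (illustrative only). */
static const unsigned long resident[NUM_LEVELS - 1][2] = {
    { 0x10, 0x14 },   /* L1  */
    { 0x20, 0x24 },   /* L2  */
    { 0x30, 0x34 },   /* L3  */
    { 0x40, 0x44 },   /* RAM */
};

static int locate(unsigned long address) {
    for (int lvl = L1; lvl < DISK; lvl++) {
        for (int i = 0; i < 2; i++) {
            if (resident[lvl][i] == address) {
                printf("0x%lx: hit in %s\n", address, level_name[lvl]);
                return lvl;          /* cache hit (or found in RAM) */
            }
        }
        printf("0x%lx: miss in %s\n", address, level_name[lvl]);
    }
    printf("0x%lx: fetched from %s\n", address, level_name[DISK]);
    return DISK;                     /* not cached anywhere: go to the disk */
}

int main(void) {
    locate(0x24);   /* found in L2 after missing in L1 */
    locate(0x99);   /* misses everywhere, comes from the hard disk */
    return 0;
}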

Memory Units

Memory units are used to measure and represent data. Some of the commonly used memory units are:

o Bit – the smallest unit of data, a single binary digit (0 or 1).
o Nibble – a group of 4 bits.
o Byte – a group of 8 bits; the basic unit used to express memory size.
o Kilobyte (KB) – 1024 bytes.
o Megabyte (MB) – 1024 KB.
o Gigabyte (GB) – 1024 MB.
o Terabyte (TB) – 1024 GB.

DATA TRANSFER BETWEEN MEMORY AND CPU


Data need to be transferred between the CPU and primary memory as well as between the primary and secondary memory.

Data are transferred between different components of a computer system using physical wires called a bus. For example, a bus is used for data transfer between a USB port and the hard disk, or between the hard disk and main memory. The bus is of three types: (i) the data bus, to transfer data between different components; (ii) the address bus, to transfer addresses between the CPU and main memory (the address of the memory location that the CPU wants to read from or write to is specified on the address bus); and (iii) the control bus, to communicate control signals between different components of a computer. All these three buses collectively make up the system bus, as shown in the figure.

As the CPU interacts directly with main memory, any data entered from an input device, or data to be accessed from the hard disk, needs to be placed in the main memory for further processing. The data is then transferred between the CPU and main memory using the bus.

The CPU places on the address bus the address of the main memory location from which it wants to read data or to which it wants to write data. While executing the instructions, the CPU specifies the read or write control signal through the control bus.

As the CPU may need to read data from main memory or write data to main memory, the data bus is bidirectional, but the control bus and address bus are unidirectional. To write data into memory, the CPU places the data on the data bus, which is then written to the specific address provided through the address bus. In the case of a read operation, the CPU specifies the address, and the data is placed on the data bus by a dedicated piece of hardware called the memory controller. The memory controller manages the flow of data into and out of the computer's main memory.
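A toy C model of the read and write operations just described may help. Everything here (the SystemBus structure, the memory_controller() function, the memory size) is an invented stand-in for what is really a set of hardware signal lines, not a software API.

#include <stdint.h>
#include <stdio.h>

#define MEM_SIZE 256

/* Simulated main memory, managed by the "memory controller". */
static uint8_t main_memory[MEM_SIZE];

typedef enum { CTRL_READ, CTRL_WRITE } ControlSignal;

/* The three buses from the text: address and control are driven only by the
 * CPU (unidirectional), while the data bus carries values in both directions. */
struct SystemBus {
    uint32_t      address_bus;
    uint8_t       data_bus;
    ControlSignal control_bus;
};

/* The memory controller watches the control bus and moves data between the
 * data bus and the addressed memory location. */
static void memory_controller(struct SystemBus *bus) {
    if (bus->control_bus == CTRL_WRITE)
        main_memory[bus->address_bus] = bus->data_bus;
    else
        bus->data_bus = main_memory[bus->address_bus];
}

int main(void) {
    struct SystemBus bus;

    /* CPU write: place address and data, assert WRITE on the control bus. */
    bus.address_bus = 0x10;
    bus.data_bus    = 42;
    bus.control_bus = CTRL_WRITE;
    memory_controller(&bus);

    /* CPU read: place address, assert READ; the controller fills the data bus. */
    bus.address_bus = 0x10;
    bus.control_bus = CTRL_READ;
    memory_controller(&bus);
    printf("read back %d\n", bus.data_bus);
    return 0;
}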

Basic Linux commands

1) uname command

The command 'uname' displays information about the system.

Syntax: uname [OPTION]

-a option: It prints all the system information in the following order: kernel name, network node hostname, kernel release, kernel version, machine hardware name, hardware platform, operating system.

Syntax:
$uname -a

-s option: It prints the kernel name.

Syntax: $uname -s

-n option: It prints the hostname of the network node (the current computer).

Syntax: $uname -n

2) lscpu command

lscpu is a command-line utility to display information about the CPU architecture. It reads the CPU architecture information from sysfs and the /proc/cpuinfo file and prints it in the terminal. The information includes the number of CPUs, threads, cores, sockets, and Non-Uniform Memory Access (NUMA) nodes. It also displays CPU caches and cache sharing, family, model, BogoMIPS, byte order, and stepping.

3) lshw command
lshw (list hardware) is a small Linux/Unix tool which is used to generate detailed information about the system's hardware configuration from various files in the /proc directory. lshw can also report the exact memory configuration, firmware version, mainboard configuration, CPU version and speed, cache memory configuration, bus speed, etc., on DMI-capable x86 or IA-64 (Itanium family of 64-bit microprocessors) systems and on some PowerPC machines.

Syntax: lshw [-format] [-options ...]

Where format can be:


 -html: Output hardware tree as HTML.
 -xml: Output hardware tree as XML.
 -short: Output hardware paths.
 -businfo: Output bus information.

4) lspci: lspci stands for "list PCI". Think of this command as "ls" + "pci". It displays information about all the PCI buses in your server. Apart from displaying information about the buses, it also displays information about all the hardware devices that are connected to your PCI and PCIe buses.

What is PCI?
PCI, or Peripheral Component Interconnect, is an interface used to add additional hardware components to a computer system.
5) hwinfo command in Linux

hwinfo (Hardware Information) is a very useful command to check details about hardware components. It shows information about most hardware units including the CPU, RAM, graphics cards, USB controllers, …
hwinfo uses the libhd library to gather hardware information on the system.

$ hwinfo [options]

[options]

--help explains how to use the command:

$ hwinfo --help


6) lsusb command in Linux with Examples


The lsusb command in Linux is used to display the information about USB
buses and the devices connected to them. The properties displayed are
speed, BUS, class, type details, etc.
Syntax:
lsusb [ options ]

7) df command in Linux with Examples

The df command (short for disk free) is used to display information about the total and available space on file systems.
Syntax:
df [OPTION]... [FILE]...
If no file name is given, it displays the space available on all currently
mounted file systems.
For example :
df
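Another commonly used form is df -h, where the -h option prints the sizes in human-readable units (K, M, G):
df -h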

8) cat command in Linux

The cat (concatenate) command is very frequently used in Linux. It reads data from files and gives their content as output. It helps us to create, view and concatenate files. Some frequently used cat commands are shown below.
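For example (the file names below are only placeholders):
cat file1.txt – display the contents of a file on the screen
cat file1.txt file2.txt – display two files one after the other
cat file1.txt file2.txt > combined.txt – concatenate the two files into a new file
cat > notes.txt – create a new file; type the text and press Ctrl+D to save it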

Memory Hierarchy Design and its Characteristics

In computer system design, the memory hierarchy is an enhancement that organizes the memory in such a way that access time is minimized. The memory hierarchy was developed based on a program behaviour known as locality of reference.
This memory hierarchy design is divided into 2 main types:
1. External Memory or Secondary Memory –
Comprises magnetic disk, optical disk and magnetic tape, i.e. peripheral storage devices which are accessible by the processor via an I/O module.
2. Internal Memory or Primary Memory –
Comprises main memory, cache memory and CPU registers. This is directly accessible by the processor.

What is Interrupt in OS?

An interrupt is a signal emitted by hardware or software when a process or an event needs immediate attention. It alerts the processor to a high-priority process requiring interruption of the currently running process. In I/O devices, one of the bus control lines is dedicated to this purpose; the routine executed in response to an interrupt is called the Interrupt Service Routine (ISR).

WORKING:
When a device raises an interrupt, the processor first completes the execution of the current instruction. Then it loads the Program Counter (PC) with the address of the first instruction of the ISR. Before loading the program counter with this address, the address of the interrupted instruction is moved to a temporary location. Therefore, after handling the interrupt, the processor can continue with the interrupted process.
While the processor is handling the interrupt, it must inform the device that its request has been recognized, so that the device stops sending the interrupt request signal. Also, saving the registers so that the interrupted process can be restored in the future increases the delay between the time an interrupt is received and the start of the execution of the ISR. This delay is called interrupt latency.
A processor can perform only one instruction at a time. But, because it can be interrupted, it can manage how programs or sets of instructions will be performed. This is known as multitasking. It allows the user to do many different things at once while the computer manages the programs that the user starts. Of course, the computer operates at speeds that make it seem as if all user tasks are being performed simultaneously.

An operating system usually has some code that is called an interrupt handler. The interrupt handler prioritizes the interrupts and saves them in a queue if more than one is waiting to be handled. The operating system has another small program, called a scheduler, that figures out which program to run next.

Types of Interrupt

Interrupt signals may be issued in response to hardware or software events. These are classified as hardware interrupts or software interrupts, respectively.
1. Hardware Interrupts

A hardware interrupt is a condition related to the state of the hardware that may be signaled by an external hardware device, e.g., an interrupt request (IRQ) line on a PC, or detected by devices embedded in processor logic, to communicate that the device needs attention from the operating system. For example, pressing a keyboard key or moving a mouse triggers hardware interrupts that cause the processor to read the keystroke or mouse position.

Hardware interrupts can arrive asynchronously with respect to the processor clock, and at any time during instruction execution. Consequently, all hardware interrupt signals are conditioned by synchronizing them to the processor clock and are acted on only at instruction execution boundaries.

In many systems, each device is associated with a particular IRQ signal. This makes it possible to quickly determine which hardware device is requesting service and to expedite servicing of that device.

On some older systems, all interrupts went to the same location, and the OS used a specialized instruction to determine the highest-priority unmasked interrupt outstanding. On contemporary systems, there is generally a distinct interrupt routine for each type of interrupt or each interrupt source, often implemented as one or more interrupt vector tables. Hardware interrupts are further classified into two types, as follows:

o Maskable Interrupts: Processors typically have an internal interrupt mask register which allows selective enabling and disabling of hardware interrupts. Each interrupt signal is associated with a bit in the mask register; on some systems, the interrupt is enabled when the bit is set and disabled when the bit is clear, while on others, a set bit disables the interrupt. When an interrupt is disabled, the associated interrupt signal will be ignored by the processor. Signals which are affected by the mask are called maskable interrupts. (A small mask-register sketch is given after this list.)
Some interrupt signals are not affected by the interrupt mask and therefore cannot be disabled; these are called non-maskable interrupts (NMI). NMIs indicate high-priority events that need to be processed immediately and which cannot be ignored under any circumstances, such as the timeout signal from a watchdog timer.
To mask an interrupt is to disable it, while to unmask an interrupt is to enable it.
o Spurious Interrupts:
A spurious interrupt is a hardware interrupt for which no source can
be found. The term phantom interrupt or ghost interrupt may also be used
to describe this phenomenon. Spurious interrupts tend to be a
problem with a wired-OR interrupt circuit attached to a level-sensitive
processor input. Such interrupts may be difficult to identify when a
system misbehaves.
In a wired-OR circuit, parasitic capacitance charging/discharging
through the interrupt line's bias resistor will cause a small delay
before the processor recognizes that the interrupt source has been
cleared. If the interrupting device is cleared too late in the interrupt
service routine (ISR), there won't be enough time for the interrupt
circuit to return to the quiescent state before the current instance of
the ISR terminates. The result is that the processor will think another
interrupt is pending, since the voltage at its interrupt request input will
not be high or low enough to establish an unambiguous internal
logic 1 or logic 0. The apparent interrupt will have no identifiable
source, and hence this is called spurious.
A spurious interrupt may result in system deadlock or other
undefined operation if the ISR doesn't account for the possibility of
such an interrupt occurring. As spurious interrupts are mostly a
problem with wired-OR interrupt circuits, good programming practice
in such systems is for the ISR to check all interrupt sources for
activity and take no action if none of the sources is interrupting.
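A minimal C sketch of the mask-register idea from the maskable-interrupts bullet above. The register layout, IRQ numbers and names (irq_mask, TIMER_IRQ, deliver()) are invented for illustration; real processors define their own mask registers. The convention assumed here is "bit set = interrupt enabled".

#include <stdint.h>
#include <stdio.h>

/* Hypothetical IRQ numbers (one bit per source in the mask register). */
enum { TIMER_IRQ = 0, KEYBOARD_IRQ = 1, DISK_IRQ = 2 };

/* Simulated interrupt mask register: a set bit means the IRQ is enabled. */
static uint8_t irq_mask = 0;

static void unmask_irq(int irq) { irq_mask |=  (uint8_t)(1u << irq); }  /* enable  */
static void mask_irq(int irq)   { irq_mask &= (uint8_t)~(1u << irq); }  /* disable */

/* The processor ignores a pending request whose mask bit is clear. */
static void deliver(int irq) {
    if (irq_mask & (1u << irq))
        printf("IRQ %d delivered to its handler\n", irq);
    else
        printf("IRQ %d ignored (masked)\n", irq);
}

int main(void) {
    unmask_irq(TIMER_IRQ);
    deliver(TIMER_IRQ);      /* delivered */
    deliver(KEYBOARD_IRQ);   /* ignored: still masked */
    mask_irq(TIMER_IRQ);
    deliver(TIMER_IRQ);      /* ignored again */
    return 0;
}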

2. Software Interrupts

The processor requests a software interrupt upon executing particular instructions or when certain conditions are met. Every software interrupt signal is associated with a particular interrupt handler.

A software interrupt may be intentionally caused by executing a special instruction that, by design, invokes an interrupt when executed. Such instructions function similarly to subroutine calls and are used for various purposes, such as requesting operating system services and interacting with device drivers.

Software interrupts may also be unexpectedly triggered by program execution errors. These interrupts are typically called traps or exceptions.
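As a concrete illustration of intentionally requesting an operating system service, a user program on Linux can invoke a system call through the C library's syscall() wrapper; under the hood the transition to kernel mode happens via the architecture's trap or software-interrupt instruction (historically int 0x80 on 32-bit x86, syscall on x86-64). This is only one example of the mechanism described above.

#define _GNU_SOURCE
#include <unistd.h>
#include <sys/syscall.h>

int main(void) {
    /* Ask the kernel to perform the write system call: the switch from
     * user mode to kernel mode happens via a software interrupt / trap. */
    const char msg[] = "hello from a system call\n";
    syscall(SYS_write, STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}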

Handling Multiple Devices

When more than one device raises an interrupt request signal, additional
information is needed to decide which device to consider first. The following
methods are used to decide which device to select first:

1. Polling
In polling, the first device encountered with its IRQ bit set is serviced first, and the appropriate ISR is called to service it. Polling is easy to implement, but a lot of time is wasted interrogating the IRQ bit of all devices.
2. Vectored Interrupts
In vectored interrupts, a device requesting an interrupt identifies itself
directly by sending a special code to the processor over the bus. This
enables the processor to identify the device that generated the
interrupt. The special code can be the starting address of the ISR or
where the ISR is located in memory and is called the interrupt
vector.
3. Interrupt Nesting
In this method, the I/O devices are organized in a priority structure. Therefore, an interrupt request from a higher-priority device is recognized, whereas one from a lower-priority device is not. The processor accepts interrupts only from devices/processes having a priority higher than its own.
The processor's priority is encoded in a few bits of the processor status (PS) register, and it can be changed by program instructions that write into the PS. The processor is in supervised mode only while executing OS routines, and it switches to user mode before executing application programs.

Interrupt Handling

We know that the instruction cycle consists of fetch, decode, execute and read/write functions. After every instruction cycle, the processor checks for interrupts to be processed. If there is no pending interrupt, it proceeds with the next instruction cycle, whose instruction is pointed to by the program counter. If an interrupt is present, the interrupt handler is triggered. The handler stops the instruction currently being processed, saves its configuration in a register, and loads the program counter with the address of the interrupt routine from a location given by the interrupt vector table.

After the processor has serviced the interrupt, the interrupt handler loads the instruction and its configuration from the saved register, and the process resumes where it left off. Saving the old instruction-processing configuration and loading the new interrupt configuration is also called context switching. There are different types of interrupt handlers.

1. The First Level Interrupt Handler (FLIH) is a hard or fast interrupt handler. These interrupt handlers cause more jitter in process execution, and they mainly service maskable interrupts.
2. The Second Level Interrupt Handler (SLIH) is a soft or slow interrupt handler. These interrupt handlers cause less jitter.

The interrupt handler is also called an interrupt service routine (ISR). The main features of the ISR are:

o Interrupts can occur at any time; they are asynchronous, and ISRs can be called for asynchronous interrupts.
o An interrupt service mechanism can call ISRs from multiple sources.
o ISRs can handle both maskable and non-maskable interrupts. An instruction in a program can disable or enable an interrupt handler call.
o An ISR, at the beginning of its execution, disables the interrupt services of other devices. After completion of the ISR execution, it re-initializes the interrupt services.
o Nested interrupts are allowed in an ISR for diversion to other ISRs.

Interrupt Latency

When an interrupt occurs, the servicing of the interrupt by executing the ISR may not start immediately, because of context switching. The time interval between the occurrence of the interrupt and the start of execution of the ISR is called interrupt latency.

o Tswitch = time taken for a context switch
o ΣTexec = the sum of the time intervals for executing the ISR
o Interrupt Latency = Tswitch + ΣTexec
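As an illustrative calculation with made-up numbers: if Tswitch = 2 microseconds and ΣTexec = 8 microseconds, then, by the formula above, the interrupt latency = 2 + 8 = 10 microseconds.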

How the CPU Responds to Interrupts

A key point towards understanding how operating systems work is understanding what the CPU does when an interrupt occurs. The CPU hardware does the same thing for each interrupt, allowing the operating system to take control away from the currently running user process. The switching of running processes to execute code from the OS kernel is called a context switch.

CPUs rely on the data contained in a couple of registers to handle interrupts correctly. One register holds a pointer to the process control block of the currently running process; this register is set each time a process is loaded into memory. The other register holds a pointer to a table containing pointers to the instructions in the OS kernel for interrupt handlers and system calls. The value in this register and the contents of the table are set when the operating system is initialized at boot time. The CPU performs the following actions in response to an interrupt:

1. Using the pointer to the current process control block, the state and all register values are saved for use when the process is later restarted.
2. The CPU mode bit is switched to supervisory mode.
3. Using the pointer to the interrupt handler table and the interrupt vector, the location of the kernel code to execute is determined. The interrupt vector is the IRQ for hardware interrupts and an argument to the interrupt assembly-language instruction for software interrupts.
4. Processing is switched to the appropriate portion of the kernel.
The CPU uses the table and the interrupt vector to find the OS code to execute in response to the interrupt. A software interrupt is shown in the image below.

As the computer runs, processing switches between user processes and the
operating system as hardware and software interrupts are received.
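The four steps above can be sketched in C. Every name here (interrupt_vector_table, struct pcb, handle_interrupt) is an illustrative stand-in, not a real kernel API; actual CPUs perform these steps in hardware and low-level assembly.

#include <stdio.h>

#define NUM_VECTORS 4

/* Minimal stand-in for a process control block: just the saved state. */
struct pcb {
    unsigned long program_counter;
    unsigned long registers[8];
};

static struct pcb *current_process;                       /* first register in the text  */
typedef void (*handler_fn)(void);
static handler_fn interrupt_vector_table[NUM_VECTORS];    /* second register points here */

static int supervisor_mode = 0;

static void timer_handler(void)    { printf("kernel: timer tick handled\n"); }
static void keyboard_handler(void) { printf("kernel: keystroke read\n"); }

/* Steps 1-4 from the text: save state, switch mode, look up the handler
 * through the interrupt vector, and jump into the kernel. */
static void handle_interrupt(int vector) {
    /* 1. Save the state of the running process into its PCB (sketched). */
    (void)current_process;
    /* 2. Switch the CPU mode bit to supervisory mode. */
    supervisor_mode = 1;
    printf("mode bit = %d (supervisor)\n", supervisor_mode);
    /* 3 & 4. Use the vector to find and run the kernel code. */
    interrupt_vector_table[vector]();
    /* Return to user mode and, eventually, to a user process. */
    supervisor_mode = 0;
}

int main(void) {
    static struct pcb user_proc = { 0 };
    current_process = &user_proc;
    interrupt_vector_table[0] = timer_handler;
    interrupt_vector_table[1] = keyboard_handler;

    handle_interrupt(1);   /* e.g. IRQ 1: a key was pressed */
    handle_interrupt(0);   /* e.g. IRQ 0: timer interrupt   */
    return 0;
}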
Triggering Methods

Each interrupt signal input is designed to be triggered by either a logic signal level or a particular signal edge (level transition). Level-sensitive inputs continuously request processor service so long as a particular (high or low) logic level is applied to the input. Edge-sensitive inputs react to signal edges: a particular (rising or falling) edge will cause a service request to be latched. The processor resets the latch when the interrupt handler executes.

1. Level-triggered

A level-triggered interrupt is requested by holding the interrupt signal at its particular (high or low) active logic level. A device invokes a level-triggered interrupt by driving the signal to and holding it at the active level. It negates the signal when the processor commands it, typically after the device has been serviced.

The processor samples the interrupt input signal during each instruction
cycle. The processor will recognize the interrupt request if the signal is
asserted when sampling occurs.

Level-triggered inputs allow multiple devices to share a common interrupt signal via wired-OR connections. The processor polls to determine which devices are requesting service. After servicing a device, the processor may again poll and, if necessary, service other devices before exiting the ISR.

2. Edge-triggered

An edge-triggered interrupt is an interrupt signaled by a level transition on the interrupt line, either a falling edge (high to low) or a rising edge (low to high). A device wishing to signal an interrupt drives a pulse onto the line and then releases the line to its inactive state. If the pulse is too short to be detected by polled I/O, then special hardware may be required to detect it.

Cache Memory
Cache Memory is a special, very high-speed memory. It is used to speed up and synchronize with the high-speed CPU. Cache memory is costlier than main memory or disk memory but more economical than CPU registers. Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.
Cache memory is used to reduce the average time to access data from the main memory. The cache is a smaller and faster memory which stores copies of the data from frequently used main memory locations. There are various independent caches in a CPU, which store instructions and data.
Levels of memory:
 Level 1 or Registers –
Registers hold the data that is immediately required and operated upon by the CPU. The most commonly used registers are the accumulator, program counter, address register, etc.
 Level 2 or Cache memory –
It is a very fast memory with a short access time, where data is temporarily stored for faster access.
 Level 3 or Main Memory –
It is the memory on which the computer currently works. It is smaller than secondary memory, and once the power is off the data no longer stays in this memory.
 Level 4 or Secondary Memory –
It is external memory which is not as fast as main memory, but data stays permanently in this memory.

Types of Cache –
 Primary Cache –
A primary cache is always located on the processor chip. This cache is
small and its access time is comparable to that of processor registers.
 Secondary Cache –
Secondary cache is placed between the primary cache and the rest of the
memory. It is referred to as the level 2 (L2) cache. Often, the Level 2
cache is also housed on the processor chip.

Locality of reference –
Since the size of cache memory is small compared to main memory, the decision about which part of main memory should be given priority and loaded into the cache is made on the basis of locality of reference.
Types of locality of reference:
1. Spatial locality of reference –
There is a high chance that an element close to the referenced location (for example, the next word in the same block) will be needed soon, so a complete block of neighbouring words is loaded into the cache together.
2. Temporal locality of reference –
A recently used word is likely to be used again in the near future, so recently referenced items are kept in the cache; when the cache is full, a replacement policy such as least recently used (LRU) decides which entry to evict.
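A tiny C illustration of both kinds of locality (the array name and size are arbitrary): the sequential walk over a[] exhibits spatial locality, while the repeated use of sum and i exhibits temporal locality.

#include <stdio.h>

#define N 1024

int main(void) {
    static int a[N];
    long sum = 0;

    for (int i = 0; i < N; i++)
        a[i] = i;

    /* Spatial locality: a[i] and a[i+1] are adjacent in memory, so after the
     * first miss the rest of the cache block is already in the cache.
     * Temporal locality: 'sum' and 'i' are touched on every iteration, so
     * they stay cached (in practice, in registers) for the whole loop. */
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("sum = %ld\n", sum);
    return 0;
}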

Direct Memory Access (DMA)

Direct memory access (DMA) is a method that allows an input/output (I/O) device to send or receive data directly to or from the main memory, bypassing the CPU, to speed up memory operations.

The process is managed by a chip known as a DMA controller (DMAC).

Direct Memory Access (DMA):

The DMA controller is a hardware device that allows I/O devices to access memory directly, with less participation of the processor. The DMA controller needs the usual interface circuits to communicate with the CPU and the I/O devices.
Fig. 1 below shows the block diagram of the DMA controller. The unit communicates with the CPU through the data bus and control lines. The CPU selects a register within the DMA controller through the address bus by enabling the DS (DMA select) and RS (register select) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers. When the BG (bus grant) input is 1, the CPU has relinquished the buses and the DMA controller can communicate directly with the memory.
DMA controller registers :
The DMA controller has three registers as follows.
 Address register – It contains the address to specify the desired location
in memory.
 Word count register – It contains the number of words to be
transferred.
 Control register – It specifies the transfer mode.
Note –
All registers in the DMA appear to the CPU as I/O interface registers.
Therefore, the CPU can both read and write into the DMA registers under
program control via the data bus.
Fig. 1 – Block diagram of the DMA controller

Explanation:
The CPU initializes the DMA controller by sending the following information through the data bus.
 The starting address of the memory block where the data is available (for a read) or where data are to be stored (for a write).
 The word count, which is the number of words in the memory block to be read or written.
 Control bits to define the mode of transfer, such as read or write.
 A control bit to begin the DMA transfer.
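A schematic C sketch of the three DMA-controller registers and the initialization steps listed above. The structure, field names and dma_start() are purely illustrative; a real DMAC is programmed through memory-mapped I/O registers defined by its datasheet.

#include <stdint.h>
#include <stdio.h>

/* Transfer-mode bits written into the control register (illustrative values). */
enum { DMA_READ = 0x1, DMA_WRITE = 0x2, DMA_START = 0x80 };

/* The three registers described in the text. */
struct dma_controller {
    uint32_t address_reg;     /* starting address of the memory block */
    uint32_t word_count_reg;  /* number of words to transfer          */
    uint32_t control_reg;     /* transfer mode and the "go" bit       */
};

/* CPU-side initialization: the pieces of information from the list above. */
static void dma_start(struct dma_controller *dmac,
                      uint32_t start_addr, uint32_t words, uint32_t mode) {
    dmac->address_reg    = start_addr;    /* where the data lives      */
    dmac->word_count_reg = words;         /* how many words to move    */
    dmac->control_reg    = mode;          /* read or write             */
    dmac->control_reg   |= DMA_START;     /* begin the DMA transfer    */
    printf("DMA: %u words at 0x%x, control 0x%x\n",
           (unsigned)dmac->word_count_reg,
           (unsigned)dmac->address_reg,
           (unsigned)dmac->control_reg);
}

int main(void) {
    struct dma_controller dmac = { 0 };
    /* Example: transfer 512 words starting at address 0x2000. */
    dma_start(&dmac, 0x2000, 512, DMA_READ);
    return 0;
}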

Operating System Definition and Function

In a computer system (comprising hardware and software), the hardware can only understand machine code (in the form of 0s and 1s), which doesn't make any sense to a naive user.

We need a system which can act as an intermediary and manage all the processes and resources present in the system.
An Operating System can be defined as an interface between the user and the hardware. It is responsible for the execution of all the processes, resource allocation, CPU management, file management and many other tasks.

The purpose of an operating system is to provide an environment in which a user can execute programs in a convenient and efficient manner.

Structure of a Computer System

A Computer System consists of:

o Users (people who are using the computer)
o Application Programs (Compilers, Databases, Games, Video players, Browsers, etc.)
o System Programs (Shells, Editors, Compilers, etc.)
o Operating System (a special program which acts as an interface between user and hardware)
o Hardware (CPU, Disks, Memory, etc.)
Functions of Operating System

Following are some of the important functions of an operating system.


 Memory Management
 Processor Management
 Device Management
 File Management
 Security
 Control over system performance
 Job accounting
 Error detecting aids
 Coordination between other software and users
Memory Management
Memory management refers to the management of primary memory or main memory. Main memory is a large array of words or bytes where each word or byte has its own address.
Main memory provides fast storage that can be accessed directly by the CPU. For a program to be executed, it must be in the main memory. An Operating System does the following activities for memory management −
 Keeps track of primary memory, i.e., which parts of it are in use, by whom, and which parts are not in use.
 In multiprogramming, the OS decides which process will get memory, when, and how much.
 Allocates the memory when a process requests it.
 De-allocates the memory when a process no longer needs it or has been terminated.
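As a small illustration of a process requesting memory and later releasing it: malloc() and free() are C library calls which, behind the scenes, obtain memory from and return it to the operating system (for example via brk/mmap on Linux). This is only a user-level sketch of the allocate/de-allocate activities listed above.

#include <stdlib.h>
#include <stdio.h>

int main(void) {
    /* The process requests memory; the allocator (backed by the OS) allocates it. */
    int *buffer = malloc(1000 * sizeof *buffer);
    if (buffer == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    buffer[0] = 42;
    printf("first element: %d\n", buffer[0]);

    /* The process no longer needs the memory; it is de-allocated. */
    free(buffer);
    return 0;
}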
Processor Management
In a multiprogramming environment, the OS decides which process gets the processor, when, and for how much time. This function is called process scheduling. An Operating System does the following activities for processor management −
 Keeps track of the processor and the status of processes. The program responsible for this task is known as the traffic controller.
 Allocates the processor (CPU) to a process.
 De-allocates the processor when a process is no longer required.
Device Management
An Operating System manages device communication via their respective
drivers. It does the following activities for device management −
 Keeps track of all devices. The program responsible for this task is known as the I/O controller.
 Decides which process gets the device, when, and for how much time.
 Allocates the devices in an efficient way.
 De-allocates devices.
File Management
A file system is normally organized into directories for easy navigation and usage. These directories may contain files and other directories.
An Operating System does the following activities for file management −
 Keeps track of information, location, uses, status, etc. These collective facilities are often known as the file system.
 Decides who gets the resources.
 Allocates the resources.
 De-allocates the resources.
Other Important Activities
Following are some of the important activities that an Operating System
performs −
 Security − By means of passwords and other similar techniques, it prevents unauthorized access to programs and data.
 Control over system performance − Recording delays between request
for a service and response from the system.
 Job accounting − Keeping track of time and resources used by various
jobs and users.
 Error detecting aids − Production of dumps, traces, error messages, and
other debugging and error detecting aids.
 Coordination between other software and users − Coordination and assignment of compilers, interpreters, assemblers and other software to the various users of the computer systems.
