Paging and Fetch Decode

Operating systems have come a long way since the days of MS-DOS.

With the advent of

powerful CPUs and more available memory, systems have become smarter and more
capable. However, all that power can come at a cost: slower system responses. To avoid
this, modern operating systems employ a memory management technique called paging,
which lets the operating system "page" data between main memory and physical blocks
on the hard drive. In this blog, we will talk about paging in the operating system and see
how it works. Before reading on, I recommend you understand the basics of memory
management. This topic is very important from an interview and exam point of view; I
was myself asked questions about this concept in an interview. As an analogy, consider
storing a cake in a refrigerator:
 When you have enough space: You have enough contiguous space in the
refrigerator and can store the cake by simply placing it inside.
 When you don’t have enough space: You have to cut the cake into pieces and put
them in the refrigerator wherever space is available. This available space is
non-contiguous. Paging is based on this concept.
What is Paging?
Paging is a non-contiguous memory management technique for retrieving processes
from secondary storage into main memory. Processes are brought in from secondary
storage as pages and stored in the main memory's frames.
When a program needs to access data, it sends a request to the operating system,
which loads the process into main memory from secondary memory. Each process is
divided into small fixed-size chunks called pages; similarly, main memory is divided
into equal fixed-size pieces called frames. The pages of a process can be stored at
different locations in main memory. The thing to note here is that the size of a page
and the size of a frame are the same.

As in the figure above, every page in secondary memory is 2 KB, and in the same way,
every frame in main memory is also 2 KB.
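As a rough sketch of this division, here is how one might compute the number of 2 KB pages a process occupies (the 7 KB process size is a made-up value for illustration):

```python
# Sketch: dividing a process into fixed-size pages (sizes assumed for illustration).
import math

PAGE_SIZE = 2 * 1024  # 2 KB pages, matching the figure

def split_into_pages(process_size_bytes):
    """Return how many fixed-size pages a process of the given size occupies."""
    return math.ceil(process_size_bytes / PAGE_SIZE)

print(split_into_pages(7 * 1024))  # a 7 KB process needs 4 pages (last one partly empty)
```

Note that the last page is only half full here; that unused space inside a page is the internal fragmentation mentioned later in this article.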
The problem is that physical memory is finite. When all the frames in physical memory
are filled, the operating system has to start swapping out processes that are not in use
to make room for new ones. This process is called swapping.
The idea is quite simple. Your computer has a finite amount of memory, which is used
to store the programs and data currently in use. When that memory is full and you try
to open another program or save a document, your computer has to get creative.
It can’t just delete old files or programs to free up space, because that would mean
losing data or programs you may need later. So instead, it looks for pages in memory
that are not currently being used. When it finds some, it copies their contents out to
secondary storage and frees those frames for new pages.
This process is known as paging in operating systems.
Address translation in paging
We have two types of addresses.
1. Logical address: An address generated by the CPU is commonly referred to as a logical
address. The set of logical addresses generated by a program is its logical address space.
2. Physical address: An address seen by the memory unit—that is, the one loaded into
the memory address register—is commonly referred to as a physical address. The set of
physical addresses corresponding to these logical addresses is the physical address space.

Let us now understand the paging concept in detail.

Firstly, we should know that paging uses non-contiguous memory allocation. Pages
from secondary memory are stored in main memory at different locations, in frames
of the same size as the pages.
Whenever a request for a page comes, the CPU tries to find that page in main memory,
because the access time of main memory is much lower than that of secondary
memory. To access the page, the CPU generates a logical address. Main memory,
however, is addressed by physical addresses, so the CPU needs to convert the logical
address to a physical one. We need something that gives us the physical address
corresponding to the logical address generated by the CPU. That is the job of the page
table. So what is a page table? It is a data structure stored in main memory that holds
the mapping between logical addresses and physical addresses. A logical address is
mapped to a physical address that indicates which frame the page is in, while the
offset gives the location of the instruction within that frame.
Here the logical address has two parts: the first part is the page number and the
second part is the instruction offset.

Suppose p = 2 and the page offset = 8. This means the CPU wants to access page 2 and
the eighth instruction within that page. But remember, this is a logical address; to
actually fetch the page from main memory, the CPU needs a physical address. So the
CPU checks the page table and gets the physical address corresponding to that
logical address.
NOTE: Each process has its own page table.
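The translation described above can be sketched in Python. The page size, page-table contents, and addresses below are all made-up values for illustration, not taken from the article's figure:

```python
# Sketch of logical-to-physical address translation (illustrative values).
PAGE_SIZE = 16  # words per page (and per frame); assumed for this example

# Hypothetical page table for one process: page number -> frame number
page_table = {0: 5, 1: 2, 2: 7, 3: 0}

def translate(logical_address):
    page = logical_address // PAGE_SIZE   # page number p
    offset = logical_address % PAGE_SIZE  # offset d within the page
    frame = page_table[page]              # page-table lookup
    return frame * PAGE_SIZE + offset     # physical address

# Page 2, offset 8 -> frame 7, offset 8
print(translate(2 * PAGE_SIZE + 8))  # 7*16 + 8 = 120
```

Since each process has its own page table, a real system would select the table for the currently running process before translating.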

Advantages of Paging
 Conserve memory by only keeping the active pages in memory. This is especially
helpful in large-scale systems where memory is scarce.
 Enables the operating system to manage more processes by allowing each process to
have its dedicated memory space. This maximizes efficiency and performance by
allowing the operating system to schedule and run each process without conflicts.
 Allows for greater flexibility and scalability regarding the size and complexity of the
systems that can be created.
 Parts of a program can be stored at different locations in main memory.
 It solves the problem of external fragmentation.
 Swapping becomes very easy due to equal-sized pages and frames.
Disadvantages of Paging
 It can be very inefficient. When a process needs more memory, the operating system
must find a block of unused memory and copy it to the process.
 This process can take a long time and, in some cases, can even crash the system.
 Paging can cause internal fragmentation, which makes the system run more slowly.
 The page table itself occupies memory space.
 A separate page table must be maintained for each process.
 Memory access time increases as the page table needs to be accessed.

When to Use Paging


 When you have too many processes running and not enough physical memory to
store them all.
 When you have a large process that can be split into multiple pages.
 When you want to load a process into physical memory without loading the entire
process.
Conclusion
Paging is a technique used in most modern operating systems to improve performance.
When an application needs more memory than is currently available, the operating
system will page it out to disk. This allows the application to continue running without
interruption, and the memory can be reclaimed when needed.

What is Paging in OS?


The operating system employs paging, a storage technique, to retrieve processes in
the form of pages from secondary storage and position them within primary memory.
The foundational idea of paging is to partition each process into distinct pages;
correspondingly, primary memory is partitioned into frames.

Each page of a process is stored in one of the available memory frames. These
frames may be in different parts of memory; unlike contiguous allocation, there is
no need to find a run of adjacent frames or a large enough hole. Process pages are
typically kept in secondary storage until they are required in main memory.

The OS determines the frame size. Every frame must have exactly the same size,
and the page size must equal the frame size, since the pages used in paging are
mapped into the frames.

Example of Paging in OS

If main memory is 16 KB and the frame size is 1 KB, main memory is divided into 16
frames of 1 KB each. There are four processes in the system: P1, P2, P3, and P4, each
4 KB in size. Each process is divided into 1 KB pages, so one page can be stored in one
frame.
Because all the frames are initially empty, the pages of the processes are stored
contiguously. Frames, pages, and their mapping are shown in the diagram below.

Think about the scenario where P2 and P4 are eventually moved to the waiting state.
Now eight frames are empty, so other pages can be loaded in their place. Process P5,
which is 8 KB (8 pages) in size, is waiting in the ready queue.

Thanks to paging, we can store a process in multiple places: there are 8
non-contiguous free frames available in memory. So, in place of the pages of
P2 and P4, we can load the pages of P5.
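The idea of placing P5's pages into whatever frames happen to be free can be sketched as follows; the specific frame numbers are assumptions based on the example layout, not from a real diagram:

```python
# Sketch: allocating a process's pages to free frames (frame numbers assumed).
# Suppose the frames of P2 (4-7) and P4 (12-15) were freed; P1 and P3 still
# occupy the remaining frames of the 16-frame main memory.
free_frames = [4, 5, 6, 7, 12, 13, 14, 15]

def allocate(num_pages, free):
    """Assign each page to any free frame; non-contiguous frames are fine."""
    if num_pages > len(free):
        raise MemoryError("not enough free frames")
    # page i -> assigned[i]; this list effectively becomes the page table
    return [free.pop(0) for _ in range(num_pages)]

p5_page_table = allocate(8, free_frames)
print(p5_page_table)  # [4, 5, 6, 7, 12, 13, 14, 15] -- not contiguous in memory
```

The resulting list is exactly the per-process page table: page 4 of P5 lives in frame 12, even though pages 3 and 4 are adjacent logically.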

Now we have a clear idea about paging in the operating system; let's define it more
formally.
Paging is a memory management scheme that eliminates the need for contiguous
allocation of physical memory. The process of retrieving processes in the form of
pages from secondary storage into main memory is known as paging. The basic
purpose of paging is to divide each process into pages; additionally, main memory
is divided into frames. This scheme permits the physical address space of a process
to be non-contiguous.
In paging, the physical memory is divided into fixed-size blocks called page frames,
which are the same size as the pages used by the process. The process’s logical
address space is also divided into fixed-size blocks called pages, which are the same
size as the page frames. When a process requests memory, the operating system
allocates one or more page frames to the process and maps the process’s logical
pages to the physical page frames.
The mapping between logical pages and physical page frames is maintained by the
page table, which is used by the memory management unit to translate logical
addresses into physical addresses. The page table maps each logical page number
to a physical page frame number.
Terminologies Associated with Memory Management
 Logical Address or Virtual Address: An address generated by the CPU and used
by a process to access memory. It is called a logical or virtual address because it
is not a physical location in memory but a reference to a location within the
process's logical address space.
 Logical Address Space or Virtual Address Space: The set of all logical
addresses generated by a program. It is usually measured in words or bytes
and is divided into fixed-size pages in a paging scheme.
 Physical Address: An address that corresponds to an actual location in
memory. It is the address available on the memory unit and is used by the
memory controller to access memory.
 Physical Address Space: The set of all physical addresses that correspond
to the logical addresses in a process's logical address space. It is usually
measured in words or bytes and is divided into fixed-size frames in a
paging scheme.
In a paging scheme, the logical address space is divided into fixed-size pages, and
each page is mapped to a corresponding frame in the physical address space. The
operating system keeps a page table for each process, which maps the process's
logical addresses to the corresponding physical addresses. When a process accesses
memory, the CPU generates a logical address, which is translated to a physical
address using the page table. The memory controller then uses the physical address
to access memory.

How Does Paging Work?


Paging is a method used by operating systems to manage memory efficiently. It
breaks physical memory into fixed-size blocks called “frames” and logical memory
into blocks of the same size called “pages.” When a program runs, its pages are
loaded into any available frames in the physical memory.
This approach prevents fragmentation issues by keeping memory allocation uniform.
Each program has a page table, which the operating system uses to keep track of
where each page is stored in physical memory. When a program accesses data, the
system uses this table to convert the program’s address into a physical memory
address.
Paging allows for better memory use and makes it easier to manage. It also supports
virtual memory, letting parts of programs be stored on disk and loaded into memory
only when needed. This way, even large programs can run without fitting entirely into
main memory.
 If Logical Address = 31 bits, then Logical Address Space = 2^31 words = 2 G
words (1 G = 2^30)
 If Logical Address Space = 128 M words = 2^7 * 2^20 = 2^27 words, then Logical
Address = log2(2^27) = 27 bits
 If Physical Address = 22 bits, then Physical Address Space = 2^22 words = 4 M
words (1 M = 2^20)
 If Physical Address Space = 16 M words = 2^4 * 2^20 = 2^24 words, then Physical
Address = log2(2^24) = 24 bits
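These bit-width calculations can be checked with a few lines of Python:

```python
# Sketch: relating address width (bits) and address-space size (words).
from math import log2

def space_words(addr_bits):
    """Address-space size, in words, for a given address width."""
    return 2 ** addr_bits

def addr_bits(space_in_words):
    """Address width needed for a power-of-two address space."""
    return int(log2(space_in_words))

print(space_words(31))            # 2147483648 words = 2 G words
print(addr_bits(128 * 2 ** 20))   # 27 bits for a 128 M-word space
print(space_words(22))            # 4194304 words = 4 M words
print(addr_bits(16 * 2 ** 20))    # 24 bits for a 16 M-word space
```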
The mapping from virtual to physical address is done by the Memory Management
Unit (MMU) which is a hardware device and this mapping is known as the paging
technique.
 The Physical Address Space is conceptually divided into a number of fixed-size
blocks, called frames.
 The Logical Address Space is also split into fixed-size blocks, called pages.
 Page Size = Frame Size
Example
 Physical Address = 12 bits, then Physical Address Space = 4 K words
 Logical Address = 13 bits, then Logical Address Space = 8 K words
 Page size = frame size = 1 K words (assumption)

The address generated by the CPU is divided into


 Page number (p): the number of bits required to represent the pages in the
Logical Address Space; it identifies which page is being referenced.
 Page offset (d): the number of bits required to represent a particular word within
a page (i.e., the page size of the Logical Address Space); it identifies the word
inside the page.
In a paging scheme, the physical address space is divided into fixed-size frames,
each of which contains some number of bytes or words. When a process is running,
its logical address space is split into fixed-size pages, which are mapped to
corresponding frames in the physical address space.
Physical Address is divided into:
To represent a physical address in this scheme, two parts are commonly used:
Frame Number: The number of the frame within the physical address space that
contains the byte or word being addressed. The number of bits required to
represent the frame number depends on the size of the physical address space
and the size of each frame. For instance, if the physical address space is 2^20
bytes and each frame is 4 KB (2^12 bytes) in size, then there are 2^(20-12) = 2^8
frames, so the frame number requires 20 - 12 = 8 bits.
Frame Offset: The number of the byte or word within the frame that is being
addressed. The number of bits required to represent the frame offset depends on
the size of each frame. For instance, if every frame is 4 KB in size, then the frame
offset requires 12 bits. So, a physical address in this scheme may be represented
as follows:
Physical Address = (Frame Number << Number of Bits in Frame Offset) + Frame
Offset, where “<<” represents a bitwise left shift operation.
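A minimal sketch of this formula, assuming 4 KB frames (a 12-bit offset); the frame number and offset are arbitrary example values:

```python
# Sketch of the physical-address formula above (frame size assumed: 4 KB).
OFFSET_BITS = 12  # 4 KB = 2^12 bytes -> 12-bit frame offset

def physical_address(frame_number, frame_offset):
    """Physical Address = (Frame Number << offset bits) + Frame Offset."""
    assert 0 <= frame_offset < (1 << OFFSET_BITS), "offset must fit in the frame"
    return (frame_number << OFFSET_BITS) + frame_offset

# Frame 3, byte 0x2A5 within the frame -> 3*4096 + 677
print(hex(physical_address(3, 0x2A5)))  # 0x32a5
```

Notice that with power-of-two frame sizes the shift simply concatenates the two bit fields, which is why hardware can do this translation cheaply.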

During the Fetch-Decode-Execute Cycle, the following steps happen:

 Fetch
o The PC is loaded with 0
o The value from the PC (0) is copied to the MAR
o The address from the MAR (0) is sent across the address bus, together with a
read signal sent across the control bus
o The data from that location in memory (0) is sent down the data bus to the MDR
o The PC is incremented by 1
 Decode
o The data is sent from the MDR to the CIR where it is split into the opcode and
operand
o This is sent to the CU to be decoded
 Execute
o Which registers are used here will depend on the instruction being executed
 If a value is being inputted (INP) the ACC will store the value
 If a value is being outputted (OUT) this will be the value currently in
the ACC
 If a value is loaded from RAM (LDA) this will be sent across the data bus
from RAM (in the address location in the MAR) to the MDR
 If a value is to be stored (STA) it will take the value from the ACC, send it
to the MDR and then send it across the data bus to RAM (to the address
location in the MAR)
 If a value is being added to or subtracted from another value (ADD/SUB),
the calculation will take place in the ALU
 If the LMC code is to branch (BRA/BRZ/BRP) the comparison will take
place in the ALU
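The cycle above can be sketched as a toy simulator. It uses the standard LMC opcode numbering (1xx = ADD, 3xx = STA, 5xx = LDA, 000 = HLT); the program and data values are made up for illustration:

```python
# Toy fetch-decode-execute loop for an LMC-like machine (program is illustrative).
ram = [0] * 100
ram[0] = 516   # LDA 16: load the value at address 16 into the ACC
ram[1] = 117   # ADD 17: add the value at address 17 to the ACC
ram[2] = 318   # STA 18: store the ACC at address 18
ram[3] = 0     # HLT
ram[16], ram[17] = 9, 5

pc, acc = 0, 0
while True:
    mar = pc                                 # fetch: PC copied to MAR
    mdr = ram[mar]                           # memory -> MDR via the data bus
    cir = mdr                                # MDR -> CIR
    pc += 1                                  # PC incremented
    opcode, operand = cir // 100, cir % 100  # decode: split opcode and operand
    if opcode == 0:                          # execute: HLT
        break
    elif opcode == 5:                        # LDA: RAM -> MDR -> ACC
        acc = ram[operand]
    elif opcode == 1:                        # ADD: calculation done by the ALU
        acc += ram[operand]
    elif opcode == 3:                        # STA: ACC -> MDR -> RAM
        ram[operand] = acc

print(ram[18])  # 14
```

Each pass through the loop mirrors the bullet points: PC to MAR, memory to MDR, MDR to CIR, then the CU dispatches on the opcode.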
Explain which registers are used and their values when the line STA count is executed and
the accumulator is holding the value 9. The label count refers to memory location 16.

How to answer this question:

 The instruction being executed in this example is STA count, so the registers used will be:
o ACC - the accumulator is holding the value "9"
o MDR - the value "9" from the ACC will be copied to the MDR
o MAR - the value 16 will be stored here so the data is sent to memory location 16
 The value that’s in the accumulator (ACC) is going to be stored in memory (RAM)
 To work out where in memory it will be stored we need to know what count represents
 In this question, we've been told it's 16 - "the label count refers to memory location 16"
 You're not always told this in your exam, so you should also be able to count the lines of
code
 It’s best to write the line numbers on the code in the question, to see which line count is
on. (Don’t forget that the first line is memory location 0!)
 "Count DAT 0" is on line 16, so the value "16" is what goes to the MAR register. (You
already know this from the question above.)
 It is always the value from the accumulator (ACC) that is stored, so the value "9" must
go to the MDR as data can only be sent to memory from the MDR
 Then the value is sent to the memory location, in this case 16

Answer:

Example answer that gets full marks:

The contents of the accumulator (9 in this case) will be copied to the MDR [1 mark] and then 9
is copied to location 16.

Acceptable answers you could have given instead:

The value 16 is copied to the MAR.


The central processing unit (CPU) is like the brain of the computer. It
works by following a basic process called the fetch-decode-execute cycle, or
instruction cycle. This cycle is how the CPU handles instructions from a
program. In this article, we’ll look at each step of this cycle and explain why
it’s important for making the computer work.

THE THREE STAGES OF THE INSTRUCTION CYCLE

1. Fetch: Retrieving the Instruction


The first step in the instruction cycle is fetching the instruction from
memory. The CPU relies on a special register called the program counter
(PC) to keep track of the address of the next instruction to be executed.
 The PC points to the memory address of the instruction.

 The CPU sends a signal to the memory to retrieve the instruction at that
address.

 The retrieved instruction is then stored in the instruction register (IR).

This stage is crucial because it ensures that the CPU knows what task to
perform next.

2. Decode: Interpreting the Instruction


Once the instruction is fetched, the next step is to decode it. The control unit
(CU) of the CPU is responsible for this process.

 The CPU breaks down the instruction into its components, typically
an operation code (opcode: the part of a machine-language instruction that
specifies the operation to be performed) and operands (the parts of an
instruction that provide the data or addresses needed for the operation).

 The opcode indicates the type of operation to be performed, such as


addition, subtraction, data transfer, or control operations like jumps.

 The operands specify the data to be used or the addresses of the data
involved.

Decoding translates the instruction into signals that can trigger the
appropriate actions within the CPU.

3. Execute: Performing the Operation


The final stage is execution. Here, the CPU carries out the decoded
instruction using its arithmetic logic unit (ALU) and other components. This involves:

 Fetching any necessary data from registers or memory as specified by the
instruction.

 Performing the operation defined by the opcode (e.g., adding two numbers,
moving data from one place to another, or branching to a different part of
the program).

 Storing the result back in a register or memory location as required.


This stage effectively completes the task defined by the original instruction,
modifying the system’s state accordingly.

COMPONENTS INVOLVED IN THIS PROCESS


To fully understand the fetch-decode-execute cycle, it’s essential to know the
primary components involved:

 Program Counter (PC): Keeps track of the next instruction’s address.

 Instruction Register (IR): Holds the current instruction being executed.

 Control Unit (CU): Decodes the instruction and manages the execution
process.

 Arithmetic Logic Unit (ALU): Performs arithmetic and logical operations.

 Registers: Small, fast storage locations for temporary data and instructions.

 Memory: Stores the program and data needed by the CPU.

EXAMPLE
Consider a simple instruction to add two numbers stored in memory.

1. Fetch: The PC points to memory address 100. The instruction at this
address, ADD A, B, is fetched and stored in the IR.

2. Decode: The CU decodes the ADD A, B instruction, identifying it as an
addition operation and determining the locations of operands A and B.

3. Execute: The CPU fetches the values from locations A and B, performs
the addition, and stores the result in a specified register or memory
location.
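A minimal sketch of this three-step walk-through; the instruction encoding, the addresses of A and B, and their values are all invented for illustration:

```python
# Sketch of the ADD A, B example (encoding and operand values are assumptions).
memory = {100: ("ADD", "A", "B"), "A": 7, "B": 35}

pc = 100
ir = memory[pc]                 # fetch: the instruction at address 100 goes to the IR
opcode, src1, src2 = ir         # decode: the CU splits opcode and operand locations
if opcode == "ADD":             # execute: the ALU adds the two operand values
    result = memory[src1] + memory[src2]

print(result)  # 42
```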
