
Practical No. 1

AIM: UNIX basic commands

Theory-

What are Unix Commands?

Unix commands are instructions used to interact with the Unix operating system. They perform tasks
such as navigating the file system, managing files and directories, and executing programs.

Types of Unix Commands

1. Internal Commands: Built-in commands that are part of the shell.

2. External Commands: Programs that are stored in files and executed by the shell.

Basic Unix Command Structure

1. Command: The name of the command.

2. Options: Flags or switches that modify the behavior of the command.

3. Arguments: Files, directories, or other data that the command operates on.
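For example, in the command line ls -l /home, ls is the command, -l is an option, and /home is the argument.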

Key Concepts

1. Shell: The program that interprets and executes Unix commands.

2. Terminal: The interface through which users interact with the shell.

3. Syntax: The rules that govern the structure of Unix commands.

Importance of Unix Commands


1. Efficient Navigation: Unix commands allow users to navigate the file system
efficiently.
2. File Management: Unix commands provide a powerful way to manage files and
directories.
3. Automation: Unix commands can be used to automate repetitive tasks.

Common Unix Commands

Some common Unix commands include:


- Navigation: cd, pwd, ls

- File Management: mkdir, rm, cp, mv

- File Viewing: cat, more, less

- User Management: whoami, su, sudo

- Miscellaneous: echo, man, clear, exit.
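A short illustrative session using several of these commands (the directory and file names are only examples, shown at an ordinary user prompt $):

$ pwd
/home/student
$ mkdir demo
$ cd demo
$ echo "hello" > notes.txt
$ cat notes.txt
hello
$ cp notes.txt copy.txt
$ ls
copy.txt  notes.txt
$ rm copy.txt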

All commands will be run as the root user. Example command lines are prefixed with the # symbol
signifying the shell prompt; you should type in only text after the prompt.
Compiling the benchmark

The laboratory I/O benchmark source code has been preinstalled onto your RPi4 board. However, you
will need to build it before you can begin work. Once you have logged into your RPi4 (see Advanced
Operating Systems: Lab Setup), build the bundle:
# make -C io
Benchmark arguments

If run without any parameters, the benchmark will list its potential arguments and default settings:

usage: io-benchmark -c|-r|-w [-Bdgjqsv] [-b buffersize] [-n iterations] [-t totalsize] path

Modes (pick one):
-c 'create mode': create benchmark data file
-r 'read mode': read() benchmark
-w 'write mode': write() benchmark

Optional flags:
-B Run in bare mode: no preparatory activities
-d Set O_DIRECT flag to bypass buffer cache
-g Enable getrusage(2) collection
-j Output as JSON
-q Just run the benchmark, don't print stuff out
-s Call fsync() on the file descriptor when complete
-v Provide a verbose benchmark description
-b buffersize Specify the buffer size (default: 16384)
-n iterations Specify the number of times to run (default: 1)
-t totalsize Specify the total I/O size (default: 16777216)
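For example, a typical sequence (the data file path is only a placeholder) would first create the benchmark data file and then run the read() benchmark verbosely with a larger buffer:

# io-benchmark -c /data/iofile
# io-benchmark -r -v -b 65536 /data/iofile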

Some basic UNIX commands and their functions:

cat: Displays file contents
cd: Changes directory to dirname
chgrp: Changes file group
ls: Lists directory contents and information about file types
mkdir: Creates a new directory dirname
mv: Moves (renames) an old name to a new name
pwd: Prints the current working directory
tail: Prints the last few lines of a file
touch: Updates the access and modification times of a file
cmp: Compares the contents of two files
cut: Cuts out selected fields of each line of a file
vi: Opens the vi text editor
ex, edit: Line editors


Practical No. 2

AIM: Implementation of Process Synchronization – (Reader-Writer Problem)

THEORY: In the Reader-Writer problem:


 Multiple readers can read the shared data simultaneously.
 Writers must have exclusive access to the shared data.
 The goal is to avoid race conditions and ensure proper synchronization.
We use semaphores to control access:
 x: Protects the readercount variable.
 y: Ensures mutual exclusion for writers.

ALGORITHM:
1. Initialize semaphores x and y to 1.
2. Reader Process:
o Wait on x and increment readercount.
o If it's the first reader, wait on y to block writers.
o Post x.
o Read data.
o Wait on x, decrement readercount.
o If it's the last reader, post y.
o Post x.
3. Writer Process:
o Wait on y.
o Write data.
o Post y.

PROGRAM CODE:
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>
#include <unistd.h> /* for sleep() */

sem_t x, y;
pthread_t tid;
pthread_t writerthreads[100], readerthreads[100];
int readercount = 0;

void *reader(void* param)
{
sem_wait(&x);
readercount++;
if(readercount == 1)
sem_wait(&y); // First reader blocks writer
sem_post(&x);

printf("\n%d Reader is inside", readercount);


sleep(1); // Simulate reading

sem_wait(&x);
readercount--;
if(readercount == 0)
sem_post(&y); // Last reader allows writer
sem_post(&x);

printf("\n%d Reader is leaving", readercount + 1);


pthread_exit(NULL);
}

void *writer(void* param)
{
printf("\nWriter is trying to enter");
sem_wait(&y);

printf("\nWriter has entered");

sleep(1); // Simulate writing


printf("\nWriter is leaving");
sem_post(&y);
pthread_exit(NULL);
}

int main()
{
int n, i;
printf("Enter the number of readers and writers: ");
scanf("%d", &n);

sem_init(&x, 0, 1);
sem_init(&y, 0, 1);

for(i = 0; i < n; i++) {
pthread_create(&readerthreads[i], NULL, reader, NULL);
pthread_create(&writerthreads[i], NULL, writer, NULL);
}

for(i = 0; i < n; i++) {
pthread_join(readerthreads[i], NULL);
pthread_join(writerthreads[i], NULL);
}
sem_destroy(&x);
sem_destroy(&y);

return 0;
}
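Because the program uses POSIX threads and semaphores, it must be linked against the pthread library. A typical build-and-run line (the source file name is assumed) is:

$ gcc reader_writer.c -o reader_writer -pthread
$ ./reader_writer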

SAMPLE OUTPUT:
Practical No. 3

AIM: Implementation of mutex and semaphore.

THEORY:
 Producer-Consumer Problem is a classic example of a multi-process synchronization problem.
 A producer adds items to a buffer, and a consumer removes items.
 The buffer has a limited size. Synchronization is required to avoid:
o Overwriting buffer (Producer should wait if the buffer is full)
o Reading empty buffer (Consumer should wait if the buffer is empty)

We use:
 mutex – mutual exclusion lock to protect critical sections.
 full – counts how many items are in the buffer.
 empty – counts how many empty spaces are available.

ALGORITHM:
1. Initialize mutex = 1, full = 0, empty = buffer_size, x = 0

2. Display a menu:
1. Producer
2. Consumer
3. Exit

3. On Producer:
o If mutex == 1 and empty > 0, produce an item.
o Otherwise, buffer is full.

4. On Consumer:
o If mutex == 1 and full > 0, consume an item.
o Otherwise, buffer is empty.
5. Use wait() to decrement and signal() to increment semaphores safely.
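The program below simulates the semaphore operations with plain integers so that the menu can be stepped through by hand. For comparison, the following is a minimal threaded sketch of the same producer-consumer logic using real POSIX counting semaphores and a mutex; the names, buffer size, and item count are illustrative and not part of the original program:

#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define BUFSIZE 3

sem_t full_slots, empty_slots;                    /* counting semaphores */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; /* protects the buffer */
int buffer[BUFSIZE], in = 0, out = 0;

void *producer(void *arg) {
    for (int item = 1; item <= 5; item++) {
        sem_wait(&empty_slots);               /* wait for a free slot */
        pthread_mutex_lock(&lock);
        buffer[in] = item;
        in = (in + 1) % BUFSIZE;
        printf("Producer produces item %d\n", item);
        pthread_mutex_unlock(&lock);
        sem_post(&full_slots);                /* signal a filled slot */
    }
    return NULL;
}

void *consumer(void *arg) {
    for (int i = 0; i < 5; i++) {
        sem_wait(&full_slots);                /* wait for a filled slot */
        pthread_mutex_lock(&lock);
        int item = buffer[out];
        out = (out + 1) % BUFSIZE;
        printf("Consumer consumes item %d\n", item);
        pthread_mutex_unlock(&lock);
        sem_post(&empty_slots);               /* signal a free slot */
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUFSIZE);       /* all slots start empty */
    sem_init(&full_slots, 0, 0);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    sem_destroy(&empty_slots);
    sem_destroy(&full_slots);
    return 0;
}

(Build with, e.g., gcc prodcons.c -o prodcons -pthread.)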

PROGRAM CODE:

#include <stdio.h>
#include <stdlib.h>

int mutex = 1;
int full = 0;
int empty = 3; // buffer size
int x = 0; // item counter
int wait(int s) {
return --s;
}

int signal(int s) {
return ++s;
}
void producer() {
mutex = wait(mutex);
full = signal(full);
empty = wait(empty);
x++;
printf("\nProducer produces item %d", x);
mutex = signal(mutex);
}
void consumer() {
mutex = wait(mutex);
full = wait(full);
empty = signal(empty);
printf("\nConsumer consumes item %d", x);
x--;
mutex = signal(mutex);
}

int main() {
int n;

printf("\n--- Producer-Consumer Using Mutex & Semaphore ---");


printf("\n1. Producer\n2. Consumer\n3. Exit");

while (1) {
printf("\n\nEnter your choice: ");
scanf("%d", &n);

switch (n) {
case 1:
if ((mutex == 1) && (empty != 0))
producer();
else
printf("\nBuffer is full!!");
break;

case 2:
if ((mutex == 1) && (full != 0))
consumer();
else
printf("\nBuffer is empty!!");
break;

case 3:
printf("\nExiting...\n");
exit(0);

default:
printf("\nInvalid choice!");
}
}
return 0;
}

OUTPUT:
Practical No. 4

AIM: Implementation of IPC using shared memory.

THEORY:
 Shared memory is the fastest form of IPC because the data is not copied between processes.
 shmget() creates or accesses a shared memory segment.
 shmat() attaches the shared memory to the process’s address space.
 shmdt() detaches the shared memory.
 shmctl() is used to control or delete shared memory.

PROGRAM CODE:
 Writer Code (writer.c):
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/shm.h>
#include <string.h>

int main() {
void *shared_memory;
char buff[100];
int shmid;

// Create shared memory segment


shmid = shmget((key_t)2345, 1024, 0666 | IPC_CREAT);
if (shmid == -1) {
perror("shmget failed");
exit(1);
}
printf("Key of shared memory is %d\n", shmid);
// Attach shared memory
shared_memory = shmat(shmid, NULL, 0);
if (shared_memory == (void *)-1) {
perror("shmat failed");
exit(1);
}

printf("Process attached at %p\n", shared_memory);


printf("Enter some data to write to shared memory: ");
fgets(buff, 100, stdin); // Use fgets for better input handling

strcpy((char *)shared_memory, buff);


printf("You wrote: %s\n", (char *)shared_memory);

return 0;
}

 Reader Code (reader.c):


#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/shm.h>
#include <string.h>

int main() {
void *shared_memory;
int shmid;

// Access shared memory segment


shmid = shmget((key_t)2345, 1024, 0666);
if (shmid == -1) {
perror("shmget failed");
exit(1);
}

// Attach shared memory


shared_memory = shmat(shmid, NULL, 0);
if (shared_memory == (void *)-1) {
perror("shmat failed");
exit(1);
}

printf("Reader attached to shared memory at %p\n", shared_memory);


printf("Data read from shared memory: %s\n", (char *)shared_memory);

// Detach and remove shared memory


shmdt(shared_memory);
shmctl(shmid, IPC_RMID, NULL); // Delete the shared memory segment

return 0;
}
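To try the two programs, compile each one and run the writer first in one terminal, then the reader in another (file names taken from the headings above; any C compiler works):

$ gcc writer.c -o writer
$ gcc reader.c -o reader
$ ./writer     # terminal 1: type a line of text when prompted
$ ./reader     # terminal 2: prints the text left in shared memory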

OUTPUT:
Writer Terminal:

Reader Terminal:
Practical No. 5

Aim: Implementation of page replacement algorithm.

Theory:
 A Page Replacement Algorithm is used when a page fault occurs and memory is full.
 FIFO (First-In-First-Out) algorithm removes the page that has been in memory the longest.
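A quick hand trace of the reference string used in the program below (7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2) with a capacity of 4 frames: the first four references fault while the frames fill; 3 then evicts 7, 4 evicts 0, and the later 0 evicts 1; every other reference is a hit, giving 7 page faults in total.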

PROGRAM CODE:
#include <iostream>
#include <queue>
#include <unordered_set>
using namespace std;

int main() {
int capacity = 4;
int arr[] = {7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2};
int n = sizeof(arr) / sizeof(arr[0]);

unordered_set<int> pages_in_memory;
queue<int> page_order;
int page_faults = 0;

for (int i = 0; i < n; i++) {


int page = arr[i];

// Page not found in memory


if (pages_in_memory.find(page) == pages_in_memory.end()) {

page_faults++;

if (pages_in_memory.size() == capacity) {
int oldest_page = page_order.front();
page_order.pop();
pages_in_memory.erase(oldest_page);
}

pages_in_memory.insert(page);
page_order.push(page);
}
// If page is already in memory, do nothing (no page fault)
}

cout << "Total Page Faults: " << page_faults << endl;
return 0;
}

OUTPUT:
Practical No. 6

Aim: To write a C program to implement IPC using the shared memory.

Theory:
 Shared Memory is an IPC mechanism where multiple processes can access the same memory segment.
 It is one of the fastest IPC techniques.
 Common system calls used:
o ftok() – Generates a unique key.
o shmget() – Allocates a shared memory segment.
o shmat() – Attaches the shared memory to the process’s address space.
o shmdt() – Detaches the shared memory.
o shmctl() – Performs control operations (like remove).

PROGRAM CODE:
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/types.h>

#define SEGSIZE 100

int main() {
int shmid;
key_t key;
char *segptr;
char buff[] = "poooda……";
// Generate unique key

key = ftok(".", 's');


if (key == -1) {
perror("ftok");
exit(1);
}

// Create or get shared memory segment


shmid = shmget(key, SEGSIZE, IPC_CREAT | IPC_EXCL | 0666);
if (shmid == -1) {
// If already exists, access it
shmid = shmget(key, SEGSIZE, 0);
if (shmid == -1) {
perror("shmget");
exit(1);
}
} else {
printf("Creating a new shared memory segment.\n");
printf("SHMID: %d\n", shmid);
}

// Attach shared memory


segptr = (char *)shmat(shmid, 0, 0);
if (segptr == (char *)-1) {
perror("shmat");
exit(1);
}

// Write to shared memory


printf("Writing data to shared memory...\n");
strcpy(segptr, buff);
printf("DONE\n");
// Read from shared memory
printf("Reading data from shared memory...\n");
printf("DATA: %s\n", segptr);
printf("DONE\n");

// Detach shared memory


if (shmdt(segptr) == -1) {
perror("shmdt");
exit(1);
}
// Remove shared memory segment
printf("Removing shared memory segment...\n");
if (shmctl(shmid, IPC_RMID, 0) == -1) {
printf("Can't remove shared memory segment...\n");
} else {
printf("Removed successfully.\n");
}
return 0;
}

OUTPUT:
Practical No. 7

Aim: To write simple Bash (/bin/bash) scripts.

Theory:
A Bash script is a plain text file containing a series of commands that the shell executes in sequence. The line
#!/bin/bash at the top (called a shebang) tells the operating system to use the Bash interpreter.

(a) Hello World Script


Code (hello.sh):
#!/bin/bash
# A simple Hello World script

echo "Hello World!"

OUTPUT:

(b) Arithmetic Operation Script


Code (sum.sh):
#!/bin/bash
# Script to perform addition

var=$((3 + 9))
echo "The sum is: $var"

OUTPUT:
Explanation:
 #!/bin/bash – Declares the Bash shell to interpret the script.
 echo – Prints output to the terminal.
 $((expression)) – Evaluates the arithmetic expression inside.
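To run either script, make it executable and invoke it directly, or pass it to bash:

$ chmod +x hello.sh sum.sh
$ ./hello.sh
Hello World!
$ bash sum.sh
The sum is: 12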
Practical No. 8

AIM: Implementation of Simple Shell Programs.

THEORY:
A shell script is a program executed by the command-line shell to automate tasks by running a series of
commands.
Factorial:
The factorial of a number n (written as n!) is the product of all positive integers from 1 to n. It is usually
calculated using loops.
Even and Odd Numbers:
An even number is divisible by 2 with no remainder; an odd number leaves a remainder of 1 when divided by 2.
The modulus operator % is used to check this.
Loops and Conditionals:
Bash uses loops like for and while to repeat commands, and if statements to execute code based on conditions.

PROGRAM CODE:
(a) Program to Find Factorial of a Number
#!/bin/bash
# Script to find factorial of a number

echo "Enter a number:"


read num
fact=1

for ((i=2; i<=num; i++))


do
fact=$((fact * i))

done
echo "Factorial of $num is: $fact"

OUTPUT:

(b) Program to Find Even and Odd Numbers Up to a Given Number


#!/bin/bash
# Script to print even and odd numbers up to n

echo "Enter a number:"


read n

echo "Even Numbers:"


i=1
while [ $i -le $n ]
do
if [ $((i % 2)) -eq 0 ]; then
echo "$i"
fi
((i++))
done

echo "Odd Numbers:"


i=1
while [ $i -le $n ]
do
if [ $((i % 2)) -ne 0 ]; then
echo "$i"

fi
((i++))
done

OUTPUT:
Practical No. 9

Aim: To study latency and TCP bandwidth.

Introduction:
The Transmission Control Protocol (TCP) is a fundamental communication protocol that provides reliable,
ordered, and error-checked delivery of a stream of data between applications running on hosts in an IP network.
TCP achieves this through a complex series of mechanisms including connection setup, data transfer, and
connection termination. Once a TCP connection reaches the ESTABLISHED state, it primarily focuses on
efficient data transfer.

TCP Rate Control Mechanisms:


TCP employs two critical mechanisms to control the flow of data and maintain network stability:
1. Flow Control:
This mechanism ensures that the sender does not overwhelm the receiver by sending more data than the
receiver’s buffer can handle. The receiver advertises a window size through acknowledgments,
indicating how much buffer space it currently has available. This advertised window size determines the
maximum amount of unacknowledged data the sender can transmit.
Modern TCP stacks can dynamically adjust socket buffer sizes based on system and network conditions
to optimize performance and avoid limiting throughput due to small fixed buffers.
2. Congestion Control:
Congestion control protects the network from becoming overloaded by controlling the amount of data
injected into the network. When packet loss is detected, typically via duplicate acknowledgments or
timeouts, TCP assumes network congestion and reduces its sending rate accordingly.
Key elements include:
o Slow Start: Initially, TCP increases its congestion window exponentially to quickly find the
available bandwidth.
o Congestion Avoidance: After detecting congestion, TCP reduces the window size and then
increases it more slowly (linearly) to avoid overloading the network again.
o Fast Retransmit and Recovery: Mechanisms to quickly recover from packet loss without waiting
for timeouts.
Together, these two windows — the receiver window and the congestion window — limit the amount of
unacknowledged data in the network. The effective window is the minimum of these two values.

Key Concepts: Latency and Bandwidth


 Latency:
Latency is the delay from when a data packet leaves one endpoint until it reaches the other endpoint. It is
often measured as the Round-Trip Time (RTT) — the time for a packet to travel to the destination and
for the acknowledgment to return.
RTT directly affects TCP performance because acknowledgments pace the growth of the congestion
window; the longer the RTT, the slower the window grows.
 Bandwidth:
Bandwidth refers to the maximum data transfer rate of a network path, usually measured in bits or bytes
per second. TCP attempts to utilize available bandwidth by increasing the congestion window until signs
of congestion (packet loss) occur, prompting it to reduce the sending rate.

Relationship Between Latency and Bandwidth


While latency and bandwidth are different metrics, they interplay closely in TCP:
 A high-bandwidth, low-latency link allows TCP to rapidly fill the pipeline with data, maximizing
throughput.
 Conversely, high latency limits how fast TCP can send new data since it must wait for
acknowledgments.
This interplay is often described by the Bandwidth-Delay Product (BDP), which represents the optimal
amount of data that should be "in flight" on the network to keep the pipeline full.
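For example, a 100 Mbit/s path with a 50 ms RTT has a BDP of 100,000,000 bits/s × 0.05 s = 5,000,000 bits, or roughly 625 KB; if the socket buffers (and hence the windows) are smaller than this, TCP cannot keep the pipe full and throughput stays below the link rate.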

Visualizing TCP Performance


Graphs plotting TCP bandwidth over time provide insight into how TCP adapts to network conditions. Common
visualizations include:
 Time vs Bandwidth: Showing bandwidth fluctuations during slow start, congestion events, and steady
state.
 Window Sizes: Overlaying congestion window and advertised window sizes to understand flow and
congestion control dynamics.
 Events: Annotating events like packet loss, retransmissions, or transitions out of slow start can help
diagnose performance bottlenecks.

The Benchmark
In this lab, you will run an IPC benchmark over TCP to observe and measure the effect of different socket
buffer sizes on TCP throughput and latency. By running the benchmark both with default buffer sizes and with
manually set socket buffer sizes, you will gain insights into how buffer tuning affects performance.
 The benchmark sends data over TCP on a fixed port (10141), so data segments and acknowledgments can
be tracked by their source and destination ports.
 Ensure the system's maximum socket buffer sizes are increased to allow for larger buffer settings and
prevent artificial limits on throughput.

Extended Practical Tasks


1. Measure TCP Throughput:
Run the benchmark and collect throughput data over time. Plot graphs showing bandwidth to observe
slow start and congestion avoidance behavior.
2. Adjust Socket Buffer Sizes:
Modify the socket buffer sizes using system calls or socket options to see how larger buffers affect
TCP’s ability to utilize available bandwidth (a minimal sketch follows this list).
3. Analyze Latency Effects:
Measure round-trip times using tools like ping or TCP timestamp options and correlate with throughput
changes.
4. Observe Congestion Events:
Use packet capture tools (e.g., Wireshark, tcpdump) to identify retransmissions, duplicate ACKs, and
window size updates.
5. Compare with Different Network Conditions:
Simulate or test over networks with varying latency and bandwidth to observe TCP’s adaptive behavior.
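For task 2, a minimal sketch of adjusting the send and receive socket buffer sizes with setsockopt(2) on an already-created TCP socket; the 1 MB value is only an example, and the kernel may clamp the request to its configured maximum:

#include <stdio.h>
#include <sys/socket.h>

/* Request `bytes` for both the send and receive buffers of socket `fd`.
   Returns 0 on success, -1 on failure. */
int set_socket_buffers(int fd, int bytes) {
    if (setsockopt(fd, SOL_SOCKET, SO_SNDBUF, &bytes, sizeof(bytes)) == -1) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) == -1) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}

/* Example: set_socket_buffers(sock, 1 << 20) requests ~1 MB buffers. */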

Summary
This lab explores the crucial roles of latency and bandwidth in TCP’s operation. By understanding and
measuring TCP’s rate control mechanisms and how they interact with network conditions, you gain valuable
insights into network performance tuning and protocol behavior.
