
MODULE-1 INTRODUCTION TO UNIX

UNIX OPERATING SYSTEM


Summary of UNIX
UNIX is a groundbreaking operating system developed in the
1970s by Ken Thompson, Dennis Ritchie, and others at AT&T
Bell Labs. It serves as the foundation for many modern operating
systems like Linux, macOS, and POSIX-compliant systems.
Initially designed for developers, UNIX played a crucial role in
software development and computing environments.
Key Features:
• Multiuser & Multitasking: Supports multiple users and
simultaneous processes.
• Shell Scripting: Enables automation through powerful
command-line scripts.
• Security: Implements robust security models with user
permissions.
• Portability: Runs on various hardware platforms.
• Process Tracking & Communication: Manages CPU usage,
disk space, and supports inter-user communication.
System Structure:
UNIX follows a layered architecture:
1. Hardware – The physical components.
2. Kernel – The core of the system, managing memory, files,
and processes.
3. Shell – Acts as an interface between users and the kernel,
processing commands.
4. Application Layer – Executes external applications.
UNIX vs. Linux:
• Linux is an open-source UNIX-like OS, whereas UNIX is
proprietary.
• Linux supports GUI and CLI, while traditional UNIX is
mostly CLI-based.
• Linux is more portable and flexible than UNIX.
Advantages of UNIX:
• Highly stable and secure
• Scalable and flexible for various computing needs
• Efficient command-line interface for advanced users
Disadvantages of UNIX:
• Complex for beginners
• Costly in some versions
• Limited software availability and lack of standardization
across versions
Despite its learning curve, UNIX remains widely used in
enterprise computing, scientific research, and web servers,
influencing modern OS development significantly.
UNIX ARCHITECTURE
Like any other Operating System, Unix acts as an interface between the
user and the hardware, i.e. the user and the computer itself. Much of
Unix's power comes from its clear division of the Operating System into
a kernel and a shell.
• Kernel: The kernel is a program in the Operating System
software that runs continuously in the background while the
machine is on. It is responsible for allocating resources to
processes and for handling and coordinating the multiple
processes running together.
• Shell: However, since the kernel is such an essential
program in the Operating System and performs almost all of
the major tasks, it should not be directly accessible to the
user. So, there is a program between the user and the kernel
that helps the user interact with the kernel. This program is
called the shell.
The Unix architecture has 4 layers. These layers are as shown
below:
Hardware: Hardware is the simplest and least powerful layer
in the Unix architecture. It consists of the physical, visible
components of the machine; whatever hardware is connected to a
Unix-based machine belongs to the hardware layer.
Kernel: This is the most powerful layer of the Unix architecture.
The kernel is responsible for acting as an interface between the
user and the hardware for the effective utilization of the hardware.
The kernel handles the hardware effectively by using the device
drivers. The kernel is also responsible for process management.
So, the two main responsibilities of the kernel are process
management and file management.
• Process Management: The processes that execute within the
operating system require a lot of management in terms of
memory being allocated to them, the resource allocation to
the process, process synchronization, etc. All this is done by
the Kernel in Unix OS. This is done using various operating
system techniques such as paging, virtual memory, swapping,
and context switching.
• File Management: File management involves managing the
data stored in the files. This also includes the transmission
of data stored in these files to the processes as and when
they request it.
Shell: We understood the importance of the kernel and that it
handles most of the important and complex tasks of Unix OS.
Since the kernel is such an important program of the Unix
Operating System, its direct access to the users can be
dangerous. Hence, the Shell comes into the picture. Shell is an
interpreter program that interprets the commands entered by the
user and then sends the requests to the kernel to execute those
commands. When the execution of the process is completed, the
shell again sends a request to the kernel to display the
program/information on the screen to the user. So, Kernel is an
interface between the user and the hardware and the Shell is an
interface between the user and the Kernel. The shell can be used
for opening a file, writing into the files, executing programs, etc.
There are 3 types of shells in the Unix Operating system.

• Bourne Shell (sh): The most widely available shell on Unix
systems across the world. It was the first shell available in
Unix and is often called simply "the shell".
• C Shell (csh): Developed at the University of California,
Berkeley, the C shell removes some of the obsolete features
and problems of the Bourne Shell and so improves on it.
• Korn Shell (ksh): Named after its creator, David Korn, this
shell builds on the C shell by removing its shortcomings and
also improves on the user interaction of the Bourne Shell.
Applications/Application Programs: The last layer of the Unix
architecture is the Application Program layer. As the name
suggests, this outermost layer of Unix Architecture is responsible
for executing the application programs.
In Unix, files and processes are core concepts. Here's a simple
overview:

1. Files in Unix
• Everything in Unix is treated as a file: documents,
directories, hardware devices, and even running processes
(through /proc).
• Files are organized in a single hierarchical directory
structure (starting from /, the root).
Types of Files:
• Regular files: text files, images, binaries, etc.
• Directories: folders that contain other files.
• Special files:
o Character device files (e.g., keyboard input)
o Block device files (e.g., hard drives)
• Links:
o Hard links: multiple directory entries point to the same
inode.
o Symbolic links (symlinks): like shortcuts.
• Sockets and named pipes: used for inter-process
communication (IPC).
Important file commands:
• ls — list files
• cat, less, more — view contents
• cp, mv, rm — copy, move, remove
• chmod, chown — change permissions and ownership
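A short illustrative session tying these commands together (the file names, user, group, sizes and dates below are only examples):
$ ls -l notes.txt
-rw-r--r-- 1 kt users 120 May 10 09:30 notes.txt
$ cp notes.txt backup.txt        # make a copy
$ chmod 644 backup.txt           # owner read/write, group and others read-only
$ chown kt backup.txt            # change ownership (usually requires root)
$ mv backup.txt old_notes.txt    # rename the copy
$ rm old_notes.txt               # remove it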

2. Processes in Unix
• A process is an instance of a running program.
• Each process has a unique Process ID (PID).
• Processes can spawn child processes (parent and child
relationship).
States of a Process:
• Running: currently executing.
• Sleeping: waiting for some event (like input).
• Stopped: process is halted.
• Zombie: completed execution but still has an entry in the
process table.
Important process commands:
• ps — view currently running processes.
• top — dynamic real-time view of system processes.
• kill PID — send a signal to a process (default is SIGTERM to
terminate).
• kill -9 PID — forcefully kill a process (SIGKILL).
• nice, renice — change the priority of a process.
• bg, fg, jobs — manage background and foreground jobs.
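For example, a hypothetical session (the PID 4321 is made up) might look like this:
$ sleep 600 &              # run a command in the background
[1] 4321
$ jobs                     # list background jobs
[1]+  Running    sleep 600 &
$ ps                       # show your current processes
$ kill 4321                # send SIGTERM (polite request to terminate)
$ kill -9 4321             # send SIGKILL (force termination) if needed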

3. Relationship Between Files and Processes


• Processes interact heavily with files.
• Processes open files, read from or write to them.
• File Descriptors are used internally:
o 0 → Standard Input (stdin)
o 1 → Standard Output (stdout)
o 2 → Standard Error (stderr)
• Processes inherit file descriptors from their parent
processes.
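As a quick illustration of how the shell uses these descriptors, the redirection operators below act on stdout (1), stderr (2), and stdin (0); the file names are arbitrary:
$ grep root /etc/passwd > out.txt        # redirect stdout (1) to out.txt
$ grep root /etc/nosuchfile 2> err.txt   # redirect stderr (2) to err.txt
$ wc -l < out.txt                        # feed out.txt to stdin (0)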

SYSTEM CALLS
The interface between a process and an operating system is
provided by system calls. In general, system calls are available as
assembly language instructions. They are also included in the
manuals used by the assembly level programmers.
Unix System Calls
System calls in Unix are used for file system control, process
control, interprocess communication etc. Access to the Unix
kernel is only available through these system calls. Generally,
system calls are similar to function calls; the difference is
that they transfer control from the user process to the kernel.
There are around 80 system calls in the Unix interface currently.
Details about some of the important ones are given as follows -
System Call   Description
access()      Checks whether the calling process has access to the required file
chdir()       Changes the current directory of the calling process
chmod()       Changes the mode (permissions) of a file
chown()       Changes the ownership of a particular file
kill()        Sends a signal to one or more processes
link()        Links a new file name to an existing file
open()        Opens a file for reading or writing
pause()       Suspends the calling process until a particular signal occurs
stime()       Sets the system time
times()       Gets the parent and child process times
alarm()       Sets the alarm clock of a process
fork()        Creates a new process
chroot()      Changes the root directory of the calling process
exit()        Terminates the calling process

FEATURES OF UNIX
1. Multiuser System
• Many users can work at the same time.
• Resources (CPU, memory) are shared fairly.
2. Multitasking System
• One user can run many tasks together.
• Switch between tasks or run them in the background.
3. Building-Block Approach
• Small, simple commands that do one job well.
• Commands can be combined using pipes (|); see the example after this list.
4. UNIX Toolkit
• Collection of powerful tools: text editors, compilers, network
programs, admin tools.
5. Pattern Matching
• Special symbols (*, ?) to match file names.
• Regular expressions for advanced search.
6. Programming Facility
• Shell is both a command interpreter and a programming
language.
• You can write shell scripts to automate tasks.
7. Documentation
• man command gives help on any command.
• Tons of resources and FAQs available online.
8. Portable
• Easily runs on different types of hardware.
9. Command-Line Interface (CLI)
• Powerful text-based control through shells (like bash, sh).
10. File System
• Organized in a hierarchical (tree-like) structure, starting
from / (root).
11. Networking
• Built-in strong support for networking (great for servers).
12. Security
• Strong security with permissions and access controls.
13. Open Source
• Many UNIX versions are open-source and freely available.
14. Scalable
• Can work on small devices and huge data centers.
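As a small illustration of the building-block approach (feature 3) and pattern matching (feature 5), assuming a directory that contains a couple of .txt files:
$ ls *.txt               # * matches any number of characters
notes.txt  todo.txt
$ ls note?.txt           # ? matches exactly one character
notes.txt
$ ls *.txt | wc -l       # pipe the file list into wc to count the files
2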
POSIX
What is POSIX?
POSIX stands for Portable Operating System Interface.
It is not an operating system itself — it's a standard.
POSIX defines rules and guidelines for how an operating
system (like UNIX) should behave so that programs written
for one system can work on others without needing major
changes.

Why POSIX Exists


• In the 1980s, many UNIX versions (like BSD, System V,
etc.) were a little different.
• Software written for one version of UNIX often did not
work on another.
• POSIX was created to standardize UNIX behavior so
programs could be portable (easily moved from one
UNIX to another).

What POSIX Covers


• System calls (how programs ask the OS for things, like
reading a file)
• Command-line utilities (like ls, cp, mv)
• Shell scripting standards
• Programming APIs (functions like fork(), exec(), read(),
etc.)

Examples of POSIX-Compliant Systems


• Linux
• macOS
• Solaris
• AIX
• FreeBSD
(Windows is not POSIX-compliant by default, but tools like
Cygwin can add a POSIX environment.)

In Short:
POSIX = A set of rules that make sure UNIX-like systems
behave similarly, helping software run across different
platforms.
SINGLE UNIX SPECIFICATIONS
What is the Single UNIX Specification (SUS)?
The Single UNIX Specification (SUS) is a standard that
defines what it means to be a "real" UNIX operating
system.
It is based on POSIX, but is broader and stricter.
It ensures that software can run across different UNIX
systems without needing to be rewritten.

Why was SUS Created?


• In the 1990s, many UNIX versions (Solaris, HP-UX,
AIX, etc.) had differences.
• Developers wanted one common standard.
• SUS was created by a group called The Open Group
to unify UNIX systems.

Main Parts of SUS


It defines four major areas:
Area                 What it Covers
Base Definitions     Terms, concepts, and conventions.
System Interfaces    System calls and libraries (fork(), read(), etc.).
Shell & Utilities    Standard shell behavior (like sh) and commands (ls, cp, etc.).
Networking Services  Network-related programming (e.g., sockets).
What Happens if a System Follows SUS?
• If an operating system meets all the SUS rules, it can
be officially called "UNIX®".
• Example of certified UNIX systems:
o IBM AIX
o HP-UX
o Oracle Solaris
(Linux is POSIX-compliant but not officially UNIX®
certified.)

SUS and POSIX — Quick Comparison


Aspect         POSIX                            SUS
Scope          Basic standards (mainly APIs)    Broader, more detailed, stricter
Owner          IEEE                             The Open Group
Certification  No official UNIX title           Can officially call system "UNIX®"

In Short:
SUS = The official standard that defines what UNIX
really is.
POSIX = A part of SUS focused on portability.
INTERNAL AND EXTERNAL COMMANDS
Internal vs External Commands in UNIX
Internal Commands
• These are built into the shell itself (like bash, sh).
• When you run them, the shell executes them
directly — no separate file is needed.
• They are faster because no separate process is
created.
• Examples:
o cd — change directory
o pwd — print working directory
o echo — display text
o exit — close the shell
o set, unset, export — environment control

➡ Internal commands = Part of the shell program itself.

External Commands
• These are separate executable files stored on disk.
• When you run them, the shell finds and starts a new
process to execute them.
• They are usually located in system directories like
/bin, /usr/bin, /usr/local/bin.
• Examples:
o ls — list files
o cp — copy files
o mv — move files
o cat — view file contents
o grep — search text
o chmod — change file permissions

➡ External commands = Separate files stored outside the shell.

Quick Summary Table


Feature       Internal Command              External Command
Location      Built into the shell          Stored as files (e.g., /bin/ls)
Speed         Faster (no new process)       Slower (starts a new process)
Examples      cd, pwd, echo                 ls, cat, grep
Modification  Change by modifying the shell Change by modifying the file

How to Check?
• Internal: Use help <command> → if it shows help, it's
internal.
• External: Use which <command> → if it shows a file
path, it's external.
Example:
$ help cd # shows help → internal
$ which ls # shows /bin/ls → external

In Short:

Internal = Built into shell, fast.


External = Separate program, starts a new process.
UTILITIES OF UNIX
Some important UNIX utilities are listed below:

Common Utilities in UNIX


Utility Purpose

cal Shows a calendar.

date Displays the system date and time.

echo Displays a message or text.

bc Acts as a calculator (basic calculator tool).


passwd Changes a user's password.

who Shows who is currently logged in.

uname Displays system information (like OS name).

tty Shows the file name of the terminal connected to standard input.

In Short:
These are simple command-line tools that make
system tasks easy!
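For example (the output shown is only illustrative and will differ on your system):
$ date
Mon May 12 10:15:32 IST 2025
$ who
palak    pts/0    May 12 09:58
$ uname
Linux
$ tty
/dev/pts/0
$ echo 5+3 | bc
8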

MODULE-2 UNIX FILE SYSTEM


File System:
The Unix File System is a hierarchical structure used to
organize and store files and directories efficiently. At the
top is the root directory (/), from which all files and
directories descend. Everything in Unix, including
hardware devices, is treated as a file. Devices are
represented as special files in the /dev directory,
categorized as either block or character special files. Each
file has a unique inode and can be accessed via system
calls using unique pathnames. The system also supports
symbolic links for flexible file organization and uses
permissions to manage file access. Its robust, tree-like
structure makes it fundamental to Unix-based operating
systems and essential knowledge for system navigation
and management.
File on Unix Operating System:
• In Unix everything is treated as a file; even devices are
treated as special files.
• All devices are represented by files called special files
that are located in the /dev directory.
• These are accessed in the same way as regular files.
• Device files fall into two categories: (1) block special
files and (2) character special files.
• In a block special file, data is transferred in blocks, so
it has characteristics similar to a disk.
• In a character special file, data is transferred as a
sequential stream of characters, like a keyboard.
• Every file on a Unix system has a unique inode.
• Processes access files through a well-defined set of system
calls.
• A file can be specified by a character string called a
pathname.
• Each pathname is unique and is converted to an inode.
Types of Files:
The UNIX file system contains several different types of
files:
Ordinary Files
An ordinary file is a file on the system that contains data,
text, or program instructions.
• Used to store your information, such as some text you
have written or an image you have drawn. This is the
type of file that you usually work with.
• Always located within/under a directory file.
• Do not contain other files.
• In long-format output of ls -l, this type of file is
specified by the “-” symbol.
Directories
Directories store both special and ordinary files. For users
familiar with Windows or Mac OS, UNIX directories are
equivalent to folders. A directory file contains an entry for
every file and subdirectory that it houses. If you have 10
files in a directory, there will be 10 entries in the directory.
Each entry has two components. (1) The Filename (2) A
unique identification number for the file or directory
(called the inode number)
• Branching points in the hierarchical tree.
• Used to organize groups of files.
• May contain ordinary files, special files or other
directories.
• Never contain “real” information which you would
work with (such as text). Basically, just used for
organizing files.
• All files are descendants of the root directory, (
named / ) located at the top of the tree.
In long-format output of ls -l, this type of file is specified
by the “d” symbol.
Special Files
Used to represent a real physical device such as a printer,
tape drive or terminal, used for Input/Output (I/O)
operations. Device or special files are used for device
Input/Output(I/O) on UNIX and Linux systems. They
appear in a file system just like an ordinary file or a
directory. On UNIX systems there are two flavors of special
files for each device, character special files and block
special files :
• When a character special file is used for device
Input/Output(I/O), data is transferred one character
at a time. This type of access is called raw device
access.
• When a block special file is used for device
Input/Output(I/O), data is transferred in large fixed-
size blocks. This type of access is called block device
access.
For terminal devices, it’s one character at a time. For disk
devices though, raw access means reading or writing in
whole chunks of data – blocks, which are native to your
disk.
• In long-format output of ls -l, character special files
are marked by the “c” symbol.
• In long-format output of ls -l, block special files are
marked by the “b” symbol.
Pipes
UNIX allows you to link commands together using a pipe.
The pipe acts as a temporary file which only exists to hold
data from one command until it is read by another. A Unix
pipe provides a one-way flow of data: the output or result
of the first command sequence is used as the input to the
second command sequence. To make a pipe, put a
vertical bar (|) on the command line between two
commands. For example: who | wc -l. In long-format output
of ls -l, named pipes are marked by the “p” symbol.
Sockets
A Unix socket (or Inter-process communication socket) is
a special file which allows for advanced inter-process
communication. A Unix Socket is used in a client-server
application framework. In essence, it is a stream of data,
very similar to network stream (and network sockets), but
all the transactions are local to the filesystem. In long-
format output of ls -l, Unix sockets are marked by “s”
symbol.
Symbolic Link
A symbolic link is used for referencing some other file of the
file system. It is also known as a soft link. It contains a
text form of the path to the file it references. To an end
user, a symbolic link appears to have its own name, but when
you try reading or writing data to this file, the operations
are instead redirected to the file it points to. If we delete
the soft link itself, the data file is still there. If we
delete the source file or move it to a different location, the
symbolic link will no longer function properly. In long-format
output of ls -l, symbolic links are marked by the “l” symbol
(that's a lowercase L).
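The type symbols described above appear as the first character of each line of ls -l output; the listing below is only a made-up example:
$ ls -l
-rw-r--r--  1 kt users 1200 May 10 11:02 report.txt           (ordinary file: -)
drwxr-xr-x  2 kt users 4096 May 10 11:05 projects             (directory: d)
lrwxrwxrwx  1 kt users   10 May 10 11:07 latest -> report.txt (symbolic link: l)
$ ls -l /dev/sda /dev/tty
brw-rw----  1 root disk 8, 0 May 10 08:00 /dev/sda            (block special file: b)
crw-rw-rw-  1 root tty  5, 0 May 10 08:00 /dev/tty            (character special file: c)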
File Naming Conventions:
Here are the file naming conventions in Unix:

1. Case Sensitivity
• File names in Unix are case-sensitive.
o Example: File.txt, file.txt, and FILE.TXT are all
different files.
2. No Extension Requirement
• Unix does not require file extensions, though they
are often used for convenience.
o Example: A shell script can be named script,
script.sh, or even myscript.

3. Allowed Characters
• Most characters are allowed, except the forward
slash / (used as a directory separator) and the null
character (\0).
o Avoid using special characters like *, ?, |, <, >, &,
etc., as they have special meanings in the shell.

4. Hidden Files
• Files beginning with a dot (.) are hidden files.
o Example: .bashrc, .gitignore

5. Length Limits
• File names can be up to 255 characters long (varies
by file system).
• Full path length is typically limited to 4096
characters.
6. Naming Best Practices
• Use lowercase letters, underscores, or hyphens for
readability.
o Example: my_file.txt or my-file.txt
• Avoid spaces; if necessary, use quotes or escape
characters.
Parent Child Relationship:
In UNIX (and UNIX-like systems), the parent-child
relationship refers to how processes are organized and
created.

How It Works:
1. Process Creation via fork():
o In UNIX, when a process wants to create another
process, it uses the fork() system call.
o The original process is called the parent.
o The newly created process is the child.
2. Child Process:
o The child is a copy of the parent at the time of
creation (same code, data, and open files).
o It gets a unique process ID (PID).
o It may be programmed to replace itself with a
different executable using exec().
3. Process IDs:
o Each process has:
▪ PID – its unique identifier.
▪ PPID – parent process ID.
4. Tracking:
o The shell (like Bash) acts as a parent to the
commands you run.
o Use ps -ef or pstree to see relationships

Key Points:

Concept             Description
fork()              Creates a new child process
exec()              Replaces the current process image with a new one
getpid()            Returns the current process ID
getppid()           Returns the parent process ID
wait() / waitpid()  Parent waits for a child to finish


Zombie and Orphan Processes:
• Zombie: Child ends but parent hasn’t called wait() →
entry lingers in the process table.
• Orphan: Parent exits before the child → child is re-
parented to init (PID 1).
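A simple way to observe the parent-child relationship from the shell (the PIDs shown are made up):
$ echo $$                        # PID of the current shell
2501
$ sleep 300 &                    # start a child process of this shell
[1] 2547
$ ps -o pid,ppid,comm -p 2547
  PID  PPID COMMAND
 2547  2501 sleep
The child's PPID (2501) is the PID of the shell that created it with fork().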

Home Variable
In UNIX and Linux systems, the HOME environment variable
represents the path to the current user's home directory.

Purpose of HOME:
• Stores the default directory where a user is taken after
logging in.
• It is used by many applications and scripts as the
default location for personal files and configuration
settings.
• Commands like cd without arguments use $HOME.
Common Use Cases:
Command             Description
cd                  Takes you to the home directory (same as cd $HOME)
cd ~                Shortcut to go to the home directory
echo $HOME          Displays the home directory path
cp file.txt $HOME/  Copies file.txt to your home directory

Inode Number
What is an inode?
An inode is a file data structure that stores information
about any Linux file except its name and data.
What are inodes used for?
Data is stored on your disk in the form of fixed-size blocks. If
you save a file that exceeds a standard block, your computer
will find the next available segment on which to store the
rest of your file. Over time, that can get super confusing.
That’s where inodes come in. While they don’t contain any of
the file’s actual data, they store the file’s metadata, including
all the storage blocks on which the file’s data can be found.
Information contained in an inode:
• File size
• Device on which the file is stored
• User and group IDs associated with the file
• Permissions needed to access the file
• Creation, read, and write timestamps
• Location of the data (though not the filepath)
Inodes are also independent of filenames. That means you
can rename or move a file, or give it additional names with
hard links, and it will still point to the same inode as the
original.
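For instance, inode numbers can be inspected with ls -i (the numbers below are illustrative):
$ ls -i notes.txt
1056791 notes.txt
$ ln notes.txt alias.txt          # hard link: a second name for the same inode
$ ls -i alias.txt notes.txt
1056791 alias.txt  1056791 notes.txt
$ mv notes.txt renamed.txt        # renaming keeps the same inode
$ ls -i renamed.txt
1056791 renamed.txt
Most systems also provide a stat command that prints the full inode metadata for a file.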
Absolute and Relative Path
In Unix and Linux systems, paths define the location of files
and directories within the filesystem. They are crucial for
navigating and managing files efficiently. Paths can be
classified into two types: absolute paths and relative
paths. Let us have a better look in these paths.
What is an Absolute Path?
An absolute path is a full path that specifies the location of
a file or directory from the root directory (‘/’). It provides a
complete address that points directly to a file or directory,
regardless of the current working directory. This path type
always begins with the root directory, followed by
subdirectories, and ends with the desired file or directory
name.
Characteristics of Absolute Paths:
• Starts with a slash (/).
• Specifies a file location from the root directory.
• Does not depend on the current directory.
For Example:
If you want to access a file named ‘abc.sql’ located in the
directory ‘/home/kt’, you would use the following command:
$cat abc.sql
This command will work only if the file “abc.sql” exists in
your current directory. However, if this file is not present in
your working directory and is present somewhere else say in
/home/kt , then this command will work only if you will use it
like shown below:
cat /home/kt/abc.sql
In the above example, if the first character of a pathname is
‘/’, the file’s location must be determined with respect to
root. When you have more than one / in a pathname, for
each such /, you have to descend one level in the file system
like in the above ‘kt’ is one level below home, and thus two
levels below root.
What is a Relative Path?
A relative path specifies the location of a file or directory in
relation to the current working directory (often abbreviated
as pwd). It does not start with a slash (‘/’), and it utilizes
navigational shortcuts to refer to the file or directory.
Characteristics of Relative Paths:
• Does not begin with a slash (‘/’).
• Dependent on the current directory.
• Utilizes shortcuts like ‘.’ (current directory) and ‘..’
(parent directory) to navigate the filesystem.

Using . and .. in Relative Path-names


UNIX offers a shortcut in relative pathnames that uses
either the current or the parent directory as the reference and
specifies the path relative to it. A relative pathname uses
one of these cryptic symbols:
• . (Dot): Represents the current directory.
• .. (Double Dots): Represents the parent directory.
Now, what this actually means is that if we are currently in
directory ‘/home/kt/abc’ and now you can use ‘..’ as an
argument to ‘cd’ to move to the parent directory /home/kt as
:
$pwd
/home/kt/abc
$cd .. ***moves one level up***
$pwd
/home/kt
Note: When ‘..’ is combined with ‘/’, each additional ‘..’
moves you one more level up:
$pwd
/home/kt/abc
$cd ../.. ***moves two levels up***
$pwd
/home
Example of Absolute and Relative Path
Suppose you are currently located in ‘/home/kt’ and you want
to change your directory to ‘/home/kt/abc’. Let’s see both the
absolute and relative path concepts to do this:
1. Changing directory with relative path concept:
$pwd
/home/kt
$cd abc
$pwd
/home/kt/abc
2. Changing directory with absolute path concept:
$pwd
/home/kt
$cd /home/kt/abc
$pwd
/home/kt/abc

Significance of dot (.) and dotdot (..)
In computing, particularly in Unix-like operating systems, a
dot (.) refers to the current directory and dotdot (..) refers to
the parent directory. These entries are automatically
created in every directory except the root directory, and they
are used to navigate the file system. For example, if your
directory is /Users/Bob, then . refers to /Users/Bob and ..
refers to /Users.
In programming, these special directories are often not
included in directory listings when they are not needed for
the specific task at hand (for example, Qt provides the
QDir::NoDotAndDotDot filter for this purpose).
The use of dot and dotdot is universal in programming and
computers in general.
In file systems like FAT and NTFS, dot and dotdot entries are
created for every non-root directory. Although the exFAT
specification does not provide for them, Microsoft's
implementation of exFAT still returns these entries when
listing non-root directories.
In summary, dot and dotdot are special directory entries
used for navigation and are present in most file systems and
directory structures.

Displaying pathname of the current directory (pwd)
The ‘pwd’ command stands for “print working directory.” It
prints the path of the working directory, starting from the
root. pwd is available both as a shell built-in (pwd) and as an
actual binary (/bin/pwd). $PWD is an environment variable that
stores the path of the current directory. This command has two
flags.
Syntax of `pwd` command in Linux
The basic syntax of the ‘pwd’ command is
pwd [OPTIONS]
This command takes no arguments, but it accepts two flags for
specific behavior.
Flags For Specific behavior in `pwd` command in Linux.
• The “-L” flag prints the logical (symbolic) path, keeping
symbolic links in the path instead of resolving them.
• The default behavior of the shell built-in “pwd” is
equivalent to using “pwd -L”.
• The “-P” flag resolves symbolic links and displays the
actual (physical) path of the directory.
• The default behavior of the binary “/bin/pwd” is the
same as using “pwd -P”
pwd -L: Prints the symbolic path.
pwd -P: Prints the actual path.
How to Display the Current Working Directory in Linux
1. Displaying the Current Working Directory Using Built-in
pwd (pwd):
To print the current working directory, simply enter:
$ pwd
/home/shital/logs
The output will be the absolute path of your current location
in the file system. In this example the directory
/home/shital/logs/ is a symbolic link to the target directory
/var/logs/, so the built-in pwd (equivalent to pwd -L) reports
the symbolic path.
2. Displaying the Current Working Directory Using Binary
pwd (/bin/pwd):
$ /bin/pwd
/var/logs
The default behavior of the built-in pwd is the same as pwd -L:
use “pwd -L” to obtain the symbolic path of a directory reached
through a symbolic link. The default behavior of /bin/pwd is
the same as pwd -P: use “pwd -P” to display the actual path,
ignoring symbolic links.
3. The $PWD Environment variable.
The $PWD environment variable is a dynamic variable that
stores the path of the current working directory. It holds the
same value as ‘pwd -L’ – representing the symbolic path.

echo $PWD
Executing this command prints the symbolic path stored in
the $PWD environment variable.

Changing the current directory(cd)


cd is the abbreviation of “change directory” (chdir). It is a
shell built-in command that changes the current working
directory of a shell instance. The CWD (Current Working
Directory) is the path of a directory inside the file system
where the shell is currently working. The current working
directory is essential for resolving relative paths. cd is a
generic command found in the command interpreter of most
operating systems.
Description of the Command:
Changes the current working directory of the shell.
Syntax:
cd [directory]
Using the Command:
1. Change to a specific directory:
$ cd /home/kt/abc
2. Change to the home directory (cd with no argument, or cd ~):
$ cd
3. Move to the parent directory:
$ cd ..
4. Return to the previous working directory:
$ cd -
Because cd must change the working directory of the shell
itself, it is implemented as an internal (built-in) command
and does not start a new process. After changing directories,
pwd can be used to confirm the new location.
Make directory(mkdir)
To create a directory in Linux or Unix, you can use
the mkdir command followed by the name of the directory
you want to create. For example, to create a new directory
called newdir, you would run the following command:
mkdir newdir
You can also create multiple directories at once by
specifying their names as arguments separated by spaces.
For instance:
mkdir dir1 dir2 dir3
To create a directory and its parent directories if they do not
exist, use the -p option:
mkdir -p
Music/{Jazz/Blues,Folk,Disco,Rock/{Gothic,Punk,Progressiv
e},Classical/Baroque/Early}
To set specific permissions for the new directory, use the -
m option followed by the desired permissions. For example,
to create a directory with 700 permissions, which means
only the user who created the directory can access it, you
would run:
mkdir -m 700 newdir
When the -m option is not used, the newly created
directories usually have either 775 or 755 permissions,
depending on the umask value.
To display a message for each created directory, use the -
v option:
mkdir -v newdir

Remove directories(rmdir)
To remove directories in Linux, you can use
the rmdir and rm commands. The rmdir command removes empty
directories only. If you need to remove a non-empty directory,
use the rm command with the -r option, which stands for
recursive deletion.
The rmdir command syntax is simple and does not require
additional options for empty directories. However, if you
want to also remove a directory's parent directories when they
become empty, you can use the -p option.
For the rm command, the -r option removes directories and
their contents recursively. The -f option forces the removal
without prompting for confirmation, and the -i option prompts
for confirmation before each removal.
In Windows, the rmdir or rd command is used to remove
directories. To remove a directory tree, including
subdirectories and files, you can use the /s option.
The /q option specifies quiet mode, which does not prompt
for confirmation when deleting a directory tree.
It's important to be cautious when using these commands,
as they permanently remove directories and files without
moving them to a trash or recycle bin.
• rmdir: Removes empty directories only.
• rm: Removes files and directories, including non-empty
directories with the -r option.
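A few representative commands (the directory names are arbitrary):
rmdir olddir          # works only if olddir is empty
rmdir -p a/b/c        # removes c, then b, then a, provided each becomes empty
rm -r project/        # removes project and everything inside it
rm -rf project/       # the same, but without any confirmation prompts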

Listing contents of directory(ls)


The ls command in Linux is used to list files and directories.
By default, when used without any options or arguments, it
lists the names of all files in the current working directory.
To list the contents of a specific directory, you can pass the
directory path as an argument to the ls command. For
example, to list the contents of the /etc directory, you would
type ls /etc.
The -l option lists files in a long format, providing detailed
information such as file type, permissions, owner, group,
size, and modification date.
The -a option lists all files, including hidden files (those
starting with a dot).
Using -al or -la lists all files in long format.
To sort the output by different criteria such as extension,
size, or time, you can use the --sort option or its short
forms. For example, ls -lt sorts by modification time, newest
first; add -r (ls -ltr) to reverse the order.
To list only directories, you can use ls -d */.
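A few illustrative invocations combining these options (output not shown):
ls /etc               # names only
ls -l /etc            # long format
ls -a                 # include hidden files such as .bashrc
ls -la                # hidden files in long format
ls -lt                # newest files first
ls -ltr               # oldest files first
ls -d */              # directories in the current directory only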

Very brief idea about important file systems of UNIX
1. /bin – Essential User Binaries
• Basic Linux commands available to all users.
• Needed for booting and single-user mode.
• Examples: ls, cp, mv, rm.

2. /usr/bin – Non-Essential User Binaries


• Applications and utilities for general use.
• Not required for system boot.
• Examples: python, gcc, vim.

3. /sbin – System Binaries


• System administration tools for root.
• Used for booting, repairing, or configuring the system.
• Examples: fsck, ifconfig, reboot.

4. /usr/sbin – Non-Essential System Binaries


• Additional system tools and daemons for
administration.
• Typically used by superuser.
• Examples: sshd, apache2, useradd.

5. /etc – Configuration Files


• System-wide config files and startup scripts.
• No binaries stored here.
• Examples: /etc/passwd, /etc/hosts, /etc/fstab.

6. /dev – Device Files


• Represents hardware and virtual devices as files.
• Examples: /dev/sda (hard disk), /dev/null.

7. /lib – Shared Libraries


• Essential libraries for binaries in /bin and /sbin.
• Examples: libc.so, kernel modules.

8. /usr/lib – Application Libraries


• Libraries used by programs in /usr/bin and /usr/sbin.
• Not needed for boot.
9. /usr/include – Header Files
• C/C++ programming header files.
• Used for compiling software.
• Examples: stdio.h, stdlib.h.

10. /usr/share/man – Manual Pages


• Documentation files (man pages).
• Accessed via man command.
• Organized into sections (man1, man5, etc).

11. /tmp – Temporary Files


• Temporary storage for programs.
• Often cleared on reboot.
• Writable by all users.

12. /var – Variable Data


• Stores log files, mail, print spool, temp data.
• Examples: /var/log, /var/mail.

13. /home – User Home Directories


• Personal directories for each user.
• Stores user files, configs, downloads.
• Example: /home/alice, /home/bob.

MODULE-3 ORDINARY FILE HANDLING
Displaying and creating files (cat)

In UNIX, the cat (short for concatenate) command is used
to display, create, and combine files. Here's how it works
for each of those tasks:

1. Displaying a File
To display the contents of a file in the terminal:
cat filename.txt

This shows the content of filename.txt on the screen.

2. Creating a File
You can use cat to create a new file:
cat > newfile.txt
Then type the content you want to include. Press CTRL + D
to save and exit.
Example:
cat > hello.txt
This is a new file.
CTRL + D

3. Appending to a File
To append new content to an existing file:
cat >> existingfile.txt
Type your new content, and press CTRL + D when done.

4. Concatenating Multiple Files


To combine several files into one:
cat file1.txt file2.txt > combined.txt
This merges file1.txt and file2.txt into combined.txt.

Tips
• Use cat -n filename.txt to display line numbers.
• Use cat with more or less for long files: cat
filename.txt | less.

Copying a file (cp)


In UNIX, the cp command is used to copy files and
directories. Here's how to use it:

Basic Syntax
cp [options] source destination

Examples
1. Copy a File
cp file1.txt file2.txt
This copies file1.txt to a new file named file2.txt. If file2.txt
already exists, it will be overwritten without warning
(unless options are used).

2. Copy a File to a Directory


cp file.txt /path/to/directory/
This copies file.txt into the specified directory.

3. Copy a Directory (Recursively)


cp -r dir1/ dir2/
This copies the entire contents of dir1 into dir2. If dir2
doesn’t exist, it will be created.

4. Prompt Before Overwriting


cp -i file1.txt file2.txt
The -i (interactive) option prompts you before overwriting
file2.txt.

5. Copy with Verbose Output


cp -v file1.txt file2.txt
The -v option shows what is being copied.

Common Options

Option Description

-i Prompt before overwrite

-r Copy directories recursively

-v Verbose (show files being copied)

-u Copy only if source is newer


Deleting a file(rm)
In UNIX, the rm command is used to delete (remove) files
and directories.

Basic Syntax
rm [options] filename

Examples
1. Delete a File
rm file.txt
This removes file.txt permanently — no recycle bin!

2. Delete Multiple Files


rm file1.txt file2.txt file3.txt
Deletes all the listed files.

3. Prompt Before Deleting


rm -i file.txt
Asks for confirmation before deleting.

4. Force Delete Without Prompting


rm -f file.txt
Deletes the file without confirmation, even if it’s write-
protected.

5. Delete a Directory and Its Contents


rm -r myfolder/
Deletes the directory myfolder and everything inside it
recursively.

6. Force Delete a Directory


rm -rf myfolder/
Deletes a directory and its contents forcefully and
recursively — use with caution!

Be Careful!
• rm does not move files to trash — once deleted,
recovery is difficult.
• Always double-check file names, especially with -f or
-r.
Renaming / moving a file(mv)
In UNIX, the mv command is used to rename or move files
and directories.

Basic Syntax
mv [options] source destination

Examples
1. Rename a File
mv oldname.txt newname.txt
This renames oldname.txt to newname.txt.

2. Move a File to a Directory


mv file.txt /path/to/directory/
Moves file.txt into the specified directory.

3. Rename or Move a Directory


mv olddir newdir
Renames the directory olddir to newdir.

4. Prompt Before Overwriting


mv -i file1.txt file2.txt
Prompts before overwriting file2.txt if it exists.

5. Verbose Output
mv -v file1.txt file2.txt
Shows what’s being moved or renamed.

Common Options
Option Description

-i Prompt before overwriting

-f Force move, no prompt

-v Verbose (show what's happening)

Notes:
• If the destination is a file that already exists, mv will
overwrite it without warning, unless you use -i.
• mv can both move and rename in one step.

Paging output (more)


In UNIX, the more command is used to page through long
text output one screen at a time, making it easier to read
large files or long command results.

Basic Syntax
more filename
This displays the contents of filename page-by-page.

Examples
1. Read a Long File
more longfile.txt
Shows the file one screen at a time. Press:
• Space → Next page
• Enter → Next line
• q → Quit
• /pattern → Search forward for a pattern

2. Use with Other Commands


cat bigfile.txt | more
OR
ls -l /etc | more
Useful when output is too long to fit on one screen.

Navigation Keys in more

Key Action

Space Next screen

Enter Next line

b Back one screen (if supported)

/text Search forward for “text”

q Quit

Comparison Tip
• more is older and simpler.
• less is more powerful (can scroll up/down freely).
Example using less:
less filename

Printing a file(lp)
In UNIX, the lp command is used to send files to a printer
— it's part of the CUPS (Common UNIX Printing System)
used in many modern UNIX and Linux systems.
Basic Syntax
lp [options] filename

Examples
1. Print a File
lp document.txt
This sends document.txt to the default printer.

2. Print to a Specific Printer


lp -d printer_name document.txt
Replaces printer_name with the actual name of the printer
you want to use.

3. Set Number of Copies


lp -n 3 document.txt
Prints 3 copies of document.txt.

4. Print with a Custom Job Name


lp -t "My Job" document.txt
Sets the print job name to "My Job".
5. Print a Directory Listing
ls -l | lp
Prints the result of the ls -l command.

Common lp Options

Option Description

-d Specify printer name

-n Number of copies

-t Job title

-o Set options (e.g., media size)

-h Connect to a remote print server

Check Printer Queue


lpq
Shows the current print jobs.

Cancel a Print Job


cancel job_id
You can find job_id using lpq.
Knowing file type(file)
In UNIX, the file command is used to determine the type
of a file — whether it's a text file, binary, image, script,
directory, etc.

Basic Syntax
file filename

Examples
1. Check Type of a File
file notes.txt
Output might be:
notes.txt: ASCII text

2. Check Type of Multiple Files


file *.txt
This checks all .txt files in the directory.

3. Example Output for Various Files


file script.sh
# Output: script.sh: Bourne-Again shell script, ASCII text
executable
file image.jpg
# Output: image.jpg: JPEG image data
file data.bin
# Output: data.bin: data

Why Use file?


• It doesn’t rely on file extensions — it examines the file
content.
• Helpful when dealing with unknown files or
downloads.

Line
In UNIX file handling, a "line" refers to a sequence of
characters ending with a newline character (\n), and it is a
fundamental unit when reading, writing, or processing
files. Here's how lines are handled in various file
operations:

Line-Based File Handling in UNIX


1. Reading a File Line by Line
In Shell Scripts:
while read line; do
echo "Line: $line"
done < file.txt
• Reads each line of file.txt.
• $line holds the current line content.

2. Counting Lines in a File


wc -l file.txt
• Outputs the number of lines in file.txt.

3. Viewing Specific Lines


Show line 10:
sed -n '10p' file.txt
Show line 5 using awk:
awk 'NR==5' file.txt
Show first N lines:
head -n 5 file.txt
Show last N lines:
tail -n 5 file.txt

4. Writing Lines to a File


Using echo:
echo "This is a new line" >> file.txt
• Appends a line to the file.

Using a heredoc (multiple lines):


cat << EOF >> file.txt
Line 1
Line 2
EOF

5. Removing Empty Lines


grep -v '^$' file.txt
Or use:
sed '/^$/d' file.txt

6. Finding Lines Matching a Pattern


grep "pattern" file.txt
Summary of Key Commands for Line-Based File
Handling:
Task Command Example

Read line by line while read line; do ... done < file

Count lines wc -l file.txt

Show specific line sed -n '5p' file.txt

Append line to file echo "text" >> file.txt

Filter by pattern grep "pattern" file.txt

Word and character counting(wc)


In UNIX, the wc (word count) command is used to count
lines, words, characters, and bytes in a file or input.

Basic Syntax
wc [options] filename

Common Examples
1. Count Lines, Words, and Characters
wc filename.txt
Example output:
10 25 120 filename.txt
This means:
• 10 → Lines
• 25 → Words
• 120 → Characters (or bytes)
• filename.txt → File name

2. Count Only Lines


wc -l filename.txt
3. Count Only Words
wc -w filename.txt
4. Count Only Characters
wc -m filename.txt
5. Count Bytes (instead of characters)
wc -c filename.txt
(Useful for binary files where byte count ≠ character count)

6. Count from Command Output


echo "Hello world" | wc -w
Output:
2

Summary of wc Options

Option Meaning

-l Count lines

-w Count words

-m Count characters

-c Count bytes


Comparing files (cmp)


In UNIX, the cmp command is used to compare two files
byte by byte. It's a simple but powerful tool to check if two
files are identical or to find where they differ.

Basic Syntax
cmp [options] file1 file2

Common Examples
1. Compare Two Files
cmp file1.txt file2.txt
• If files are identical, there is no output.
• If files differ, cmp shows the byte and line number
of the first difference.
Example Output:
file1.txt file2.txt differ: byte 12, line 3

2. Silent Mode (Just Return Status)


cmp -s file1.txt file2.txt
• No output at all.
• Return status:
o 0: files are the same
o 1: files differ
o 2: an error occurred
This is useful in shell scripts.

3. Compare Binary Files


cmp image1.jpg image2.jpg
Works for binary files too — helpful when checking for
changes or corruption.
Exit Codes

Exit Code Meaning

0 Files are identical

1 Files differ

2 Error occurred

Related Commands

Command Purpose

diff Line-by-line differences (text)

cmp Byte-by-byte comparison

comm Compare sorted files line-by-line

Finding common lines between two files (comm)
In UNIX, the comm command is used to find common
and differing lines between two sorted files. It
compares them line by line and outputs:
1. Lines only in the first file
2. Lines only in the second file
3. Lines common to both files
To find common lines between two files using the comm
command in UNIX, follow these steps:

Step-by-Step: Using comm to Find Common Lines

Step 1: Sort Both Files


The comm command requires sorted files.
sort file1.txt -o file1.txt
sort file2.txt -o file2.txt
This sorts the files in-place.

Step 2: Use comm with the -12 Option


comm -12 file1.txt file2.txt
This displays only the lines common to both files.

Example
file1.txt
apple
banana
cherry
file2.txt
banana
cherry
date
Run:
sort file1.txt -o file1.txt
sort file2.txt -o file2.txt
comm -12 file1.txt file2.txt

Output:
banana
cherry

Summary

Command Description

comm file1.txt file2.txt Compare two sorted files

comm -12 file1.txt file2.txt Show common lines only

comm -23 file1.txt file2.txt Lines only in file1

comm -13 file1.txt file2.txt Lines only in file2

Displaying file differences (diff)


In UNIX, the diff command is used to compare the
contents of two files line by line and display their
differences. It’s especially useful for text files such as
code, config files, or documents.

Basic Syntax
diff [options] file1 file2

Example Files
file1.txt:
apple
banana
cherry
file2.txt:
apple
banana
date
Run:
diff file1.txt file2.txt

Output:
3c3
< cherry
---
> date

Explanation:
• 3c3 → line 3 changed in both files
• < cherry → line in file1.txt
• > date → line in file2.txt

Common diff Options

Option                    Description
-y                        Side-by-side comparison
-c                        Context format (adds surrounding lines)
-u                        Unified format (compact, preferred for code diffs)
--suppress-common-lines   Hide lines that are the same when using -y

Side-by-Side Comparison
diff -y file1.txt file2.txt
Output:
apple apple
banana banana
cherry | date

Unified Format (Common for Patches)


diff -u file1.txt file2.txt
Output:
--- file1.txt
+++ file2.txt
@@ -1,3 +1,3 @@
apple
banana
-cherry
+date

Summary

Task Command

Basic comparison diff file1 file2

Unified diff (e.g., for Git) diff -u file1 file2

Side-by-side view diff -y file1 file2


Creating archive file(tar)
In UNIX, the tar command is used to create, view, and
extract archive files — usually with a .tar extension. It's
commonly used for backups, packaging, and
transferring files.

Basic Syntax
tar [options] archive_name.tar files/directories

Creating an Archive
tar -cvf archive.tar file1.txt file2.txt folder/
Breakdown:
• -c → Create a new archive
• -v → Verbose (list files being archived)
• -f → Use the specified archive file name
• archive.tar → Output archive file
• file1.txt, folder/ → Files/folders to include

Common Tar Operations


Task Command Example

Create archive tar -cvf files.tar file1 file2 dir/

Extract archive tar -xvf files.tar

List contents tar -tvf files.tar

Add file to archive (append) tar -rvf files.tar newfile.txt

Compress with gzip tar -czvf files.tar.gz folder/

Extract .tar.gz file tar -xzvf files.tar.gz

Examples

Create an archive of a folder:


tar -cvf backup.tar myfolder/

Create and compress with gzip:


tar -czvf backup.tar.gz myfolder/
Extract an archive:
tar -xvf backup.tar
Extract a .tar.gz file:
tar -xzvf backup.tar.gz

Tip: Tar Without Compression vs With Compression


Format     Description                       Command Example
.tar       Archive only                      tar -cvf file.tar dir/
.tar.gz    Compressed with gzip              tar -czvf file.tar.gz dir/
.tar.bz2   Compressed with bzip2 (smaller)   tar -cjvf file.tar.bz2 dir/

Compress file(gzip)
In UNIX, the gzip command is used to compress files
using the GNU zip algorithm, reducing file size and saving
disk space. It creates files with a .gz extension.

Basic Syntax
gzip [options] filename

Examples

Compress a Single File


gzip myfile.txt
• This replaces myfile.txt with a compressed file:
myfile.txt.gz
Decompress a .gz File
gunzip myfile.txt.gz
• This restores the original myfile.txt

Keep Original File While Compressing


gzip -c myfile.txt > myfile.txt.gz
• -c outputs the compressed data to stdout.
• Original myfile.txt is preserved.

Compress Multiple Files


gzip file1.txt file2.txt
• Creates file1.txt.gz and file2.txt.gz

Compress a Tar Archive (Common Use Case)


tar -czvf archive.tar.gz folder/
• -z uses gzip for compression

View Contents of a .gz File Without Extracting


zcat myfile.txt.gz # Outputs file contents
zmore myfile.txt.gz # Scroll through contents
zgrep "text" myfile.txt.gz # Search inside compressed file

Summary of gzip Options

Option Description

-d Decompress (gzip -d = gunzip)

-k Keep original file (gzip -k)

-c Write to stdout (used with >)

-v Verbose — show compression details

Uncompress file(gunzip)
In UNIX, the gunzip command is used to uncompress
files that were compressed using the gzip command.
These files typically have a .gz extension.

Basic Syntax
gunzip [options] filename.gz

Example: Uncompress a File


gunzip myfile.txt.gz
This restores the original file:
• Removes myfile.txt.gz
• Creates myfile.txt

If You Want to Keep the .gz File:


Use gzip -d with -c to keep the compressed file:
gzip -dc myfile.txt.gz > myfile.txt
• -d: decompress
• -c: write output to stdout

Uncompress Multiple Files


gunzip file1.gz file2.gz

Extracts both and removes the original .gz files.

View Compressed File Without Uncompressing

Command Description

zcat Display contents of .gz file

zmore Scroll through a .gz file

zgrep Search within a .gz file


Summary

Command Action

gunzip file.gz Uncompress and remove .gz

gzip -d file.gz Same as gunzip

gzip -dc file.gz > file Uncompress but keep .gz file

Archive file (zip)


In UNIX, the zip command is used to archive and
compress files into a .zip file format — which is widely
used and compatible with many systems, including
Windows.

Basic Syntax
zip [options] archive_name.zip files_or_directories

Examples

Create a ZIP Archive


zip myarchive.zip file1.txt file2.txt
• Creates myarchive.zip containing file1.txt and
file2.txt.
Create a ZIP Archive of a Folder
zip -r myfolder.zip myfolder/
• -r means recursive — includes all files and
subfolders in myfolder/.

Add a File to an Existing ZIP Archive


zip myarchive.zip newfile.txt

Compress Without Paths (Store only file names)


cd myfolder/
zip ../myarchive.zip *
• Archives contents without storing folder paths.

Viewing ZIP File Contents


unzip -l myarchive.zip
• Lists the files inside the ZIP archive.

Extracting a ZIP Archive


unzip myarchive.zip
Summary of Useful Options

Option Description

-r Include directories recursively

-q Quiet mode (no output)

-v Verbose (detailed info about compression)

-e Create encrypted (password-protected) ZIP

Example: Password-Protected ZIP


zip -e secret.zip confidential.txt
• Prompts you to enter a password.

Extract compress file(unzip)


In UNIX, the unzip command is used to extract files from a
.zip archive.

Basic Syntax
unzip archive.zip
• Extracts all files from archive.zip into the current
directory.
Common Examples

Extract ZIP Archive


unzip myarchive.zip
• Extracts contents into the current folder.

Extract to a Specific Directory


unzip myarchive.zip -d /path/to/destination/
• Extracts files into the specified directory.

List Contents Without Extracting


unzip -l myarchive.zip
• Shows the files inside the archive.

Overwrite Without Prompting


unzip -o myarchive.zip
• Automatically overwrites existing files.

Extract Specific Files


unzip myarchive.zip file1.txt file2.txt
• Extracts only file1.txt and file2.txt from the archive.
Summary of Useful Options

Option Description

-l List contents of the zip file

-d Specify extraction directory

-o Overwrite files without prompting

-q Quiet mode (minimal output)

Brief Idea About the Effect of cp, rm and mv Commands on Directories

1. cp (Copy) and Directories


• By default, cp does not copy directories unless you
use the recursive option.
• To copy a directory and its contents, use:
cp -r source_directory target_directory
• The -r (or -R) option copies the directory and all
files/subdirectories inside it recursively.
• Without -r, attempting to copy a directory will give an
error like:
cp: -r not specified; omitting directory 'dir'
2. rm (Remove) and Directories
• By default, rm does NOT remove directories.
• To delete a directory, use the recursive option:
rm -r directory_name
• The -r option removes the directory and all its
contents (files and subdirectories) recursively.
• To force removal without prompts, add -f:
rm -rf directory_name
• Be very careful with rm -rf — it deletes everything
without confirmation.

3. mv (Move/Rename) and Directories


• mv can move or rename directories just like files.
• Moving a directory to another location:
mv old_directory_path new_directory_path
• This moves the entire directory and its contents.
• If the target is an existing directory, the source
directory is moved inside it.
Summary Table

Command   Directory Action                                    Notes
cp        Requires -r to copy a directory recursively         Without -r, it errors out
rm        Requires -r to remove a directory recursively       -rf forces recursive deletion
mv        Moves or renames directories without extra flags    Moves the entire directory and its contents

MODULE-4 FILE ATTRIBUTES


File and directory attributes listing and
very brief idea about the attributes
Each file has characteristics like file name, file type, date
(on which file was created), etc. These characteristics are
referred to as 'File Attributes'. The operating system
associates these attributes with files. In different
operating systems files may have different attributes.
Some people call attributes metadata also.
Following are some common file attributes:
1. Name: File name is the name given to the file. A name
is usually a string of characters.
2. Identifier: Identifier is a unique number for a file. It
identifies files within the file system. It is not readable
to us, unlike file names.
3. Type: Type is another attribute of a file which
specifies the type of file such as archive file (.zip),
source code file (.c, .java), .docx file, .txt file, etc.
4. Location: Specifies the location of the file on the
device (The directory path). This attribute is a pointer
to a device.
5. Size: Specifies the current size of the file (in Kb, Mb,
Gb, etc.) and possibly the maximum allowed size of
the file.
6. Protection: Specifies information about Access
control (Permissions about Who can read, edit, write,
and execute the file.) It provides security to sensitive
and private information.
7. Time, date, and user identification: This information
tells us about the date and time on which the file was
created, last modified, created and modified by
which user, etc.
Some Other Attributes May Include:
Attributes related to flags. These Flags control or
enable some specific property:
1. Read-only flag: 0 for read/write; 1 for read-only.
2. Hidden flag: 0 for normal; 1 for do not display in
listings of all files.
3. System flag: 0 for normal files; 1 for system files.
4. Archive flag: 0 for has been backed up; 1 for needs to
be backed up.
5. ASCII/binary flag: 0 for ASCII file; 1 for binary file.
6. Random access flag: 0 for sequential access only; 1
for random access.
7. Temporary flag: 0 for normal; 1 for deleted file on
process exit.
8. Lock flags: 0 for unlocked; nonzero for locked.
Attribute related to keys. These are present in files
which can be accessed using key:
1. Record length: Number of bytes in a record.
2. Key position: Offset of the key within each record.
3. Key length: Number of bytes in the key field.
Some file systems also support extended file attributes,
such as character encoding of the file and security
features such as a file checksum.
Not all of the above attributes are present in every file; files may possess
different attributes as required, and the available attributes also vary from
system to system. Attributes are stored in secondary storage (the file name
and identifier are kept in the directory structure, and the identifier in turn
locates the other attributes). Attributes are important because they provide
extra information about files that the system and its users can make use of.
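Most of these attributes can be inspected from the shell. A brief illustration,
assuming a hypothetical file notes.txt (exact output format varies by system):
ls -l notes.txt
# -rw-r--r-- 1 alice staff 2048 May 15 14:22 notes.txt
# permissions, link count, owner, group, size, modification time, name
stat notes.txt
# shows the inode number, file type, size, permissions, and all timestamps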

File ownership
In UNIX, every file and directory has an owner and a
group. File ownership is a key part of the UNIX
permissions system, which helps control who can read,
write, or execute files.
File Ownership Components
Each file/directory is associated with:
1. User (Owner) — The person who created the file
2. Group — A set of users with shared access
3. Other — Everyone else on the system

View File Ownership


Use the ls -l command:
ls -l file.txt
Example Output:
-rw-r--r-- 1 alice staff 2048 May 15 14:22 file.txt
• alice → Owner
• staff → Group
• Permissions: Owner can read/write, group and others
can read

Changing Ownership

Change File Owner


sudo chown newuser file.txt
• Changes owner of file.txt to newuser
Change Owner and Group
sudo chown newuser:newgroup file.txt
• Changes owner to newuser and group to newgroup

Changing Group Only


sudo chgrp newgroup file.txt
• Changes only the group ownership

Recursive Ownership Change (for Directories)


sudo chown -R user:group myfolder/
• Applies changes to the folder and all files inside it

Why Ownership Matters


• Controls access rights (read/write/execute)
• Allows administrators to delegate permissions
• Supports multi-user system security
Summary Table

Command Purpose

ls -l View file ownership

chown user file Change file owner

chown user:group Change owner and group

chgrp group file Change group ownership

File permissions
Here’s a summary of file ownership and permissions in
UNIX:

File Ownership
Each file in UNIX has:
• Owner: Creator of the file
• Group: Users with shared access
• Others: Everyone else

File Permissions
Permissions define who can do what with a file or
directory:
• Read (r): View contents
• Write (w): Modify or delete
• Execute (x): Run (for files) / Enter (for directories)
Permissions are shown with ls -l:
-rwxr-xr-- 1 owner group ...
• 1st character: Type (- for file, d for directory)
• Next 9 characters: Permissions for owner, group, and
others (in sets of 3)

Changing Permissions – chmod


Symbolic Mode:
• + → Add permission
• - → Remove permission
• = → Set exact permission
Examples:
chmod u+x file # Add execute for owner
chmod g=rx file # Set group to read and execute
chmod o-wx file # Remove write/execute from others
Absolute (Octal) Mode:
Permissions represented numerically:
Permission Value Symbol

None 0 ---

Execute 1 --x

Write 2 -w-

Read 4 r--
Examples:
chmod 755 file # rwx for owner, rx for group and others
chmod 644 file # rw for owner, r for group and others

Changing Ownership
chown – Change file owner:
chown user file
chgrp – Change file group:
chgrp group file
• Only root can change ownership of others' files.
• Users can change group ownership if they belong to
that group.

Directory Permissions
• Read: List files
• Write: Add or delete files
• Execute: Enter the directory (cd) or access its
contents

Changing File Permissions – Relative Permission & Absolute Permission
File Permissions in UNIX – chmod Command
The chmod (change mode) command is used to change
permissions on files and directories.
Only the owner or superuser (root) can change
permissions.

UNIX Permission Basics


Each file/directory has 3 types of users:
• User (u) – Owner
• Group (g) – Assigned group
• Others (o) – Everyone else
And 3 types of permissions:
• Read (r) – View contents
• Write (w) – Modify contents
• Execute (x) – Run file / access directory
Command to view permissions:
ls -l filename
Example output:
-rwxr-xr-- 1 user group 1234 date filename

1. Changing Permissions Using Absolute Mode


Format:
chmod nnn filename
Where nnn are octal values for:
• Owner
• Group
• Others
Octal Permission Table:
Octal Symbol Description

0 --- No permissions

1 --x Execute only

2 -w- Write only

3 -wx Write and execute

4 r-- Read only

5 r-x Read and execute


6 rw- Read and write

7 rwx Read, write, execute

Examples:
chmod 755 file # rwx for user, r-x for group & others
chmod 644 file # rw- for user, r-- for group & others
chmod 700 script # rwx only for owner

2. Setting Special Permissions in Absolute Mode


Special permissions use a four-digit octal value (nnnn),
where the first digit represents:
Octal Special Permission

1 Sticky bit (t)

2 setgid (s)

4 setuid (s)

Examples:
chmod 4555 file # setuid + r-x for user, group, and others
chmod 2551 file # setgid + r-x for user and group, x for others
chmod 1777 directory # sticky bit + full access for all
3. Changing Permissions Using Symbolic Mode
Format:
chmod [who][operator][permissions] filename
Symbols:
Who:
• u: user
• g: group
• o: others
• a: all (user + group + others)
Operators:
• +: add permission
• -: remove permission
• =: set exact permission
Permissions:
• r: read
• w: write
• x: execute
• s: setuid/setgid
• t: sticky bit
Examples:
chmod o-r file # Remove read from others
chmod a+rx file # Add read & execute for all
chmod g=rwx file # Set group to rwx exactly

Notes:
• Use ls -l to verify changes.
• With ACLs (Access Control Lists), changes may affect
ACL masks. Use getfacl filename to verify
permissions if ACLs are used.

Changing File Ownership


In UNIX, every file or directory has:
• An owner (user)
• A group (group of users)
Only the file owner or superuser (root) can change the
ownership.

Why Change Ownership?


• To transfer control of a file to another user.
• To change which group has access to a file.
1. chown – Change File Owner
Syntax:
chown [new_owner] filename
Example:
chown alice report.txt
Changes the owner of report.txt to user alice.

2. chgrp – Change Group Ownership


Syntax:
chgrp [new_group] filename
Example:
chgrp developers report.txt

Changes the group of report.txt to developers.

3. Change Owner and Group Together


Syntax:
chown new_owner:new_group filename
Example:
chown alice:developers report.txt

Sets owner to alice and group to developers.


Important Notes:
• Only the superuser (root) can change file owner.
• Owners can change the group only to one they belong
to.
• Use ls -l to verify changes.

View Ownership:
ls -l filename
Example Output:
-rw-r--r-- 1 alice developers 1234 May 15 10:00 report.txt

Changing Group Ownership


Each file in UNIX has:
• An owner (a specific user)
• A group (a set of users who share access)
Sometimes, you need to change the group associated with
a file or directory—this is done using the chgrp command.

chgrp – Change Group Ownership

Syntax:
chgrp [new_group] filename

Example:
chgrp developers project.doc

This changes the group of project.doc to developers.

Change group for multiple files:


chgrp staff file1 file2 dir1

Change group recursively (for directories and contents):
chgrp -R editors documents/

All files and subdirectories inside documents/ will now belong to group editors.

Check group ownership:


Use ls -l to see group ownership:
ls -l project.doc
Sample Output:
-rw-r--r-- 1 alice developers 2048 May 15 09:30
project.doc
Important Points:
• You must be either:
o The file owner, and a member of the new group.
o Or the superuser (root).
• If you're not part of the target group, you can’t assign
it.


File System and Inodes


What is a File System?
A file system in UNIX is a method used by the operating
system to store, organize, and manage data on a disk. It
defines:
• How files are named, stored, and retrieved.
• How permissions and ownerships are handled.
• The structure (directories, files, links, etc.).
UNIX uses a hierarchical file system, starting from the
root /.
Key Components of UNIX File System:
1. Files – Contain data (text, binary, etc.).
2. Directories – Special files that contain references to
other files/directories.
3. Mount Points – Where additional file systems are
attached to the main tree.
4. Links – Hard or symbolic references to files.

Inodes – The Building Blocks of UNIX File System

What is an Inode?
An inode (index node) is a data structure that stores
metadata about a file (not the file content or name).
Each file and directory has an associated inode.

Inode Contains:
• File type (regular, directory, link)
• Permissions (rwx)
• Owner UID
• Group GID
• Size of the file
• Timestamps (created, modified, accessed)
• Link count (how many names point to this file)
• Pointers to disk blocks (where file data is stored)

View inode information:


ls -i filename
Example:
ls -i myfile.txt
387136 myfile.txt
Here, 387136 is the inode number.

View detailed inode metadata:


stat filename

How UNIX Handles Files:


• File name is stored in the directory entry.
• File data and metadata are stored in the inode.
• This means multiple file names (hard links) can point
to the same inode.

Inode Notes:
• Each file has only one inode, but many filenames
(hard links) can point to it.
• If you delete a filename, the file isn't removed until all
hard links (and open file descriptors) are gone.
• Inodes are finite. If you run out of inodes, you cannot
create new files—even if disk space is available.

Check inode usage on disk:


df -i

Summary

Term Description

File System Organizes files and directories on disk

Inode Data structure with metadata for a file

Link Filename pointing to an inode

ls -i Shows inode number

stat Shows full inode info

Hard Link
What is a Hard Link?
A hard link is a direct reference to the inode of a file.
Multiple hard links can point to the same inode, meaning
they all share the same data and metadata (except the file
name).

Key Characteristics of Hard Links:


• A hard link is indistinguishable from the original file.
• All hard links share the same inode number.
• Changes made to one link are reflected in all others.
• Deleting one link does not delete the data unless all
links are removed.
• Hard links cannot span different file systems.
• Cannot be created for directories (to prevent loops in
the file system).

How to Create a Hard Link:


ln original_file hard_link_name
Example:
ln file1.txt file1_hardlink.txt
This creates a hard link file1_hardlink.txt that points to the
same inode as file1.txt.
Verify with ls -i:
ls -i file1.txt file1_hardlink.txt
Both will show the same inode number, indicating they
point to the same data.

View Link Count:


ls -l file1.txt
The second column shows the link count (number of hard
links to the file's inode).

Summary Table

Feature Hard Link

Links to Inode

Shared inode Yes

Affected by deletion Only when all links are deleted

Cross-file system link No

Works on directories No (except by root in special cases)

Command to create ln source target


Soft link (Symbolic Link)

What is a Soft Link?


A soft link (also called a symbolic link) is a shortcut or
pointer to another file. It contains the pathname to the
original file, not the data itself.

Key Characteristics of Soft Links:


• Soft links have their own inode and metadata.
• They point to the file name (path), not the actual
data.
• If the original file is deleted or moved, the soft link
becomes broken (dangling).
• Can link across different file systems and
partitions.
• Can be created for directories.
• Indicated by an l at the beginning of the ls -l output
and show the path they point to.

How to Create a Soft Link:


ln -s original_file soft_link_name
Example:
ln -s file1.txt file1_symlink.txt
This creates file1_symlink.txt, which points to file1.txt.

Verify with ls -l:


ls -l file1_symlink.txt
Output:
lrwxrwxrwx 1 user group size date file1_symlink.txt ->
file1.txt
Note the l at the beginning and the arrow -> indicating it’s a
soft link.

Summary Table

Feature Soft Link

Links to File name (path)

Shared inode No

Affected by deletion Breaks if original file is deleted

Cross-file system link Yes

Works on directories Yes

Command to create ln -s source target


Comparison: Hard Link vs. Soft Link in UNIX

Feature                       Hard Link                                   Soft Link (Symbolic Link)
Points to                     File's inode (actual data)                  File name (path)
Inode shared                  Yes (same inode as original file)           No (has its own inode)
File system limitation        Must be on the same file system             Can span across file systems
Affects if original deleted   File still accessible via hard link         Link is broken (dangling link)
Can link to directories       Usually not allowed                         Allowed
Command to create             ln original_file hardlink_name              ln -s original_file softlink_name
Storage                       Uses minimal extra space (no duplication)   Stores path as text; very small size
Identification in ls -l       Appears like a regular file                 Starts with l and shows -> pointing to the path

Significance of File Attributes for Directories in UNIX
In UNIX, directories are special files that store information
about other files and directories. The file attributes
(permissions and types) for directories control how users
can interact with the directory and its contents.

Key Attributes and Their Significance for Directories:

Attribute     Meaning for Directory                                     Effect / Permission Behavior
Read (r)      Allows listing the names of files inside the directory   User can view the list of files and subdirectories in it
Write (w)     Allows creating, deleting, or renaming files inside it   User can add or remove files and subdirectories
Execute (x)   Allows access to and traversal through the directory     User can enter (cd) into the directory and access files if permissions allow

Summary:
• Read permission lets you see filenames inside the
directory (e.g., ls command).
• Write permission lets you modify the directory
contents (create/delete files).
• Execute permission lets you access the files or
subdirectories inside the directory and traverse it.

Example:
• If a user has read but not execute permission on a
directory, they can see the file names but cannot
access or open them.
• If a user has execute but not read permission, they
cannot list the files but can access files if they know
the exact filename.
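The behavior described above can be tried out directly. A minimal sketch,
assuming a scratch directory named demo and a non-root user (root bypasses
these permission checks):
mkdir demo && echo hi > demo/note.txt
chmod u-x demo        # drop execute (search) permission for the owner
ls demo               # still works: read permission is enough to list names
cat demo/note.txt     # fails with "Permission denied": no search permission
chmod u+x demo && chmod u-r demo   # restore execute, drop read instead
ls demo               # fails: cannot list the directory
cat demo/note.txt     # works: search permission allows access by exact name
chmod u+rx demo       # restore permissions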

Why this matters?


Directory permissions control security and accessibility
of the files and subdirectories within. Proper permission
settings prevent unauthorized users from accessing or
modifying directory contents.

Default Permissions of Files and Directories and Using umask
1. Default Permissions
• When you create a new file or directory, UNIX
assigns it default permissions.
• These defaults depend on the system and are
modified by the umask value.
File Type    Default Permissions (before umask)
File         666 (rw-rw-rw-)
Directory    777 (rwxrwxrwx)


Note: Files generally do not have execute permission by
default.

2. What is umask?
• umask (user file creation mask) determines which
permission bits will be disabled (masked off) when a
new file or directory is created.
• It subtracts permissions from the system defaults.

3. How umask Works


• The permission bits set in the umask are turned off (masked out) of the
default permission; for common umask values this looks like simple subtraction.
Formula:
Final Permission = Default Permission with the umask bits removed (e.g., 666 - 022 = 644)

4. Example of umask

umask Value   Effect on Files/Directories           Final File Permission   Final Directory Permission
022           removes write for group and others    644 (rw-r--r--)         755 (rwxr-xr-x)
002           removes write for others only         664 (rw-rw-r--)         775 (rwxrwxr-x)
077           removes all for group and others      600 (rw-------)         700 (rwx------)

5. How to Check and Set umask


• Check current umask:
• umask
• Set umask (for example, set to 022):
• umask 022
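A short demonstration of the effect (a sketch; run in a scratch directory,
file and directory names are just examples):
umask             # show the current mask, e.g. 0022
umask 027         # group loses write, others lose everything
touch newfile
mkdir newdir
ls -ld newfile newdir
# -rw-r----- ... newfile   (666 masked by 027 -> 640)
# drwxr-x--- ... newdir    (777 masked by 027 -> 750)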

6. Summary

Aspect                  File Default   Directory Default   Effect of umask
System default perms    666            777                 Masks out permission bits from defaults
Typical umask value     022 (common)   022 (common)        Removes write permission for group and others

Listing Modification and Access Time


1. File Timestamps in UNIX
Every file in UNIX has three important timestamps:
Timestamp                   Description
Modification Time (mtime)   Last time file content was modified
Access Time (atime)         Last time file was read or accessed
Change Time (ctime)         Last time file metadata (permissions, ownership) changed

2. Viewing File Modification and Access Time


Use the ls command with options to display timestamps:
• Show modification time (default):
• ls -l filename
Displays modification time (mtime).
• Show access time:
• ls -lu filename
• Show change time:
• ls -lc filename

3. Detailed Timestamp Listing with stat


For a detailed view of all timestamps:
stat filename
Output example:
Access: 2025-05-15 10:30:00
Modify: 2025-05-14 09:00:00
Change: 2025-05-15 09:45:00

4. Summary
Command Shows

ls -l Modification time (mtime)

ls -lu Access time (atime)


ls -lc Change time (ctime)

stat Detailed timestamps

Changing File Timestamps (touch)


1. What is touch?
• The touch command is used to change the access
and modification times of a file.
• If the file does not exist, touch creates an empty file
with the current time as its timestamp.

2. Basic Usage
• Update access and modification times to the current
time:
• touch filename

3. Set Specific Time and Date


• Using -t option with [[CC]YY]MMDDhhmm[.ss]
format:
• touch -t 202505151230.45 filename
Sets timestamp to May 15, 2025, 12:30:45.
4. Change Only Access or Modification Time
• Change only access time:
• touch -a filename
• Change only modification time:
• touch -m filename

5. Use Reference File’s Timestamp


• Set timestamp of a file to match another file:
• touch -r referencefile filename

6. Summary of Useful touch Options


Option Description

touch file Update access and modification to now

touch -a file Update access time only

touch -m file Update modification time only

touch -t time file Set specific timestamp

touch -r ref file Use timestamp from reference file

File Locating with find Command


1. What is find?
• The find command searches for files and directories
in a directory hierarchy based on various criteria.
• It is very powerful and flexible for locating files by
name, type, size, modification time, permissions, and
more.

2. Basic Syntax
find [path] [options] [expression]
• path: Directory where search begins (default is
current directory .).
• options and expression: Criteria to match
files/directories.

3. Common Examples
• Find files by name:
• find /home/user -name "file.txt"
• Find files by case-insensitive name:
• find /home/user -iname "File.TXT"
• Find all directories named docs:
• find / -type d -name "docs"
• Find all regular files with .log extension:
• find /var/log -type f -name "*.log"
• Find files modified in last 7 days:
• find /home/user -mtime -7
• Find files larger than 10MB:
• find / -size +10M
• Find files with specific permissions:
• find / -perm 644

4. Actions with find


You can also perform actions on found files using -exec or
-delete.
• Delete all .tmp files:
• find /tmp -name "*.tmp" -delete
• Change permissions of found files:
• find /var/www -type f -name "*.php" -exec chmod 644
{} \;

5. Summary of Common Options


Option Description

-name Search by filename (case-sensitive)

-iname Search by filename (case-insensitive)


-type f/d Search for files/directories

-mtime n Modified time in days

-size +n File size greater than n

-perm mode Match permissions

-exec command Execute command on each found file

MODULE-5 SHELL
INTERPRETIVE CYCLE OF SHELL
In UNIX, the shell is an interpreter that processes
commands entered by the user. The interpretive cycle
refers to the process the shell follows to read, interpret,
and execute commands.

Interpretive Cycle of the UNIX Shell


Here is a step-by-step breakdown of the cycle:
1. Prompt and Input
o The shell displays a prompt (e.g., $, #, or a
custom prompt).
o The user types a command and hits Enter.
2. Read
o The shell reads the entire input line as a string.
3. Parse
o The shell breaks the input into tokens
(commands, arguments, operators like |, >, etc.).
o It recognizes special characters (wildcards *,
variables $VAR, pipes |, etc.).
4. Interpret
o It interprets built-in syntax (like variable
substitution, command substitution with
backticks or $(...), and redirections).
o It identifies whether the command is:
▪ A built-in command
▪ An alias
▪ A function
▪ An external executable
5. Execute
o The shell executes the command.
o For pipelines (|), it sets up the necessary
input/output redirection.
o For background tasks (&), it forks a subprocess.
6. Wait for Completion
o If it's a foreground task, the shell waits until the
command finishes.
o If it's a background task, it returns control to the
user immediately.
7. Return Output
o The shell displays output or error messages from
the command.
o It sets the exit status ($?) to reflect success or
failure.
8. Repeat
o The shell shows the prompt again and waits for
the next command.

Summary
The interpretive cycle of the UNIX shell can be
summarized as:
Prompt → Read → Parse → Interpret → Execute → Wait →
Output → Repeat
This cycle continues as long as the user is interacting with
the shell.
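The cycle can be mimicked with a tiny script. This is only an illustrative
sketch of the loop (a real shell does far more parsing and job control), and
eval is used purely for demonstration:
while true; do
  printf '$ '               # 1. Prompt
  read -r line || break     # 2. Read a line (Ctrl+D ends the loop)
  [ -z "$line" ] && continue
  eval "$line"              # 3-5. Parse, interpret, and execute
  echo "(exit status: $?)"  # 7. Report the exit status
done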

TYPES OF SHELL
In UNIX and UNIX-like operating systems, a shell is a
command-line interpreter that provides a user interface
for accessing the system's services. There are several
types of shells, each with its own features, syntax, and
scripting capabilities.

Types of Shell in UNIX


Here are the main types of shells commonly used:

Shell Name           Command   Description
Bourne Shell         sh        The original UNIX shell developed by Stephen Bourne. Simple and fast, often used in scripting.
C Shell              csh       Developed by Bill Joy. Syntax similar to C programming. Supports aliases, job control.
Korn Shell           ksh       Developed by David Korn. Combines features of sh and csh with additional scripting capabilities.
Bourne Again Shell   bash      Default shell in many Linux systems. Enhanced version of sh with features like command history, tab completion, and arithmetic.
Z Shell              zsh       Powerful shell with advanced features like themes, plugins, and extended globbing. Popular among power users.
Fish Shell           fish      Friendly interactive shell with a focus on usability. Modern features like autosuggestions and syntax highlighting.

Shell Comparison Overview

Feature               sh    bash     csh     ksh      zsh      fish
Scripting             Yes   Yes      Yes     Yes      Yes      Yes
Command History       No    Yes      Yes     Yes      Yes      Yes
Auto-completion       No    Yes      Basic   Yes      Yes      Yes
Syntax Highlighting   No    No       No      No       Plugin   Yes
Modern Features       Low   Medium   Low     Medium   High     High
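To check which shell you are using and which shells are installed (commands
commonly available on Linux systems; availability of chsh and the contents
of /etc/shells vary):
echo $SHELL        # login shell recorded for your account
echo $0            # the shell you are currently running
cat /etc/shells    # valid login shells on this system
chsh -s /bin/zsh   # change your login shell (path must be listed in /etc/shells)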


Pattern Matching
In the UNIX shell (like bash, sh, zsh), pattern matching is
called globbing. It is used mainly for filename expansion
and string matching in shell scripts.

1. Filename Globbing (Wildcard Matching)


The shell automatically expands these patterns when
dealing with files and directories.
Pattern Matches

* Zero or more characters

? Exactly one character

[abc] One character: a, b, or c

[a-z] Any lowercase letter

[!abc] or [^abc] Any character except a, b, or c

Example:
ls *.txt # All files ending in .txt
ls file?.sh # file1.sh, fileA.sh, but not file10.sh
ls [a-c]* # Files starting with a, b, or c

2. Pattern Matching in Shell Scripts (case and [[ ]])


case statement
Pattern matching is commonly used in case statements
for decision making.
filename="report.pdf"

case "$filename" in
*.pdf) echo "PDF file";;
*.txt) echo "Text file";;
*) echo "Unknown file type";;
esac
[[ ]] conditional with pattern
You can use pattern matching with [[ and ==:
file="data.csv"
if [[ $file == *.csv ]]; then
echo "It's a CSV file"
fi
Use =~ for regex matching:
if [[ "hello123" =~ ^hello[0-9]+$ ]]; then
echo "Pattern matched!"
fi
3. Shell Options (For Advanced Globbing)
You can enable extended globbing in bash with:
shopt -s extglob
Then use:
ls !(file1.txt) # Match all except file1.txt
ls +([0-9]) # Match one or more digits

Summary

Feature Syntax Use

Wildcard matching *, ?, [] File operations

Scripting match case, [[ ]] Conditional execution

Regex match [[ =~ ]] Pattern match with regex

Escaping in UNIX Shell


Escaping in the UNIX shell is the technique of preventing
the shell from interpreting special characters. This
allows you to treat symbols like *, $, |, >, etc., as literal
characters instead of commands or operators.

Why Escaping Is Needed


The shell interprets special characters for:
• Wildcards: *, ?
• Variables: $VAR
• Command substitution: $(...) or `...`
• Redirection and pipes: >, <, |
• Quoting: ', ", \
To use these characters as-is, you escape them.

1. Backslash \ – Escape a Single Character


The backslash tells the shell to treat the next character
literally.
echo \$HOME # Prints: $HOME instead of expanding it
echo "File name is file\*.txt" # Prints: file*.txt

2. Single Quotes ' – Strong Quoting


Everything inside single quotes is taken literally. No
variable or command substitution happens.
echo '$HOME' # Prints: $HOME
echo 'Today is `date`' # Prints: Today is `date`

3. Double Quotes " – Weak Quoting


Double quotes allow variable and command
substitution, but protect most other characters.
name="Alice"
echo "Hello $name" # Hello Alice
echo "File is *.txt" # File is *.txt

4. Escape Sequences in Shell


Common examples:
echo "Line 1\nLine 2" # Prints as one line (because \n
is not interpreted here)
echo -e "Line 1\nLine 2" # With -e, it interprets escape
sequences

Practical Examples
Escaping a wildcard:
touch file\*name.txt # Creates a file named file*name.txt
Escaping a dollar sign:
echo "This is \$100" # Prints: This is $100
Escaping within commands:
grep "1\.0" file.txt # Match "1.0" literally (dot is a special
character in regex)
Summary

Method   Description                                            Substitution Allowed?
\        Escapes the next character                             No
'...'    Strong quoting – everything literal                    No
"..."    Weak quoting – protects most, but allows $ and `...`   Yes

Quoting
Quoting in the UNIX shell is used to control how the shell
interprets special characters like spaces, $, *, |, and
more. It helps avoid unintended behavior in commands
and scripts.

Why Use Quoting?


Without quoting, the shell evaluates special characters.
Quoting prevents or controls this, especially when dealing
with:
• Filenames with spaces
• Variables containing special characters
• Command substitution
• Wildcards (*, ?), redirection (>, <), pipes (|)

1. Single Quotes '...' – Strong Quoting


• Everything inside is taken literally
• No variable expansion or command substitution
Example:
echo '$HOME' # Output: $HOME
echo 'Today is `date`' # Output: Today is `date`
Use when you want the shell to ignore all special
characters.

2. Double Quotes "..." – Weak Quoting


• Allows: variable expansion ($VAR), command
substitution (`cmd` or $(...))
• Protects: spaces, wildcards, most special characters
Example:
name="Alice"
echo "Hello $name" # Output: Hello Alice
echo "Today is $(date)" # Output: Today is Fri May 16 ...
Use when you want some interpretation, but also need to
protect spaces or wildcards.

3. Backslash \ – Escape One Character


• Escapes a single character
• Often used inside double quotes

Example:
echo \$HOME # Output: $HOME
echo "She said \"hello\"" # Output: She said "hello"

4. No Quotes – Full Interpretation


If you don’t quote at all, the shell will interpret everything:
• Spaces break arguments
• Wildcards expand
• Variables are expanded

Example:
file="My File.txt"
rm $file # Tries to remove `My` and `File.txt` as two files
(error!)
rm "$file" # Correct: quotes prevent word splitting
Quick Comparison Table

Type        Interprets $, *, etc.?   Preserves spaces?    Use case
'...'       No                       Yes                  Literal strings
"..."       Partially                Yes                  Most scripting needs
\char       No (for one char)        Yes (for one char)   Escape just one character
No quotes   Yes                      No                   Simple commands, risky input

Example in Script
#!/bin/bash
name="John Doe"
echo "Hello $name" # Correct
echo Hello $name # Will treat "John" and "Doe" as
separate arguments

Redirection in UNIX Shell


Redirection in the UNIX shell allows you to control where
input comes from and where output goes. Instead of
reading input from the keyboard and writing output to the
terminal, you can redirect them to or from files and other
commands.

Types of Redirection

1. Standard Output Redirection (>)


Sends the output of a command to a file, replacing its
contents.
ls > file_list.txt
# Output of `ls` is written to file_list.txt
2. Append Output (>>)
Adds output to the end of a file, preserving existing
contents.
echo "New entry" >> log.txt

3. Standard Input Redirection (<)


Takes input from a file instead of the keyboard.
sort < names.txt
# Sorts lines in names.txt
4. Standard Error Redirection (2>)
Redirects error messages (stderr) to a file.
ls nonexistentfile 2> errors.txt

5. Redirect Both Output and Errors


• Combine stdout and stderr into one file:
command > output.txt 2>&1
• Or in Bash (modern shells):
command &> output.txt

6. Here Document (<<)


Feeds multi-line input directly into a command.
cat << EOF
This is line 1
This is line 2
EOF

7. Here String (<<<)


Feeds a string as input to a command.
grep "hello" <<< "hello world"
File Descriptors in UNIX

FD Name Symbol Description

0 Standard Input < Input from keyboard or file

1 Standard Output > Normal output

2 Standard Error 2> Error messages

Summary Table

Operator   Purpose                    Example
>          Redirect stdout            ls > out.txt
>>         Append stdout              echo hi >> out.txt
<          Redirect stdin             sort < data.txt
2>         Redirect stderr            ls nofile 2> error.txt
2>&1       Merge stderr with stdout   command > all.txt 2>&1
&>         Redirect both (bash)       command &> all.txt
<<         Here Document              cat << EOF (see example above)
<<<        Here String                grep "foo" <<< "foobar"


Standard Input (stdin) in UNIX Shell
In the UNIX shell, standard input (stdin) refers to the
default source of input for a command — typically, it's
the keyboard.
• It is represented by file descriptor 0.
• You can redirect stdin from a file or provide input
directly.

1. Using Standard Input by Default


Many UNIX commands read from stdin if no file is given.
cat
# Then type something. Press Ctrl+D to signal end of input.

2. Redirecting stdin from a File (<)


You can feed input to a command from a file instead of
typing it manually.
sort < names.txt
# Sorts the contents of names.txt

3. Piping stdin (|)


Pipes send stdout of one command as stdin to another.
cat names.txt | sort
# Same as: sort < names.txt

4. Here Document (<<)


Feed multi-line input to stdin of a command directly in a
script or terminal.
cat << EOF
Line 1
Line 2
EOF

5. Here String (<<<)


Feed a single-line string into a command via stdin.
grep "root" <<< "root:x:0:0:root:/root:/bin/bash"

File Descriptor Reference

Name Descriptor Purpose

stdin 0 Input (keyboard or file)

stdout 1 Output (terminal)

stderr 2 Errors
Example: Script Using stdin
#!/bin/bash
echo "Enter your name:"
read name # reads from stdin
echo "Hello, $name"

Summary

Method     Description              Example
Keyboard   Default stdin            read var
< file     Redirect file to stdin   sort < input.txt
|          Pipe stdout → stdin      cat names.txt | sort
<<         Here Document            cat << EOF
<<<        Here String              grep "x" <<< "example text"

Standard Output (stdout) in UNIX Shell


In UNIX shell, standard output (stdout) is the default
destination for output from commands — typically the
terminal screen.
• It is represented by file descriptor 1.
• You can redirect stdout to a file or pipe it to another
command.

1. Standard Output by Default


Commands print to the terminal unless redirected.
echo "Hello, World"
# Output goes to terminal via stdout

2. Redirecting stdout to a File (>)


Use > to send output to a file, replacing any existing
content.
ls > filelist.txt
# Writes output of `ls` to filelist.txt

3. Appending stdout to a File (>>)


Use >> to append output to an existing file without
overwriting.
echo "New log entry" >> logs.txt

4. Piping stdout (|)


Send stdout of one command to another command’s
stdin.
ls | sort
# Passes output of `ls` to `sort`

5. Redirecting stdout with File Descriptor


Explicitly use 1> for stdout redirection (not always
required, but allowed).
echo "Test" 1> output.txt
You can also combine with stderr:
command > out.txt 2>&1 # stdout and stderr to same file

File Descriptor Reference

Name Descriptor Description

stdin 0 Standard input (keyboard)

stdout 1 Standard output (terminal)

stderr 2 Standard error (terminal)

Example Script Using stdout


#!/bin/bash
echo "Generating file list..."
ls > filelist.txt
echo "Done. Output saved to filelist.txt"

Summary

Operation             Symbol      Description                   Example
Output to terminal    (default)   Normal stdout                 echo "hi"
Redirect to file      >           Overwrite file with output    echo hi > out.txt
Append to file        >>          Add output to end of file     echo hi >> out.txt
Pipe to another cmd   |           stdout to next cmd's stdin    ls | sort
Explicit descriptor   1>          stdout to file                echo test 1> out.txt

Standard Error (stderr) in UNIX Shell


Standard error (stderr) is the default destination for error
messages and diagnostics. Like stdout, it usually prints to
the terminal, but separately from normal output.
• It is represented by file descriptor 2.
• Keeping stderr separate allows you to handle errors
differently from normal output.

1. Standard Error by Default


Errors print to the terminal independently of normal
output.
ls nonexistentfile
# Error message appears on terminal (stderr)

2. Redirect stderr to a File (2>)


You can redirect error messages to a file to review them
later.
ls nofile 2> errors.txt
# Error messages go to errors.txt instead of terminal

3. Redirect stderr to stdout (2>&1)


Combine error messages and normal output into the
same place (file or pipe).
command > alloutput.txt 2>&1
# Redirect stdout and stderr to alloutput.txt
Or with Bash shortcut:
command &> alloutput.txt

4. Redirect stderr to /dev/null (ignore errors)


Discard error messages by redirecting to the null device.
command 2> /dev/null

File Descriptor Reference

Name Descriptor Description

stdin 0 Standard input

stdout 1 Standard output

stderr 2 Standard error (errors)

Example: Separate stdout and stderr


ls existingfile nofile > output.txt 2> errors.txt
# Output of existingfile goes to output.txt
# Error for nofile goes to errors.txt

Summary Table

Operator       Description                     Example
2>             Redirect stderr to a file       ls nofile 2> errors.txt
2>&1           Redirect stderr to stdout       command > all.txt 2>&1
&>             Redirect both stdout & stderr   command &> all.txt
2> /dev/null   Ignore error messages           command 2> /dev/null

/dev/null — The "Black Hole" of UNIX


• It’s a special device file that discards all data written
to it.
• Any output redirected to /dev/null vanishes and does
not consume disk space.
• Reading from /dev/null returns EOF (end-of-file)
immediately.
Use cases:
• Suppress output or errors you don’t want to see.
• Silence commands that produce unnecessary
output.
Examples:
# Discard stdout
command > /dev/null

# Discard stderr
command 2> /dev/null

# Discard both stdout and stderr


command &> /dev/null

/dev/tty — The Controlling Terminal


• Represents the current user’s terminal device (your
keyboard & screen).
• Useful when you want a command to read input from
or write output to the terminal directly, regardless
of redirections.
Use cases:
• When input/output has been redirected but you want
to interact with the user.
• Reading passwords or prompts securely.
Examples:
# Write directly to terminal even if stdout is redirected
echo "Hello User" > /dev/tty
# Read input from terminal even if stdin is redirected
read -p "Enter password: " password < /dev/tty

Summary

Device      Purpose                             Behavior                       Common Use
/dev/null   Discards everything written to it   Data is lost (black hole)      Suppress unwanted output
/dev/tty    Connects to the user's terminal     Current user terminal device   Prompt the user during redirected I/O

Pipe (|) in UNIX Shell


A pipe is a powerful feature in the UNIX shell that allows
you to connect the output of one command directly as
the input to another command.

What is a Pipe?
• The pipe symbol: |
• It takes the standard output (stdout) of the
command on the left and passes it as standard
input (stdin) to the command on the right.
• Enables building command chains for complex
tasks.

Basic Example
ls -l | grep ".txt"
• ls -l lists files.
• grep ".txt" filters output to show only .txt files.

More Complex Example


ps aux | grep firefox | awk '{print $2}'
• ps aux: lists running processes.
• grep firefox: filters lines containing "firefox".
• awk '{print $2}': prints the second field (usually the
PID).

Why Use Pipes?


• Combines simple tools to perform complex
operations.
• Avoids creating temporary files.
• Streamlines data processing.

Multiple Pipes
You can chain many commands:
cat file.txt | tr 'a-z' 'A-Z' | sort | uniq -c | sort -nr
• Converts text to uppercase, sorts, counts unique
lines, and sorts by frequency.

Pipes and Exit Status


• Each command runs independently.
• Exit status $? reflects the last command in the pipe
chain.
• To check status of all commands, use PIPESTATUS
array in bash.
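A small bash illustration of this:
false | true
echo $?                    # 0 – exit status of the last command (true)
echo "${PIPESTATUS[@]}"    # 1 0 – exit status of each command in the pipeline (bash only)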

Summary

Symbol   Description                                       Example
|        Pipe stdout of one command to stdin of the next   ls -l | grep ".txt"
tee Command in UNIX Shell
The tee command reads from standard input and writes
it both to standard output and to one or more files
simultaneously.

What Does tee Do?


• Takes input from a command or pipe.
• Sends that input to a file (or files) AND to the
terminal (stdout).
• Useful when you want to save output to a file but
still see it on the screen.

Basic Syntax
command | tee filename

Examples
Write output to a file and display on terminal
ls -l | tee filelist.txt
• Lists files.
• Shows output on screen.
• Saves output to filelist.txt.
Append output to a file instead of overwriting
echo "New line" | tee -a logfile.txt
• -a flag appends instead of replacing.

Write output to multiple files


some_command | tee file1.txt file2.txt

Common Use Cases


• Logging output while monitoring it live.
• Debugging scripts or commands.
• Saving output for later analysis without losing
interactive display.

Summary

Option           Description                       Example
None             Write stdout to file and screen   cmd | tee file.txt
-a               Append instead of overwrite       cmd | tee -a file.txt
Multiple files   Write to multiple files           cmd | tee file1.txt file2.txt


Command Substitution in UNIX Shell
Command substitution allows you to run a command
inside another command, and use the output of the first
command as input or part of the second command.

Purpose
• Capture the output of a command and store it in a
variable or use it directly.
• Helps automate scripts by dynamically generating
values.

Syntax
Two common ways to do command substitution:
1. Using backticks `command` (older style)
2. Using $(command) (preferred modern style)

Examples
Using backticks
today=`date`
echo "Today's date is: $today"
Using $()
files_count=$(ls | wc -l)
echo "Number of files: $files_count"

Why prefer $() over backticks?


• Easier to nest commands without escaping.
• More readable, especially in complex scripts.

Nested command substitution example


result=$(echo "Today is $(date +%A)")
echo "$result"

Use in commands
echo "Home directory contains $(ls ~ | wc -l) items."

Summary

Syntax        Description                  Example
`command`     Older command substitution   today=`date`
$(command)    Modern, preferred syntax     files_count=$(ls | wc -l)


Shell Variables
Shell variables store data or values that can be used and
manipulated within shell scripts or command line
sessions.

Types of Shell Variables


1. User-defined variables — created by users or
scripts.
2. Environment variables — exported to child
processes.
3. Special variables — predefined by the shell (e.g., $?,
$#).

How to Define Variables


variable_name=value
• No spaces around =.
• Values can be strings, numbers, etc.
Example:
name="Alice"
age=25
Accessing Variables
Use $ before variable name to get its value.
echo "Name is $name"
echo "Age is $age"

Exporting Variables
To make variables available to child processes, use
export:
export PATH=$PATH:/my/custom/path

Special Variables

Variable Meaning

$? Exit status of last command

$$ PID of current shell process

$# Number of arguments passed

$0 Name of the shell/script

$1, $2, ... Positional parameters (arguments to scripts)

Example Script Using Variables


#!/bin/bash
greeting="Hello"
name="Bob"
echo "$greeting, $name!"
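The special variables from the table above can be demonstrated with a slightly
longer sketch (the script name and arguments are just examples; save it, make it
executable, and run ./showargs.sh apple banana):
#!/bin/bash
echo "Script name   : $0"
echo "Arg count     : $#"
echo "First argument: $1"
echo "Shell PID     : $$"
ls /nonexistent 2> /dev/null
echo "Last exit code: $?"   # non-zero because the ls above failed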

Tips
• Use quotes around variable values with spaces:
var="Hello World"
• Use {} for clarity: ${var}

Summary

Action Syntax Example

Assign variable var=value name="Alice"

Access variable $var or ${var} echo $name

Export variable export var export PATH

Special vars $?, $$, $#, $1 echo $?

Basic Idea About UNIX Processes


A process in UNIX is an instance of a running program.
When you run a command or start an application, the
system creates a process to execute it.
What is a Process?
• A process is an active program with its own:
o Process ID (PID)
o Memory space
o Program counter (current execution point)
o File descriptors (input/output)
o State (running, sleeping, stopped, etc.)

Process Lifecycle
1. Created — when a program starts (using fork() and
exec() system calls).
2. Running — executing instructions on the CPU.
3. Waiting — waiting for resources or input/output.
4. Terminated — finished execution or killed.

Process States

State      Description
Running    Currently executing on CPU
Sleeping   Waiting for an event or resource
Stopped    Suspended or paused
Zombie     Finished but still has an entry in the process table (waiting for parent to acknowledge)

Process Identification
• PID (Process ID): Unique identifier for each process.
• PPID (Parent Process ID): The PID of the process that
started (created) this process.

Viewing Processes
• Use ps to see current processes.
• Use top or htop for dynamic process monitoring.
• Use kill to send signals to processes.
Example:
ps aux | grep firefox

Process Control
• Foreground process: Runs interactively; shell waits
until it finishes.
• Background process: Runs concurrently; shell
returns prompt immediately (run command with &).
sleep 30 & # Run sleep in background

Signals
• Processes can receive signals to perform actions:
o SIGTERM (terminate)
o SIGKILL (force kill)
o SIGSTOP (pause)
o SIGCONT (continue)

Summary

Concept Description

Process Running instance of a program

PID Unique process identifier

PPID Parent process ID

Foreground Runs with shell waiting

Background Runs asynchronously

Signals Control commands to processes


Displaying Process Attributes Using the ps Command
The ps command is used to view information about
running processes on your system. It shows details
(attributes) like process ID, user, CPU usage, memory
usage, command name, etc.

Basic Usage
ps
Shows processes running in the current shell session.

Common Options to Display More Info

Option Description Example

aux Shows all processes with detailed info ps aux

-ef Full-format listing of all processes ps -ef

-u user Processes for specific user ps -u alice

-p PID Information for a specific process ps -p 1234

Important Process Attributes Shown by ps


Column Description

PID Process ID

PPID Parent Process ID

USER Owner of the process

%CPU CPU usage percentage

%MEM Memory usage percentage

VSZ Virtual memory size (KB)

RSS Resident Set Size (physical mem)

STAT Process state/status

START Process start time

TIME CPU time used

COMMAND Command that started the process

Example: Show All Processes with Details


ps aux
Sample output snippet:
USER PID %CPU %MEM VSZ RSS TTY STAT START
TIME COMMAND
root 1 0.0 0.1 169112 5800 ? Ss 08:00 0:01
/sbin/init
alice 1234 2.3 1.2 345678 24500 pts/0 S+ 09:45 0:15
bash

Filtering with grep


ps aux | grep ssh
Lists processes related to ssh.

Customize Output Columns (ps with -o)


You can specify which columns to display:
ps -eo pid,user,%cpu,%mem,comm

Summary

Command Description

ps Show current shell processes

ps aux Show all processes with details

ps -ef Full-format all processes

ps -p PID Show info for specific process

ps -eo cols Customize displayed columns


Displaying System Processes
System processes are usually started by the system or
root user and run in the background to manage system
tasks.

How to Display System Processes


1. Using ps Command
To list all system processes, you can use:
ps -ef
• Shows all processes with detailed info.
• Look for processes owned by root or system users.

2. Using ps aux
ps aux
• Shows all processes.
• The USER column shows process owner; system
processes usually run as root or other system users.

3. Filter System Processes


If you want to see only system processes (owned by
root):
ps -U root -u root u
• Lists processes owned by the root user.

4. Using top or htop


Run interactive tools to monitor all processes dynamically:
top
or if installed,
htop
System processes are shown along with user processes,
sorted by CPU/memory use.

5. Check for system daemons


You can filter system daemons (services) by common
names or look under /etc/init.d or systemd units.

Sample command showing system processes


ps -eo pid,user,comm | grep '^ *[0-9]* root'

Summary:

Command                Description
ps -ef                 Show all processes
ps aux                 Show all processes with detailed info
ps -U root -u root u   Show processes owned by root
top / htop             Interactive process monitoring

Process Creation Cycle


When a new process is created in UNIX, it goes through
several stages involving key system calls. Here’s the basic
cycle:

1. Parent Process Calls fork()


• fork() creates a new child process by duplicating the
parent process.
• The child is an almost exact copy but has a unique
Process ID (PID).
• After fork(), two processes run concurrently: parent
and child.

2. Child Process Calls exec()


• The child process replaces its memory space with a
new program using exec() family of functions (e.g.,
execl, execvp).
• This loads and runs the new program in the child
process.
• If exec() fails, the child can handle the error or
terminate.

3. Parent Process Uses wait()


• Parent may use wait() or waitpid() to wait for the
child to finish.
• This helps parent collect the child’s exit status and
clean up system resources.

Process Creation Steps Summary:


Step Description

fork() Creates a new child process

exec() Child process loads and runs new program

wait() Parent waits for child process termination

Diagram of Process Creation Cycle:


Parent process
|
fork()
/ \
Parent Child
|
exec()
|
New program runs
|
exit
|
wait() by parent

Example in Shell Script


#!/bin/bash
echo "Parent PID: $$"
sleep 10 &
child_pid=$!
echo "Child PID: $child_pid"
wait $child_pid
echo "Child finished"
• Here, shell creates a background process (sleep 10
&).
• The script waits for it to complete with wait.

Shell Creation Steps


1. init Process
o The very first process started by the kernel during
boot (PID = 1).
o Responsible for initializing the system and
starting system services.
2. getty Process
o init launches getty on terminals (virtual consoles
or serial ports).
o getty manages the login prompt — it opens the
terminal, sets modes, and waits for user input.
3. login Process
o After user enters username at getty prompt, login
authenticates the user by verifying the password.
o On successful authentication, login sets up the
user environment (home directory, shell, etc.).
4. User’s Shell
o Finally, login starts the user’s shell (e.g., bash,
sh, ksh).
o The shell provides the command prompt,
accepting and executing user commands.

Flow Diagram:
Kernel boot

init (PID 1)

getty (terminal login prompt)

login (user authentication)

User’s Shell (command prompt)

Summary:
Step Role

init Starts system services and terminals

getty Presents login prompt on terminal

login Authenticates user and sets environment

Shell Provides user interface for commands


Process States in UNIX
Every process in UNIX is always in one of several states
that describe what it is currently doing or waiting for.

Common Process States


State Code   Full Name               Description
R            Running                 The process is actively running on the CPU or ready to run.
S            Sleeping                Waiting (sleeping) for some event or resource; usually interruptible sleep.
D            Uninterruptible Sleep   Waiting for I/O (disk, network); cannot be interrupted by signals.
T            Stopped                 The process has been stopped (usually by a signal like SIGSTOP).
Z            Zombie                  The process has terminated but still has an entry in the process table because its parent hasn't acknowledged it yet.
I            Idle (Linux specific)   Idle kernel thread (not always shown in all UNIX variants).

How to See Process States?


Use ps or top commands and check the STAT or S column.
Example:
ps aux
You’ll see a STAT column like:
R Running
S Sleeping
Z Zombie
T Stopped

Additional Notes
• Processes can move between states, for example
from Running → Sleeping → Running.
• Zombie processes are usually harmless but indicate
that the parent process hasn't waited on the child.
• T (stopped) processes can be resumed with kill -
CONT PID.
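A quick way to watch a process move between these states (a sketch; job
numbers and exact ps output will differ):
sleep 300 &                          # background job, state S (sleeping)
kill -STOP %1                        # stop it; STAT now shows T
ps -o pid,stat,cmd | grep 'slee[p]'
kill -CONT %1                        # resume it in the background
kill %1                              # clean up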

Summary Table

Code   Description                Example Usage/Meaning
R      Running or runnable        Process currently using CPU or ready
S      Sleeping (interruptible)   Waiting for input or resource
D      Uninterruptible sleep      Waiting for I/O (disk/network)
T      Stopped or traced          Suspended by signal (e.g., Ctrl+Z)
Z      Zombie                     Defunct process, exited but not reaped

Zombie State
A zombie process is a process that has completed
execution but still has an entry in the process table. It’s
also called a defunct process.

What is a Zombie Process?


• When a child process finishes, it sends an exit status
to its parent.
• The child process remains in the process table as a
zombie until the parent reads its exit status using
wait() or waitpid().
• Zombies do not consume CPU or memory, but they
use a process ID (PID).
• If the parent never collects the exit status, zombies
accumulate and may exhaust system resources.

How Does a Zombie Appear?


• After process terminates:
o It becomes a zombie.
o It still shows up in ps output with status Z.
Example output:
$ ps aux | grep Z
user 12345 0.0 0.0 0 0? Z 12:34 0:00 [defunct]
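One way to observe a short-lived zombie from the shell is the following sketch
(the timing values are arbitrary; it relies on exec so that the child's parent
never calls wait()):
( sleep 2 & exec sleep 20 ) &   # child exits after 2s; its parent (now sleep 20) never waits
sleep 5
ps -ef | grep '[d]efunct'       # the exited sleep shows as <defunct> (state Z) until sleep 20 ends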

How to Handle Zombies?


1. Parent should call wait() to clean up the zombie.
2. If the parent process dies, the init process (PID 1)
adopts the zombie and cleans it up.
3. To manually remove zombies, you can:
o Restart the parent process.
o Kill the parent process (if safe).

Why Zombies are Important?


• Zombies indicate that the parent process is not
properly cleaning up child processes.
• Accumulated zombies can exhaust the maximum
number of processes allowed.

Summary
Aspect Description

State Code Z (Zombie / Defunct)

Meaning Process terminated but not waited on

Consumes No CPU/memory, holds PID

Solution Parent calls wait() to reap

Background Jobs
Background jobs let you run commands without waiting
for them to finish, so you can keep using the shell prompt.
1. Using & Operator
• Add & at the end of a command to run it in the
background.
sleep 30 &
• Shell immediately returns prompt.
• The job runs independently.
• You can see jobs with jobs command.
• Bring job to foreground with fg %job_number.

2. Using nohup Command


• nohup (no hangup) runs a command immune to
hangup signals.
• This means the command keeps running even if you
close the terminal.
nohup long_running_command &
• Output by default goes to nohup.out.
• Useful for running jobs remotely or when logging out.

3. Difference Between & and nohup


Aspect                     &                              nohup
Job runs in background     Yes                            Yes
Survives terminal logout   No (usually killed)            Yes
Output file                Terminal (unless redirected)   nohup.out by default

4. Example
# Run script in background
./backup.sh &

# Run script in background, immune to logout


nohup ./backup.sh &

5. Managing Background Jobs


• jobs — list background jobs.
• fg %1 — bring job 1 to foreground.
• bg %1 — resume stopped job 1 in background.
• kill %1 — terminate job 1.
Summary
Command Purpose

command & Run command in background

nohup command & Run command immune to hangup

jobs List background jobs

fg %job / bg %job Manage job state

Reduce Priority of a Process Using nice in UNIX Shell
The nice command is used to start a process with a
modified scheduling priority (called “niceness”).
• A higher nice value means lower priority (the
process gets less CPU time).
• Default nice value is usually 0.
• Nice values range from -20 (highest priority) to +19
(lowest priority).

Using nice to Reduce Priority


To reduce priority (make process “nicer” to others),
increase the nice value:
nice -n 10 command
This runs command with niceness 10 (lower priority than
default 0).

Example
nice -n 15 tar -czf backup.tar.gz /large/directory
• Compressing runs with low CPU priority, so other
processes get preference.

Checking Niceness of a Running Process


Use ps with ni (nice) column:
ps -o pid,ni,cmd -p <PID>
Example:
ps -o pid,ni,cmd -p 1234

Changing Priority of a Running Process (renice)


To change priority of an existing process:
renice +10 -p 1234
This sets niceness of process 1234 to 10.

Summary:
Command                    Purpose
nice -n <value> command    Start command with the given nice value
renice <value> -p <PID>    Change nice value of a running process
ps -o pid,ni,cmd           View niceness of processes

Using Signals to Kill a Process in UNIX Shell
In UNIX, signals are used to send asynchronous
notifications to processes. You can send signals to kill,
stop, or control processes.

Common Commands to Kill Processes Using Signals


1. kill Command
Sends a signal to a process by its PID.
kill <signal> <PID>
• Default signal is SIGTERM (signal 15) — politely asks
process to terminate.
• To force kill, use SIGKILL (signal 9).
Examples:
kill 1234 # Sends SIGTERM to process 1234
kill -9 1234 # Sends SIGKILL to force kill process 1234
kill -SIGSTOP 1234 # Stops (pauses) the process
kill -SIGCONT 1234 # Resumes a stopped process

2. killall Command
Kills processes by name instead of PID.
killall firefox
• Sends SIGTERM by default.
• Use -9 for force kill.

3. pkill Command
Similar to killall but more flexible (pattern matching).
pkill -9 sshd

Common Signals for Killing


Signal Number Description

SIGTERM 15 Graceful termination request

SIGKILL 9 Force kill (cannot be caught)

SIGSTOP 19 Stop process (pause)



SIGCONT 18 Continue a stopped process

How to Find Process IDs


ps aux | grep process_name
or
pidof process_name

Example
pid=$(pidof myprogram)
kill -15 $pid # politely ask to terminate
sleep 5
kill -9 $pid # force kill if still running

Summary
Command Usage

kill <PID> Send termination signal to process

kill -9 <PID> Force kill process

killall <name> Kill process by name

pkill <pattern> Kill processes matching pattern


Sending Jobs to Background and Foreground
UNIX shells let you manage jobs by sending them to the
background or foreground.

1. Send a Job to Background (bg)


• Use bg to resume a stopped job in the background.
• Jobs usually get stopped by pressing Ctrl + Z.
Example:
# Start a job
sleep 100
# Press Ctrl+Z to stop (pause) it
[1]+ Stopped sleep 100
# Resume it in background
bg %1
[1]+ sleep 100 &

2. Bring a Job to Foreground (fg)


• Use fg to bring a background or stopped job to the
foreground so you can interact with it.
Example:
fg %1
• Brings job number 1 to foreground.

3. List Jobs
jobs
Shows jobs with job numbers and states.

4. Start Job Directly in Background


Add & at the end of the command:
sleep 100 &

Summary
Command Description

command & Start command in background

Ctrl + Z Stop (pause) running job

bg %job Resume stopped job in background

fg %job Bring background job to foreground

jobs List current jobs


Listing Jobs with jobs Command
The jobs command shows the list of jobs started in the
current shell session and their status.

What Does jobs Show?


• Job ID (number)
• Process ID (PID)
• Status (Running, Stopped, etc.)
• Command line

How to Use jobs


jobs
Example output:
[1]+ Running sleep 100 &
[2]- Stopped vim file.txt
• [1]+ is job number 1, currently running.
• [2]- is job number 2, currently stopped.
• + and - mark the current and previous job for fg and
bg.

Useful Options
Option Description

-l List jobs with PIDs

-p List only PIDs of the jobs


Example:
jobs -l
Shows something like:
[1]+ 2345 Running sleep 100 &
[2]- 2346 Stopped vim file.txt

Summary
Command Purpose

jobs List current shell jobs

jobs -l List jobs with PIDs

jobs -p List only job PIDs

Suspending a Job in UNIX Shell

How to Suspend a Running Job


• Press Ctrl + Z while the job is running in the
foreground.
• This sends the SIGTSTP signal to the process,
pausing (suspending) it.
• The job moves to a stopped state but remains in
memory.

What Happens After Suspension?


• The shell displays a message like:
[1]+ Stopped command_name
• The job can be resumed later with fg (foreground) or
bg (background).

Example Workflow
$ sleep 100
# Press Ctrl+Z
[1]+ Stopped sleep 100

$ jobs
[1]+ Stopped sleep 100

$ bg %1 # Resume in background
[1]+ sleep 100 &
$ fg %1 # Bring back to foreground
sleep 100

Summary
Action                 Command / Key    Effect
Suspend job            Ctrl + Z         Stops the running job
List suspended jobs    jobs             Show all stopped/background jobs
Resume in background   bg %job_number   Continue job in background
Resume in foreground   fg %job_number   Bring job to foreground

Killing a Job
You can kill a job that’s running or stopped in your current
shell session by sending it a termination signal.

Steps to Kill a Job


1. List Jobs to Identify Job Number
jobs
Example output:
[1]+ Running sleep 100 &
[2]- Stopped vim file.txt

2. Kill the Job Using kill with Job ID


Use the job ID prefixed with %:
kill %1
• This sends SIGTERM (signal 15) by default to the job
#1.
• You can force kill with SIGKILL:
kill -9 %1

3. Alternatively, Kill by PID


Find the PID using:
jobs -l
Output example:
[1]+ 2345 Running sleep 100 &
Then:
kill 2345
Summary Table
Command Description

jobs List current shell jobs

kill %job_number Kill job by job number

kill -9 %job_number Force kill job

jobs -l List jobs with PIDs

kill PID Kill job by PID

Executing Commands at a Specified Time in UNIX Shell — at and batch

1. at Command
• Schedules a command or script to run once at a
specific time.
• The job runs when the system clock reaches the
specified time.
• Requires the at daemon (atd) running on the system.
Basic Usage
at 14:30
Then type commands you want to run at 2:30 PM, end
input with Ctrl+D.
Example:
at 22:00
at> /home/user/backup.sh
at> <Ctrl+D>

Check Scheduled Jobs


atq

Remove Scheduled Job


atrm <job_number>

2. batch Command
• Runs commands when system load is low (based on
loadavg).
• Useful for scheduling jobs to run during idle times
without specifying exact time.
Usage
batch
Then type the commands to run, and finish with Ctrl+D.
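A minimal non-interactive example (the script path is hypothetical):

# Queue a report script; atd starts it once the load average is low enough
echo "/home/user/report.sh" | batch
atq    # confirm the job is waiting in the queue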

Summary
Command   Purpose                            When It Runs
at        Schedule job at a specific time    At the specified time
batch     Schedule job when system is idle   When system load is low

Example: Schedule Backup at 3 AM


echo "/home/user/backup.sh" | at 03:00
Or interactively:
at 03:00
at> /home/user/backup.sh
at> <Ctrl+D>

MODULE-6 Customization
Use of Environment Variables
Environment variables in UNIX are key-value pairs that
define settings and behavior for the shell and system
processes. They are inherited by child processes and used
to configure the working environment.

Common Uses of Environment Variables


Use Case                            Example Variables   Description
Set user info                       USER, HOME          User name and home directory
Configure shell behavior            PATH, SHELL         Set search path and shell type
Define system settings              LANG, TERM          Language and terminal type
Store session-specific info         PWD, OLDPWD         Present/previous working directory
Pass settings to programs/scripts   Custom vars         Your own defined variables

Viewing Environment Variables


printenv # Show all environment variables
echo $HOME # Show value of a specific variable
env # Also lists environment variables

Setting Environment Variables


1. Temporarily (Only in Current Session)
export MYVAR="hello"
echo $MYVAR
2. For a Single Command
MYVAR="value" command_name
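A quick sketch showing that the variable is visible only to that one command, not to the current shell (MYVAR is just an example name):

MYVAR="hello" bash -c 'echo "child sees: $MYVAR"'   # prints: child sees: hello
echo "current shell sees: $MYVAR"                   # MYVAR is not set here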

3. Permanently (For All Sessions)


Add to shell configuration files like:
• ~/.bashrc (for interactive shells)
• ~/.bash_profile or ~/.profile (for login shells)
Example:
export EDITOR=nano
Then run:
source ~/.bashrc

Modifying Important Environment Variables


PATH – System path for command lookup:
echo $PATH
export PATH=$PATH:/custom/bin

Example Use in Script


#!/bin/bash
echo "Your home is $HOME"
echo "The current user is $USER"

Summary
Command Purpose

echo $VAR Show value of variable

export VAR=value Set variable (temporarily)

env, printenv List all environment variables

unset VAR Remove an environment variable

Common Environment Variables


Here’s a list of frequently used environment variables in
UNIX along with their meanings and usage:

Variable   Description
HOME       The path to the current user's home directory. Example: /home/alice
PATH       A colon-separated list of directories the shell searches for executables. Example: /usr/local/bin:/usr/bin:/bin
LOGNAME    The login name of the current user (same as $USER in most cases).
USER       The username of the currently logged-in user.
TERM       Specifies the type of terminal to emulate (e.g., xterm, vt100).
PWD        Present Working Directory; shows the current directory you're in.
PS1        Primary shell prompt string; defines how your command prompt looks. Default is usually \u@\h:\w\$ (user@host:cwd$).
PS2        Secondary prompt; shown when a command is incomplete (e.g., waiting for more input). Default: >

Example Usage
echo $HOME # Displays your home directory
echo $PATH # Shows all paths where shell looks for
commands
echo $PWD # Shows your current working directory

Modifying a Variable
export PATH=$PATH:/custom/bin # Add /custom/bin to
your PATH
export PS1=">> " # Change shell prompt

Tip
To permanently change an environment variable, add the
export line to your shell config file:
• For bash: ~/.bashrc or ~/.bash_profile
• For zsh: ~/.zshrc

Aliases
Aliases in UNIX shell are shortcuts or custom names for
longer commands. They help you save time, reduce typing,
and avoid mistakes with frequently used or complex
commands.

Create an Alias
Temporary Alias (lasts for current shell session):
alias name='command'
Example:
alias ll='ls -l'
alias gs='git status'
alias rm='rm -i' # Prompts before deleting

Use the Alias


ll # Runs: ls -l
gs # Runs: git status

View All Defined Aliases


alias

Remove an Alias
unalias name
Example:
unalias ll

Make Alias Permanent


Add it to your shell’s config file:
• Bash: ~/.bashrc or ~/.bash_profile
• Zsh: ~/.zshrc
Example (~/.bashrc):
alias lla='ls -la'
alias c='clear'
Then run:
source ~/.bashrc

Notes
• Aliases do not accept arguments. For that, use a shell function (see the sketch after these notes).
• You can view the actual command behind an alias:
type ll
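Because aliases cannot take arguments, a shell function is the usual substitute. A minimal sketch (mkcd is a made-up name) that creates a directory and changes into it:

# Put this in ~/.bashrc, then use it as: mkcd projects/new
mkcd() {
    mkdir -p "$1" && cd "$1"
}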

Summary

Command Purpose

alias name='cmd' Create alias

unalias name Remove alias

alias List all aliases

type alias_name See what an alias maps to


Brief Idea of Command History
The command history feature in UNIX shells lets you
view, reuse, and manage previously executed
commands—saving time and effort.

View Command History


history
• Displays a numbered list of recent commands.
• Example:
• 101 ls -l
• 102 cd /var/log
• 103 grep "error" syslog

Reuse Past Commands

Syntax Action

!! Run the last command again

!n Run command number n from history

!-n Run the command that was n steps ago

!string Run the most recent command starting with string


Examples:
!! # Run the last command
!105 # Run command number 105
!ls # Run last command that started with 'ls'

Search History (Interactive)


• Reverse search: Press Ctrl + R and start typing
• (reverse-i-search)`ssh`: ssh user@server
• Press Enter to execute, or arrow keys to edit.

Configuration
Where Is History Stored?
• Usually stored in ~/.bash_history (for Bash) or
~/.zsh_history (for Zsh).
• You can view or edit this file directly.

Useful History Commands


history # View command history
history -c # Clear history (current session)
history -d N # Delete history entry number N

Customize History Behavior


Set in .bashrc or .bash_profile:
export HISTSIZE=1000 # Number of commands to
keep in memory
export HISTFILESIZE=2000 # Number of commands to
keep in .bash_history file
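Bash also lets you control what gets recorded. For example (HISTCONTROL and HISTIGNORE are standard Bash variables):

export HISTCONTROL=ignoredups    # skip commands identical to the previous one
export HISTIGNORE="ls:pwd"       # never record plain 'ls' or 'pwd'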

Summary

Command Description

history Show history of commands

!! Repeat last command

!n / !cmd Run specific past command

Ctrl + R Reverse search in history

history -c Clear the command history

Prepare File for Printing Using pr Command
The pr command in UNIX is used to format text files for
printing. It adds features like headers, page breaks,
margins, columns, and more.

Basic Syntax
pr [options] filename

Common pr Options

Option Description

-n Add line numbers

-d Double-space the output

-t Remove headers and footers

-h "text" Set a custom header

-l N Set page length to N lines (default is 66)

-o N Set left margin offset to N spaces

-w N Set page width (for multicolumn output)

-COLUMN Format output into that many columns

Examples
1. Format file for printing:
pr myfile.txt
Adds headers with filename, date, and page numbers.

2. Format into 2 columns:


pr -2 myfile.txt
3. Add line numbers:
pr -n myfile.txt

4. Custom header and no footer:


pr -h "Monthly Report" -t myfile.txt

5. Format and send to printer:


pr myfile.txt | lpr

Summary
The pr command is a simple but powerful tool to:
• Prepare text files for printing
• Control page layout
• Format output into columns
• Add headers, margins, and spacing

Custom Display of File Using head and tail
The head and tail commands are used to view specific
parts of a file — typically the beginning or end. They are
very useful for quickly checking logs, data, or config files.

head Command – Show First N Lines


Syntax:
head [options] filename
Common Usage:
Command Description

head filename.txt Shows first 10 lines (default)

head -n 5 file.txt Shows first 5 lines

head -c 20 file.txt Shows first 20 bytes/characters

tail Command – Show Last N Lines


Syntax:
tail [options] filename
Common Usage:
Command Description

tail file.txt Shows last 10 lines (default)

tail -n 15 file.txt Shows last 15 lines



tail -c 30 file.txt Shows last 30 bytes/characters

Live Monitoring with tail -f


tail -f /var/log/syslog
• Continuously shows new lines as they’re added (real-
time monitoring).
• Useful for logs and debug output.

Combining head and tail for Custom Range


Example: View lines 11 to 20 of a file:
head -n 20 file.txt | tail -n 10
• head -n 20 gets the first 20 lines.
• tail -n 10 trims it to the last 10 of those = lines 11–20.
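An equivalent approach lets tail start at a given line instead (tail -n +K prints from line K to the end):

tail -n +11 file.txt | head -n 10    # start at line 11, keep 10 lines = lines 11-20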

Summary

Command                             Output
head file.txt                       First 10 lines
tail file.txt                       Last 10 lines
tail -f file.txt                    Live output as file grows
head -n 50 file.txt | tail -n 10    Lines 41 to 50

Vertical Division of File Using cut


The cut command in UNIX is used to extract specific
columns (fields or characters) from each line of a file —
this is known as vertical division.

Syntax
cut [OPTION]... [FILE]

Common Options

Option Description

-b Select bytes

-c Select characters

-f Select fields (columns)

-d Set delimiter (used with -f)

Examples
1. Cut by Byte Position
cut -b 1-5 file.txt
→ Prints first 5 bytes from each line.

2. Cut by Character Position


cut -c 1-10 file.txt
→ Prints first 10 characters of each line.

3. Cut by Fields with Delimiter


cut -d ':' -f 1 /etc/passwd
→ Shows the first field (username) from the colon-
separated /etc/passwd file.

4. Cut Multiple Fields


cut -d ',' -f 1,3 file.csv
→ Extracts fields 1 and 3 from a CSV file.

5. Cut a Range of Fields


cut -d ':' -f 2-4 file.txt
→ Extracts fields 2 to 4 using : as the delimiter.

Summary
Use Case Command Example

First 5 bytes cut -b 1-5 file.txt

First 10 characters cut -c 1-10 file.txt

2nd column from CSV cut -d ',' -f 2 data.csv

Multiple fields from passwd cut -d ':' -f 1,3 /etc/passwd

Combine Files Horizontally Using paste


The paste command in UNIX is used to merge lines of
multiple files horizontally (i.e., side-by-side). This is
called horizontal concatenation of file contents.

Syntax
paste [OPTION]... [FILE]...

How It Works
Each line of the input files is joined together with a tab (by
default), producing one combined line per output line.

Examples
1. Basic Horizontal Merge
Given two files:
file1.txt
Alice
Bob
Charlie
file2.txt
Math
Science
English
paste file1.txt file2.txt
Output:
Alice	Math
Bob	Science
Charlie	English
2. Custom Delimiter
paste -d ',' file1.txt file2.txt
→ Uses , instead of a tab.
Output:
Alice,Math
Bob,Science
Charlie,English

3. Merge All Lines From One File Into One Line


paste -s file1.txt
-s (serial) merges all lines from a file into a single line
separated by tabs.
Output:
Alice	Bob	Charlie

4. Use Multiple Delimiters


paste -d ":|" file1.txt file2.txt
→ Cycles through : and | as delimiters between columns.
Summary

Option Description

paste f1 f2 Join files line-by-line with tabs

-d DELIM Use custom delimiter

-s Merge all lines of each file into one

Sort File Using sort


The sort command in UNIX is used to sort lines of text
files. You can sort alphabetically, numerically, by column,
and more.

Basic Syntax
sort [OPTIONS] filename

Common Examples
1. Alphabetical Sort
sort file.txt
→ Sorts lines in ascending (A–Z) order.

2. Reverse Sort
sort -r file.txt
→ Sorts lines in descending (Z–A) order.

3. Numeric Sort
sort -n numbers.txt
→ Sorts lines based on numeric value.

4. Sort by Specific Column (Field)


Given a file:
apple 20
banana 10
grape 15
sort -k2 file.txt
→ Sorts by the second field (column).

5. Sort by Column Numerically


sort -k2 -n file.txt
→ Sorts the second column numerically.

6. Remove Duplicates While Sorting


sort -u file.txt
→ Sorts and removes duplicate lines.

Useful Options

Option Description

-r Reverse order

-n Numeric sort

-k N Sort by Nth field/column

-u Unique lines only

-t Set custom delimiter (default is space)

-o Write result to output file

Example with Delimiter


If you have a CSV file:
sort -t ',' -k2 file.csv
→ Sorts based on the second column, using comma as
delimiter.
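The -o option listed above writes the result to a file, and unlike shell redirection it is safe to use the input file as the output:

sort -n numbers.txt -o numbers.txt    # sort the file numerically, in place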

Summary
Task Command Example

Basic sort sort file.txt

Reverse sort sort -r file.txt

Numeric sort sort -n file.txt

Sort by column sort -k3 file.txt

Remove duplicates sort -u file.txt

Custom delimiter sort sort -t',' -k2 file.csv

Finding Repetition and Non-Repetition Using uniq
The uniq command is used to filter out or identify
repeated lines in a sorted file or input. It detects
adjacent duplicate lines and can show unique or
repeated lines.

Important: uniq works on consecutive duplicate lines
So, usually you run sort first to group duplicates together.

Basic Syntax
uniq [OPTIONS] [input_file] [output_file]
Common Uses and Options

Command Description

uniq file.txt          Removes consecutive duplicate lines
sort file.txt | uniq   Removes all duplicates after sorting
uniq -c file.txt       Prefixes lines with count of occurrences
uniq -d file.txt       Prints only duplicate lines
uniq -u file.txt       Prints only unique (non-repeated) lines

Examples
Given file data.txt:
apple
apple
banana
banana
banana
cherry
apple
1. Remove consecutive duplicates:
uniq data.txt
Output:
apple
banana
cherry
apple

2. Remove all duplicates (sort first):


sort data.txt | uniq
Output:
apple
banana
cherry

3. Show counts of each line:


sort data.txt | uniq -c
Output:
3 apple
3 banana
1 cherry
4. Show only duplicates:
sort data.txt | uniq -d
Output:
apple
banana

5. Show only unique lines (no duplicates):


sort data.txt | uniq -u
Output:
cherry
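A common idiom combines sort and uniq -c to build a frequency count of the lines:

# Count occurrences and show the most frequent lines first
sort data.txt | uniq -c | sort -nr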

Summary

Option Purpose

-c Count occurrences

-d Show only duplicate lines

-u Show only unique lines


Manipulating Characters Using tr
The tr command in UNIX is used to translate, delete, or
squeeze characters from standard input and write the
result to standard output. It’s a handy tool for simple
character-level transformations.

Basic Syntax
tr [OPTION] SET1 [SET2]
• Translates characters in SET1 to corresponding
characters in SET2.
• Reads from standard input, writes to standard
output.

Common Uses of tr

Operation                          Command Example   Description
Translate lowercase to uppercase   tr 'a-z' 'A-Z'    Convert all lowercase letters to uppercase
Delete characters                  tr -d '0-9'       Delete all digits from input
Squeeze repeated chars             tr -s ' '         Replace multiple spaces with a single space
Replace characters                 tr 'abc' '123'    Replace 'a' with '1', 'b' with '2', 'c' with '3'
Complement (invert set)            tr -cd 'a-z\n'    Delete all except lowercase letters and newline

Examples
1. Convert lowercase to uppercase
echo "hello world" | tr 'a-z' 'A-Z'
Output:
HELLO WORLD

2. Delete digits from input


echo "abc123def456" | tr -d '0-9'
Output:
abcdef
3. Squeeze multiple spaces into one
echo "This    is    spaced" | tr -s ' '
Output:
This is spaced

4. Replace characters
echo "abcabc" | tr 'abc' '123'
Output:
123123

5. Remove all characters except letters and newline


echo "Hello 123! World" | tr -cd 'A-Za-z\n'
Output:
HelloWorld
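Another everyday use (the file names here are just examples) is stripping the carriage returns from DOS/Windows text files:

tr -d '\r' < dosfile.txt > unixfile.txt    # remove CR characters, keep LF line endings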

Summary

Option Meaning

-d Delete characters

-s Squeeze repeated characters

-c Complement set (invert selection)


Searching Patterns Using grep
The grep command is a powerful tool used to search for
specific patterns or strings in files or input text. It prints
lines that match the given pattern.

Basic Syntax
grep [OPTIONS] PATTERN [FILE...]
• PATTERN can be a simple string or a regular
expression.
• If FILE is omitted, grep reads from standard input.

Common Examples
1. Search for a simple word in a file
grep "error" logfile.txt
→ Prints lines containing the word error.

2. Case-insensitive search
grep -i "warning" logfile.txt
→ Matches Warning, WARNING, etc.

3. Show line numbers with matching lines


grep -n "fail" logfile.txt
Output example:
23:Failed to start service
57:fail to connect

4. Search recursively in directories


grep -r "TODO" /path/to/code/
→ Search all files in directory and subdirectories.

5. Search for exact word match


grep -w "cat" file.txt
→ Matches only the whole word cat, not catalog.

6. Invert match (show lines NOT containing pattern)


grep -v "debug" logfile.txt
→ Prints all lines that do not contain "debug".

7. Count number of matching lines


grep -c "error" logfile.txt
→ Outputs the number of lines containing "error".
Useful Options Summary

Option Description

-i Case-insensitive search

-n Show line numbers

-r Recursive search

-v Invert match (exclude matches)

-w Match whole word

-c Count matches

-l List filenames with matches

Example: Search for "fail" in all .log files, ignore


case, and show line numbers
grep -i -n "fail" *.log
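The -l option is useful when you only want the file names, not the matching lines (the *.conf pattern is just an example):

grep -l "timeout" *.conf    # print only the names of files containing "timeout"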

Basic Regular Expressions (BRE)


Basic Regular Expressions (BRE) are a way to describe
search patterns used by commands like grep, sed, and vi
in UNIX. They allow powerful pattern matching in text.

What is BRE?
• BRE defines a syntax for matching text patterns.
• Used by default in commands like grep.
• Allows matching characters, ranges, repetitions,
anchors, and grouping.

Common BRE Metacharacters and Usage


Symbol    Meaning                          Example     Matches
.         Any single character             a.b         aab, acb, a1b
*         Zero or more of preceding char   lo*l        ll, lol, lool, loool
^         Start of line anchor             ^Hello      Lines starting with Hello
$         End of line anchor               end$        Lines ending with end
[...]     Character class                  [aeiou]     Any vowel character
[^...]    Negated character class          [^0-9]      Any non-digit
\{m,n\}   Repetition (m to n times)        a\{2,4\}    aa, aaa, or aaaa
\         Escape special character         \.          Matches a literal dot .
\(...\)   Grouping (capture group)         \(ab\)*     Zero or more repetitions of ab

Example Patterns
• Find lines containing "cat" or "cot":
grep 'c[ao]t' file.txt
• Find lines starting with "Error":
grep '^Error' file.txt
• Find lines ending with ".txt":
grep '\.txt$' file.txt
• Find lines with 3 to 5 "a"s in a row:
grep 'a\{3,5\}' file.txt

Notes
• In BRE, some metacharacters like {}, (), + need to be
escaped with backslash (\).
• For extended regex (ERE), these do not require
escaping (use grep -E or egrep).
Summary

Task BRE Pattern Example Command

Match any character . grep 'a.b' file

Zero or more repeats * grep 'lo*l' file

Start of line ^ grep '^Start' file

End of line $ grep 'end$' file

Character class [abc] grep '[aeiou]' file

Repetition counts \{m,n\} grep 'a\{2,4\}' file

egrep
egrep stands for “extended grep” and is used to search
files for patterns using Extended Regular Expressions
(ERE).

What is egrep?
• It’s like grep -E.
• Supports more powerful regex features without
needing to escape special characters like {}, +, ?, |, ()
that you must escape in Basic Regular Expressions
(BRE).
• Useful for complex pattern matching.

Basic Syntax
egrep [OPTIONS] PATTERN [FILE...]

Features of Extended Regular Expressions (ERE)

Metacharacter   Meaning                       Example
?               Zero or one occurrence        colou?r matches color or colour
+               One or more occurrences       go+gle matches gogle, google, gooogle
{m,n}           Between m and n occurrences   a{2,4} matches aa, aaa, or aaaa
|               Alternation (OR)              cat|dog matches lines containing cat or dog
()              Grouping                      (ab)+ matches ab, abab, ababab

Examples
1. Search for lines containing “cat” or “dog”
egrep 'cat|dog' file.txt
2. Search for words with optional "u" (color or colour)
egrep 'colou?r' file.txt

3. Search lines with one or more 'o's in a row


egrep 'go+gle' file.txt

4. Grouping and repetition


egrep '(ab){2,3}' file.txt
Matches lines with abab or ababab.

Note
• Modern systems often alias egrep to grep -E.
• egrep is deprecated in some systems; prefer grep -E.

Summary

Task                      Command Example
Use alternation (OR)      egrep 'cat|dog' file
Use optional character    egrep 'colou?r' file
One or more repetitions   egrep 'go+gle' file
Grouping and counts       egrep '(ab){2,3}' file

grep -E
grep -E is the command to use Extended Regular
Expressions (ERE) with grep. It behaves like egrep (which
is often an alias for grep -E).

What is grep -E?


• Enables extended regex syntax for pattern matching.
• Allows use of special characters like +, ?, {}, |, ()
without escaping.
• Preferred over egrep since egrep is sometimes
deprecated.

Basic Syntax
grep -E [OPTIONS] PATTERN [FILE...]

Examples Using Extended Regex


1. Match lines containing “cat” or “dog”
grep -E 'cat|dog' file.txt
2. Match “color” or “colour” (optional 'u')
grep -E 'colou?r' file.txt

3. Match one or more 'o's in a row


grep -E 'go+gle' file.txt

4. Match repeated groups


grep -E '(ab){2,3}' file.txt
Matches abab or ababab.

Summary of Extended Regex Metacharacters Supported by grep -E

Symbol   Meaning
?        Zero or one occurrence
+        One or more occurrences
{m,n}    Between m and n repetitions
|        Alternation (OR)
()       Grouping
Why Use grep -E?
• More concise patterns without backslashes.
• Compatible with modern scripting.
• Same powerful matching as egrep.

MODULE-7 INTRODUCTION TO
SHELL SCRIPTS
Simple Shell Scripts
A shell script is a text file containing a sequence of shell
commands. These scripts are executed by the shell (like
bash, sh, etc.) and are used to automate repetitive tasks.

1. Hello World Script


#!/bin/bash
echo "Hello, World!"

Save it as hello.sh, then run:


chmod +x hello.sh
./hello.sh

2. Script to Add Two Numbers


#!/bin/bash
echo "Enter two numbers:"
read a b
sum=$((a + b))
echo "Sum is: $sum"

3. Check Even or Odd


#!/bin/bash
echo "Enter a number:"
read num
if [ $((num % 2)) -eq 0 ]; then
echo "Even"
else
echo "Odd"
fi

4. Loop to Print Numbers 1–5


#!/bin/bash
for i in {1..5}
do
echo "Number: $i"
done

5. Check If File Exists


#!/bin/bash
echo "Enter file name:"
read file
if [ -f "$file" ]; then
echo "File exists."
else
echo "File does not exist."
fi

6. Simple Menu Script


#!/bin/bash
echo "1. Show date"
echo "2. Show current directory"
echo "3. Show logged in users"
read -p "Choose an option: " choice

case $choice in
1) date ;;
2) pwd ;;
3) who ;;
*) echo "Invalid option" ;;
esac

Tips for Writing Shell Scripts

Tip Description

#!/bin/bash Shebang to use Bash shell

chmod +x script.sh Make the script executable

$1, $2, ... Positional parameters

read var Read user input into variable

$(command) Command substitution

$((expression)) Arithmetic operations

Interactive Shell Script


An interactive shell script communicates with the user
via input/output during execution. It prompts the user for
input and reacts based on the input provided.

Example: Interactive Calculator Script


#!/bin/bash

echo " Welcome to Simple Calculator"

# Prompt user for input


read -p "Enter first number: " num1
read -p "Enter second number: " num2

echo "Choose operation:"


echo "1) Add"
echo "2) Subtract"
echo "3) Multiply"
echo "4) Divide"

read -p "Enter your choice [1-4]: " choice

# Perform selected operation


case $choice in
1) result=$((num1 + num2))
echo "Result: $result"
;;
2) result=$((num1 - num2))
echo "Result: $result"
;;
3) result=$((num1 * num2))
echo "Result: $result"
;;
4)
if [ "$num2" -eq 0 ]; then
echo " Cannot divide by zero"
else
result=$((num1 / num2))
echo "Result: $result"
fi
;;

*) echo " Invalid choice" ;;


esac

How to Run This Script


1. Save it as calc.sh.
2. Give it execute permissions:
chmod +x calc.sh
3. Run it:
./calc.sh

Features Used

Feature Description

read Takes user input

case Selects action based on user choice

if Handles conditions like divide by zero

echo Outputs messages to the user

Using Command Line Arguments


Shell scripts can accept command line arguments,
allowing users to pass values when running the script.
These arguments are stored in positional parameters like
$1, $2, ..., $@.

Example: Simple Script with Arguments


#!/bin/bash

echo "First argument: $1"


echo "Second argument: $2"
echo "All arguments: $@"
echo "Total number of arguments: $#"
Save it as args.sh, then run:
./args.sh Hello World

Output:
First argument: Hello
Second argument: World
All arguments: Hello World
Total number of arguments: 2

Special Variables for Arguments

Variable Meaning

$1, $2, ... First, second, etc., argument

$# Number of arguments

$@ All arguments (individually quoted)

$* All arguments as a single string

$0 Script name

Example: Add Two Numbers from CLI


#!/bin/bash

if [ $# -ne 2 ]; then
echo "Usage: $0 num1 num2"
exit 1
fi

sum=$(( $1 + $2 ))
echo "Sum: $sum"
Run:
./add.sh 10 20
Output:
Sum: 30

Loop Through All Arguments


#!/bin/bash

echo "Listing all arguments:"


for arg in "$@"; do
echo "$arg"
done
Summary
• Use $1, $2, ... to access each argument.
• Use $# to count how many were given.
• Use $@ to loop through all arguments safely.
• Always check argument count for safety and error
messages.

Logical Operators (&&, ||)


In shell scripting (especially in bash), logical operators are
used to control the flow of execution based on the
success or failure of commands.

1. && – AND Operator


• Runs the second command only if the first
command succeeds (exit status = 0).
• Equivalent to "do this, and if successful, do that".
Example:
mkdir newdir && cd newdir

Explanation:
• cd newdir will only execute if mkdir newdir is
successful.

2. || – OR Operator
• Runs the second command only if the first
command fails (exit status ≠ 0).
• Equivalent to "try this, or if it fails, try that".

Example:
cd myfolder || echo "Folder does not exist"

Explanation:
• If cd myfolder fails, then it will print "Folder does not
exist".

3. Combined Use
mkdir testdir && cd testdir || echo "Failed to create and
enter directory"
• If mkdir succeeds and cd fails, it shows error
message.
• Combines both conditions.

4. In if Statements
if [ -f myfile ] && [ -r myfile ]; then
echo "File exists and is readable"
fi
• Checks both conditions before executing the block.

Summary Table

Operator   Meaning       Runs When...
&&         Logical AND   First command succeeds
||         Logical OR    First command fails

Condition Checking (if, case)


In shell scripting, conditional statements allow your
script to make decisions. Two common constructs are:

1. if Statement
Used to execute commands based on conditions.

Syntax:
if [ condition ]; then
commands
elif [ another_condition ]; then
other_commands
else
default_commands
fi
Use [ ... ] or [[ ... ]] for conditions.

Example 1: Check if a number is positive


#!/bin/bash
read -p "Enter a number: " num

if [ $num -gt 0 ]; then


echo "Positive number"
elif [ $num -lt 0 ]; then
echo "Negative number"
else
echo "Zero"
fi

Example 2: Check if file exists


#!/bin/bash
read -p "Enter filename: " file
if [ -f "$file" ]; then
echo "File exists"
else
echo "File not found"
fi

2. case Statement
Used to match a value against multiple patterns. Good
for menu-style options.
Syntax:
case $variable in
pattern1)
commands ;;
pattern2)
commands ;;
*)
default_commands ;;
esac

Example: Simple Menu


#!/bin/bash
echo "Choose an option: "
echo "1. Date"
echo "2. List files"
echo "3. Who's logged in"
read choice

case $choice in
1) date ;;
2) ls ;;
3) who ;;
*) echo "Invalid option" ;;
esac

Summary
Statement Use for Example

if Evaluate conditions [ $a -gt 5 ]

case Match values/patterns case $x in pattern)


Expression Evaluation in UNIX (test, [])
In UNIX shell scripting, you can evaluate conditions using:
• test
• [ ... ] (a synonym for test)
• [[ ... ]] (enhanced version in bash/ksh)

1. test and [...] Syntax


Both are functionally the same:
test EXPRESSION
# or
[ EXPRESSION ]
Important: Leave spaces around the brackets!

2. Types of Expressions

File Conditions

Expression Meaning

-f file True if file exists and is a regular file

-d file True if it's a directory

-e file True if file exists



-r file True if file is readable

-w file True if writable

-x file True if executable

Example:
if [ -f myfile ]; then
echo "File exists"
fi

String Conditions

Expression Meaning

-z string True if string is empty

-n string True if string is not empty

string1 = string2 True if strings are equal

string1 != string2 True if not equal

Example:
if [ "$name" = "admin" ]; then
echo "Welcome, admin"
fi
Integer Comparisons

Expression Meaning

a -eq b Equal

a -ne b Not equal

a -gt b Greater than

a -lt b Less than

a -ge b Greater or equal

a -le b Less or equal

Example:
if [ $x -gt 10 ]; then
echo "x is greater than 10"
fi

3. Enhanced Bash Expression: [[ ... ]]


• Allows more advanced pattern matching and does
not require quoting variables.
• Safer for strings with spaces.

Example:
if [[ $name == a* ]]; then
echo "Name starts with 'a'"
fi

Summary

Tool Purpose

test Evaluate expressions

[ ... ] Same as test, more common

[[ ... ]] Enhanced test in bash

Computation Using expr


The expr command in UNIX is used for integer arithmetic
and string operations in shell scripts.

1. Basic Arithmetic with expr

Addition
expr 5 + 3
# Output: 8

Subtraction
expr 10 - 4
# Output: 6

Multiplication (escape *)
expr 6 \* 2
# Output: 12

Division
expr 8 / 2
# Output: 4
Modulus (remainder)
expr 9 % 4
# Output: 1

2. Using with Variables


a=10
b=5

sum=$(expr $a + $b)
echo "Sum is: $sum"
Use $(...) or backticks `expr ...` to assign the result to
a variable.

3. String Length and Operations

Length of a string:
expr length "hello"
# Output: 5

Substring:
expr substr "universe" 2 3
# Output: "niv"

4. Comparison Operators
expr 5 = 5 # Returns 1 (true)
expr 5 != 3 # Returns 1
expr 5 \> 3 # Returns 1 (use \> to escape '>')
expr 2 \< 1 # Returns 0 (false)

Tips
• Always use spaces between operands and operators.
• Escape special characters (*, <, >) with \.
• expr works only with integers, not floating point
numbers.

Alternative: $(()) (Preferred in Bash)


a=5
b=3
echo $((a + b)) # Output: 8

Using expr for String Operations


The expr command can perform basic string operations
such as length calculation, substring extraction, and string
comparison.

1. String Length
expr length "hello"
# Output: 5
Or using a variable:
str="unixshell"
expr length "$str"

2. Substring Extraction
expr substr "universe" 2 3
# Output: "niv"
Syntax:
expr substr STRING POSITION LENGTH

Note: Position starts at 1 (not 0).


3. Index of Character
Find the position of the first occurrence of a character:
expr index "hello" l
# Output: 3
You can also search for multiple characters:
expr index "abcdef" bd
# Output: 2 (first match found is 'b' at position 2)

4. String Comparison
expr "apple" = "apple"
# Output: 1 (true)

expr "apple" = "orange"


# Output: 0 (false)

expr "a" \< "b"


# Output: 1 (true) — don't forget to escape `<` and `>`

5. Concatenation (Not directly supported)


expr does not directly support string concatenation, but
you can do it like this in shell:
str1="Hello"
str2="World"
echo "$str1$str2"
# Output: HelloWorld

Summary

Operation Example Output

Length expr length "abc" 3

Substring expr substr "hello" 2 3 "ell"

Index expr index "banana" a 2

Compare (equal) expr "a" = "a" 1

Compare (not equal) expr "a" != "b" 1

Loops (Bash): for, while


Loops are used in shell scripts to repeat actions multiple
times. Bash supports two primary loops:

1. for Loop

Syntax 1: Loop over a list


for item in val1 val2 val3
do
echo "Item: $item"
done
Example:
for name in Alice Bob Charlie
do
echo "Hello, $name!"
done

Syntax 2: C-style loop (Bash only)


for ((i=1; i<=5; i++))
do
echo "Number: $i"
done

2. while Loop
Runs as long as a condition is true.
count=1
while [ $count -le 5 ]
do
echo "Count: $count"
count=$((count + 1))
done

3. until Loop (opposite of while)


Executes until a condition becomes true.
x=1
until [ $x -gt 3 ]
do
echo "x is $x"
x=$((x + 1))
done

4. Looping Over Command Output


for file in $(ls *.txt)
do
echo "Processing file: $file"
done

Summary
Loop Type Use Case

for Iterate over list or range

while Repeat while a condition is true

until Repeat until a condition is true

Use of Positional Parameters


Positional parameters allow shell scripts to receive input
values from the command line, making scripts flexible
and dynamic.

What Are Positional Parameters?


They are special variables:
• $0, $1, $2, $3, ..., $9, ${10}, ${11}, etc.
Parameter Description

$0 Script name

$1–$9 First to ninth arguments

${10}+ Tenth and later arguments

$# Number of arguments

$@ All arguments (individually quoted)



$* All arguments (as a single string)

Example Script: greet.sh


#!/bin/bash

echo "Script Name: $0"


echo "Hello, $1!"
echo "Your role is: $2"
echo "Total arguments: $#"
echo "All arguments: $@"
Run it:
./greet.sh Alice Developer
Output:
Script Name: ./greet.sh
Hello, Alice!
Your role is: Developer
Total arguments: 2
All arguments: Alice Developer
Looping Over All Parameters
for arg in "$@"
do
echo "Arg: $arg"
done

Common Use Cases


• File or directory names
• User input values
• Command-line tools (e.g., scripts like ./backup.sh
file1.txt file2.txt)
• Automation scripts that take dynamic input

Tips
• Always quote parameters like "$1" to avoid issues
with spaces.
• Use $# to validate input count.
• Use shift to move parameters (helpful when parsing multiple inputs); a short sketch follows.
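A short sketch of shift, which discards $1 and renumbers the remaining parameters:

#!/bin/bash
# Process arguments one at a time using shift
while [ $# -gt 0 ]; do
    echo "Processing: $1"
    shift    # drop $1; $2 becomes $1 and $# decreases by one
done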
SYSTEM ADMINISTRATION
A UNIX System Administrator plays a critical role in
managing and maintaining UNIX-based systems. Their
responsibilities ensure that the systems are running
efficiently, securely, and reliably. Here are the essential
duties of a UNIX system administrator:

1. System Installation and Configuration


• Install and configure UNIX operating systems (e.g.,
AIX, Solaris, HP-UX, Linux).
• Set up hardware and software environments.
• Configure system services like DNS, DHCP, NFS, FTP,
SSH, etc.

2. System Monitoring and Performance Tuning


• Monitor system performance (CPU, memory, disk
usage).
• Use tools like top, vmstat, iostat, netstat, etc.
• Identify and resolve performance bottlenecks.

3. User and Access Management


• Create, modify, and delete user accounts and groups.
• Manage file permissions, ownership, and access
control (e.g., chmod, chown, umask).
• Implement authentication mechanisms and
password policies.

4. Filesystem and Storage Management


• Mount and manage file systems (e.g., UFS, ext4, ZFS).
• Monitor disk usage and quotas.
• Perform backups and restore operations using tools
like tar, rsync, dump, cron.

5. Security Management
• Apply patches and updates to the OS and software.
• Configure firewalls and access control lists (ACLs).
• Monitor for security breaches and maintain system
integrity.
• Implement security tools (e.g., SELinux, auditd).

6. Automation and Scripting


• Write shell scripts (Bash, Ksh, etc.) to automate
repetitive tasks.
• Schedule jobs using cron or at.
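For example, a crontab entry (edit with crontab -e; the script path is hypothetical) that runs a cleanup script every day at 2:30 AM:

# minute hour day-of-month month day-of-week command
30 2 * * * /home/admin/cleanup.sh >> /var/log/cleanup.log 2>&1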
7. System Backup and Recovery
• Implement and monitor backup strategies.
• Test restore procedures regularly.
• Handle disaster recovery planning and execution.

8. Log Management and Troubleshooting


• Monitor and analyze system logs (/var/log/, syslog,
journalctl).
• Diagnose hardware and software issues.
• Respond to system failures and outages.

9. Software and Patch Management


• Install, update, and manage system software and
third-party applications.
• Track patch levels and apply updates using tools like
yum, apt, pkgadd, etc.

10. Documentation and Compliance


• Maintain detailed documentation of system
configurations, procedures, and changes.
• Ensure compliance with IT policies and standards
(e.g., data retention, security audits).

Starting Up and Shutting Down UNIX Systems

Starting up and shutting down UNIX systems are fundamental tasks in UNIX system
administration. These processes must be handled
carefully to maintain system integrity, ensure service
availability, and prevent data loss. Here's an overview of
what’s involved in starting (booting) and shutting down a
UNIX system from an administrative perspective:

System Startup (Boot Process)


1. BIOS/UEFI and Bootloader
• The system firmware (BIOS or UEFI) initializes
hardware.
• The bootloader (e.g., GRUB, LILO) is loaded from the
Master Boot Record (MBR) or EFI partition.
• The bootloader loads the UNIX kernel into memory.
2. Kernel Initialization
• The kernel initializes hardware, mounts the root
filesystem, and starts the init process (or systemd on
modern systems).
3. Init/Systemd Execution
• Based on the init system:
o SysVinit: Reads /etc/inittab and executes scripts
in /etc/rc.d/ or /etc/init.d/.
o Systemd: Uses unit files stored in
/etc/systemd/system/ or /lib/systemd/system/.
4. Runlevels / Targets
• SysVinit: Boots into a predefined runlevel (e.g., 3 =
multi-user, 5 = GUI, 1 = single-user).
• Systemd: Boots into a "target" (e.g., multi-user.target,
graphical.target).
5. Service Startup
• Network, login services, database servers, etc., are
started automatically as defined in the boot scripts or
unit files.
6. User Login
• The system reaches the login prompt (console or
GUI).
• Users can log in via shell or desktop.

System Shutdown
1. Notify Users (Optional but Recommended)
• Use wall to broadcast shutdown notices to logged-in
users.
• wall "System going down for maintenance in 10
minutes"
2. Commands to Shut Down Safely
• Common commands:
• shutdown -h now # Halt immediately
• shutdown -r +5 "Rebooting in 5 minutes" # Reboot
with delay
• init 0 # Shutdown
• init 6 # Reboot
• poweroff # Power off
• reboot # Reboot
• halt # Halt system
3. Graceful Termination
• All users are logged off.
• System processes and services are stopped.
• Filesystems are unmounted cleanly.
• Logs are written and sync'd.

Notes for Admins


• Check for Active Users/Processes Before
Shutdown:
• who
• ps aux
• Prevent Shutdown Issues:
o Monitor for unmounted or read-only filesystems.
o Ensure no critical updates or backups are
running.
• Use Runlevels/Targets for Maintenance:
o Switch to single-user mode for maintenance:
o init 1
o # or
o systemctl isolate rescue.target

Brief Idea About User Account Management (username, password, home directory, group ID, disk quota, terminal, etc.)
Here’s a brief overview of user account management in
UNIX, covering the key components system administrators
need to handle:

1. Username
• A unique identifier for each user.
• Created using:
• useradd <username>
• Stored in /etc/passwd.

2. Password
• Secures user access to the system.
• Set or changed using:
• passwd <username>
• Encrypted password stored in /etc/shadow.

3. Home Directory
• Default location where a user is placed after login.
• Stores personal files and settings (e.g., .bashrc,
.profile).
• Typically located at /home/<username>.
• Set during user creation:
• useradd -d /home/username username

4. Group ID (GID)
• Users belong to one primary group and can be added
to supplementary groups.
• Group information is stored in /etc/group.
• Set during user creation:
• useradd -g <primary_group> -G <other_groups>
<username>

5. Disk Quota
• Limits the amount of disk space a user can consume.
• Useful for multi-user systems to prevent abuse.
• Enable quotas on the filesystem, then assign limits:
• edquota <username>
• quota -u <username> # View usage

6. Terminal (Shell)
• Defines the command-line interface the user gets.
• Common shells: /bin/bash, /bin/sh, /bin/ksh,
/bin/zsh.
• Set during user creation:
• useradd -s /bin/bash <username>
• Can be changed later with chsh.
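Putting these options together, a typical account creation might look like this (the user name, groups, and paths are examples; the groups must already exist):

useradd -m -d /home/alice -g staff -G developers -s /bin/bash alice
passwd alice         # set the initial password
chage -M 90 alice    # optional: require a password change every 90 days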
Important Files Involved

File          Purpose
/etc/passwd   Basic user info (username, UID, GID, home dir, shell)
/etc/shadow   Encrypted passwords and aging info
/etc/group    Group definitions
/etc/skel/    Template files copied to new home dirs
