Linux Important
Type of User                                      Example    User ID                 Group ID                Home Directory       Default Shell
Super User                                        root       0                       0                       /root                /bin/bash
Normal users / Sudo user with admin privileges               Same as normal users    Same as normal users    /home/<user name>    /bin/bash
19. What are the uses of the .bash_logout, .bash_profile and .bashrc files?
.bash_logout : is the user's logout script. It is executed whenever the user logs out.
.bash_profile : is the user's login startup script. It is executed whenever the user logs in. It
contains the user's environment variables.
.bashrc : this file is used to create the user's custom commands (aliases) and to specify the umask value for
that user only.
21. What is the command to check how many groups a user belongs to?
# groups <user name>
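A small worked example, assuming a hypothetical user 'alice' and a hypothetical secondary group 'developers':
# groupadd developers (to create the new group)
# useradd alice (to create the user; a primary group 'alice' is created automatically)
# usermod -aG developers alice (to append 'developers' as a secondary group for alice)
# groups alice (expected output: alice : alice developers)
# id alice (to see the uid, gid and all group memberships of alice)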
33. How can a normal user set his own password when he first logs in to the system?
Whenever a user is created and tries to log in to the system, it will ask for the password. If
the root user has not assigned a password to that user, then that normal user can set the password on his own
using the following commands.
# useradd <user name> (to create the user)
# passwd -S <user name> (to see the status of the password of that user. If the root user has not
assigned a password then the password status is shown as locked)
# passwd -d <user name> (then delete the password entry for that user)
# chage -d 0 <user name> (it expires the password immediately, so a new one must be set at next login)
# su - <user name> (try to switch to that user; it will then display the following
prompts)
New password : (type a new password for that user)
Retype new password : (retype the password again)
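A quick way to verify the forced change took effect (assuming a hypothetical user 'alice'):
# chage -l alice (to list the password ageing information; it shows that the password must be changed)
# passwd -S alice (to confirm the password status once the user has set a new password)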
The other useful commands :
# w (this command gives information about the logged-in users, like how many users are currently
logged in, with full details)
# who (to see the users who are currently logged in and on which terminal they logged in)
# last (to see the list of user logins and logouts since the /var/log/wtmp file was
created)
# lastb (to see the list of users with bad/failed login attempts)
# last reboot (to see all reboots since the log file was created)
# uptime (to see how long the system has been running, how many users are logged in
and the load average)
* The load average is shown for the last 1 minute : 5 minutes : 15 minutes.
# df (to see the mounted partitions, their mount points and the amount of disk space)
# du (to see the disk usage of each file in bytes)
# uname -r (gives the current kernel version)
# last -x (it shows the last shutdown date and time along with run level changes)
# last -x | grep shutdown (only the shutdown times are shown, i.e., grep filters the 'last -x' output)
* grep : it is used to search for a word or sentence inside a file.
* find : it is used to search for a file or directory anywhere in the system.
# cat /etc/shells or # chsh -l (to see the shells that are supported by Linux)
/bin/sh -----> default shell for Unix
/bin/bash -----> default shell for Linux
/sbin/nologin -----> shell for users who are not allowed to log in
/bin/tcsh -----> enhanced C shell (tcsh)
/bin/csh -----> C shell
# echo $SHELL (to see the current shell)
# chsh <user name> (to change the user's shell)
Changing shell for <user name> :
New shell : <type new shell for example /bin/sh to change the current shell>
New shell changed (the change takes effect at the next login)
# date +%R (to display the time only)
# date +%x (to display the date only)
# history (to see the history of the commands)
#history -c (to clear the history)
# history -r (to recover the history)
* .bash_history is the hidden file to store the history of the user commands. By default history size is
1000.
# echo $HISTSIZE (to check the current history size)
# export HISTSIZE=500 (to change the current history size to 500 temporarily)
# export HISTTIMEFORMAT="%D %T " (to display the date and time of each command in the history,
temporarily)
# vim /etc/bashrc (open this file, go to the last line and type as follows to make the history
size and the date & time format permanent)
HISTSIZE=1000
HISTTIMEFORMAT=' %D %T '
(save and exit the file, then apply the changes with the # source /etc/bashrc command)
# cd ~<user name> (to go to that user's home directory)
# whatis <command> (to see the short description of that command)
# whereis <command> (to see the location of that command and the location of
its documentation)
# reset (to refresh the terminal)
# whoami (to see the current user name)
# who am i (to see the current user with full details like login time and terminal)
# passwd <user name> (to change the password of the user)
# id (to see the current user name, user id, group name and group id, .... etc.,)
# id <user name> (to see the specified user name, user id, group name and group id)
# su (to switch to root user without root user home directory)
# su - (to switch to root user with root user home directory)
# su <user name> (to switch to the specified user without his home directory)
# su - <user name> (to switch to the specified user with his home directory)
# lspci (to list all the PCI slots present in the system)
# du -sh /etc/ (to see the size of the /etc on the disk in KBs or MBs)
# ls -l (to see the long listing of the files and directories)
d rwx rwx rwx . 2 root root 6 Dec 17 18:00 File name
d -----> type of file (d = directory, - = regular file)
rwx -----> owner permissions
rwx -----> group permissions
rwx -----> others permissions
. -----> No ACL permissions applied
2 -----> link count
root ----> owner of the file
root ----> group ownership
6 -----> size of the file (in bytes)
Dec 17 18:00 -----> date and time the file was created or modified
File name -----> name of that file
# ls -ld <directory name> (to see the long listing of the directories)
# stat <file name/directory name> (to see the statistics of the file or directory)
35. What are the permission types available in Linux and their numeric representations?
There are mainly three types of permissions available in Linux and those are,
read ----- r ----- 4
write ----- w ----- 2
execute ----- x ----- 1
null (no permission) ----- - ----- 0
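Combining these numeric values gives the usual chmod octal notation; a small illustrative example, assuming a hypothetical file test.txt:
# chmod 754 test.txt (owner: rwx = 4+2+1 = 7, group: r-x = 4+1 = 5, others: r-- = 4)
# ls -l test.txt (the permissions column should now show -rwxr-xr--)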
(i) In an organization the whole work is divided into departments for easy maintenance and easy
administration.
(ii) Each department can be represented as a group, and that group contains many users doing
different work.
(iii) So, if we create one group and assign it to all the users of that department, then we can
easily identify which user belongs to which group.
(iv) We can share files and directories with that group, let it execute programs and give permissions
to that group. So, each user of that group can easily reach those shared directories and can easily
access, execute or even write in those shared files and directories.
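A minimal sketch of such a shared department directory, assuming a hypothetical group 'sales' and existing users 'u1' and 'u2':
# groupadd sales (to create the department group)
# usermod -aG sales u1 (to add the user u1 to the sales group)
# usermod -aG sales u2 (to add the user u2 to the sales group)
# mkdir -p /shared/sales (to create the shared directory)
# chgrp sales /shared/sales (to give group ownership of the directory to sales)
# chmod 2770 /shared/sales (rwx for owner and group, setgid so new files inherit the sales group)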
RHEL - 7 :
(i) Restart the system.
(ii) Using the arrow keys select the first line and press 'e' to edit.
(iii) Go to the line starting with 'linux16', press the End key or Ctrl + e to go to the end of the line and give one space.
(iv) Then type rd.break console=tty1 selinux=0
(v) Then press Ctrl + x to boot the system with these parameters (into the emergency shell).
(vi) After booting, the switch_root :/# prompt appears and then type as follows.
(vii) # mount -o remount,rw /sysroot and press Enter, then type as follows.
(viii) # chroot /sysroot and press Enter.
(ix) Then the sh-4.2# prompt appears; type
(x) sh-4.2# passwd root
New password : XXXXXX
Retype new password : XXXXXX
(xi) sh-4.2# exit
(xii) switch_root :/# exit
(xiii) Then the system starts and the desktop appears.
(i) A profile is a file containing settings for a user's working environment, i.e., we can set the user's home
directory, login shell, path, etc.
Profiles are of two types.
(a) Global profile
(b) Local profile
Global profile :
(1) Only the root user can set it and it is applicable to all users.
(2) Only global parameters can be entered in this profile.
(3) The location of the global profile is /etc/bashrc
Local profile :
(1) Every user has his/her own profile.
(2) The settings entered in this profile apply only to that user.
(3) The location of the profile is .bash_profile (a hidden file) in that particular user's home directory.
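A small illustrative entry in a user's local profile (the PATH directory, editor and umask values below are just examples):
# vim ~/.bash_profile
export PATH=$PATH:$HOME/bin (add the user's private bin directory to the search path)
export EDITOR=vim (set the preferred editor)
umask 027 (default permission mask for new files of this user)
(save and exit, then apply the changes with the # source ~/.bash_profile command)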
54. How to find the users who are logged in on a file system and how to kill their processes?
# fuser -cu <mount point> (to see which users are using that file system)
# fuser -ck <mount point> (to kill the processes of the users using that file system)
59. What is the syntax to assign read and write permissions to a particular user, group and other?
# setfacl -m u:<user name>:<permissions> <file or directory>
# setfacl -m g:<group name>:<permissions> <file or directory>
# setfacl -m o:<permissions> <file or directory>
60. What is the syntax to assign read and write permissions to a particular user, group and other at a
time?
# setfacl -m u:<user name>:<permissions>,g:<group name>:<permissions>,o:<permissions> <file or directory>
Useful commands :
# setfacl -x u:<user name> <file or directory name> (to remove the ACL permissions of the user)
# setfacl -x g:<group name> <file or directory name> (to remove the ACL permissions of the group)
# setfacl -b <file or directory> (to remove all the ACL permissions on that file or directory)
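A small worked example, assuming a hypothetical directory /data/reports, a user 'alice' and a group 'hr':
# setfacl -m u:alice:rw- /data/reports (to give alice read and write access through an ACL)
# setfacl -m g:hr:r-- /data/reports (to give the hr group read-only access through an ACL)
# getfacl /data/reports (to verify the ACL entries; the ls -l listing now shows a '+' after the permissions)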
61. How will you lock a user, if he enters wrong password 3 times?
The pam_tally.so module maintains a count of attempted accesses; it can reset the count on success and can deny
access if too many attempts fail. Edit the /etc/pam.d/system-auth file and enter:
(i) # vi /etc/pam.d/system-auth
Modify as follows:
auth required pam_tally.so no_magic_root
account required pam_tally.so deny=3 no_magic_root lock_time=180
Where,
deny=3 : Deny access if tally for this user exceeds 3 times.
lock_time=180 : Always deny for 180 seconds after a failed attempt. There is also an unlock_time=n option, which allows
access n seconds after a failed attempt. If this option is used the user will be locked out for the specified
amount of time after he has exceeded his maximum allowed attempts. Otherwise the account is locked until the
lock is removed by a manual intervention of the system administrator.
magic_root : If the module is invoked by a user with uid=0 the counter is not incremented. The sysadmin
should use this for user-launched services, like su; otherwise this argument should be omitted.
no_magic_root : Avoids root account locking if the module is invoked by a user with uid=0.
Save and close the file.
(i) Open the /etc/fstab file by the # vim /etc/fstab command, go to the mount point entry
line and change it to: /dev/sdb1 /mnt/prod ext4 defaults,usrquota 0 0
(save and exit this file)
(ii) Update the quota option on the mount point by the # mount -o remount,usrquota <mount point>
command.
(iii) Create the user quota database by the # quotacheck -cu <mount point> command (where -c
means create the quota database and -u means user quota).
(iv) Check whether the quota is applied or not by # mount command.
(v) Enable the quota by # quotaon <mount point> command.
(vi) Apply the user quota for a user by # edquota -u <user name><mount point>
command.
File system    blocks    soft    hard    inodes    soft    hard
/dev/sdb1      0         0       0       0         0       0
blocks -----> No. of blocks used (already)
soft -----> Warning limit
hard -----> Maximum limit
0 -----> Unlimited usage
inodes -----> No. of files created (already)
* If soft=10 and hard=15, then after crossing the soft limit a warning message will be
displayed, and if the hard limit is also crossed then the user is not allowed to create any more files.
(save and exit the above quota editor)
# edquota -p <user name 1><user name 2> (to apply user name 1 quotas to user name 2, ie., no
need to edit the quota editor for user name 2)
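To verify the applied limits afterwards (both are standard quota tools; /mnt/prod is the mount point assumed above):
# quota -u <user name> (to show the quota usage and limits of one user)
# repquota /mnt/prod (to report the quota usage of all users on that file system)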
(ii) Update the quota options on the mount point by the # mount -o remount,usrquota,grpquota <mount point> command.
(iii) Create the user and group quota database by the # quotacheck -cug <mount point> command (where -c means
create the quota database, -u means user quota and -g means group quota).
(iv) Check whether the quota is applied or not by the # mount command.
(v) Enable the quota by the # quotaon <mount point> command.
(vi) Apply the group quota for a group by the # edquota -g <group name> <mount point> command.
File system    blocks    soft    hard    inodes    soft    hard
/dev/sdb1      0         0       0       0         0       0
1. What is partition?
A partition is a contiguous set of blocks on a drive that is treated as an independent disk.
2. What is partitioning?
Partitioning means to divide a single hard drive into many logical drives.
(p - primary or e - extended) : p (type p for a primary partition or e for an extended partition)
First cylinder : (press Enter for the default first cylinder)
Last cylinder : +<size in KB/MB/GB/TB>
Command (m for help) : t (type t to change the partition id)
(for example: 8e for Linux LVM, 82 for Linux swap and 83 for a normal Linux partition)
Command (m for help) : w (type w to save the changes to the disk)
# partprobe, # partx -a /dev/sdc or # kpartx -a /dev/sdc (to update the partitioning information in the partition table)
14. What are differences between the ext2, ext3, ext4 and xfs file systems?
S.No.   Ext2                              Ext3                              Ext4
1.      Stands for Second Extended        Stands for Third Extended         Stands for Fourth Extended
        file system.                      file system.                      file system.
2.      Does not have the journaling      Supports the journaling           Supports the journaling
        feature.                          feature.                          feature.
3.      Max. file size can be from        Max. file size can be from        Max. file size can be from
        16 GB to 2 TB.                    16 GB to 2 TB.                    16 GB to 16 TB.
4.      Max. file system size can be      Max. file system size can be      Max. file system size can be
        from 2 TB to 32 TB.               from 2 TB to 32 TB.               from 2 TB to 1 EB.
* 1 EB = 1024 petabytes.
/etc/fstab ----> keeps the information about the permanent mount points. If we want to
make our mount point permanent then make an entry for the mount point in this file.
The /etc/fstab entry fields are:
(1) device name (2) mount point (3) file system type (4) mount options (5) dump (6) fsck order
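A sample entry mapping those six fields (the device and mount point here are only illustrative):
/dev/sdb1 /data ext4 defaults 0 0
(field 1: device, field 2: mount point, field 3: file system type, field 4: mount options, field 5: dump flag, field 6: fsck pass number)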
16. The partitions are not mounting even though there are entries in /etc/fstab. How to solve this
problem?
First check whether there are any wrong entries in the /etc/fstab file. If all are OK then unmount all the partitions by
executing the below command,
# umount -a
Then mount all the partitions again by executing the below command,
# mount -a
17. When trying to unmount it is not unmounting, how to troubleshoot this?
Sometimes a directory gives an error while unmounting because,
(i) you are inside the same directory and trying to unmount it; check with the # pwd command.
(ii) some users are present in or accessing the same directory and using the contents in it; check this with
# fuser -cu <device name> (to check the users who are accessing that partition)
# lsof <device name> (to check the files which are open in that mount point)
# fuser -ck <opened file name with path> (to kill the processes using those opened files)
Now we can unmount that partition using # umount <mount point>
27. How to check the integrity or consistency of a file system?
With the # fsck <device or partition name> command we can check the integrity of the file system.
But before running the fsck command, first unmount that partition and then run the fsck command.
28. What is fsck check or what are the phases of the fsck?
(a) First it checks blocks and sizes of the file system
(b) Second it checks file system path names
(c) Third it checks file system connectivity
(d) Fourth it checks file system reference counts (nothing but inode numbers)
(e) Finally it checks file system occupied cylindrical groups
29. Why should the file system be unmounted before running the fsck command?
If we run fsck on mounted file systems, it can leave the file system in an unusable state and may even delete
data. So, before running the fsck command the file system should be unmounted.
31. How to extend the root file system which is not on LVM?
By using the # gparted tool we can extend the root partition; otherwise we cannot extend
file systems which are not on LVM.
34. How to check which file systems occupy more space and see the top 10?
# df -h | sort -k5 -rn | head -10 (sort the mounted file systems by the Use% column and show the top 10)
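An alternative sketch using du, which often answers the question more directly (the starting path / is just an example):
# du -sh /* 2>/dev/null | sort -rh | head -10 (to list the top 10 directories under / by size)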
39. How to find how many disks are attached to the system?
# fdisk -l (to see how many disks are attached to the system)
42. How to create a file system with a user-specified reserved space percentage?
# mkfs.ext4 -m <no.> <partition name> (to format the partition with <no.>% of its space kept as
reserved blocks)
Whenever we format an ext file system, by default it reserves 5% of the partition space as reserved blocks (for the root user).
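The reserved percentage can also be changed after formatting; a short sketch assuming the partition /dev/sdb1:
# tune2fs -m 1 /dev/sdb1 (to reduce the reserved block percentage to 1%)
# tune2fs -l /dev/sdb1 | grep -i 'reserved block count' (to verify the new reserved block count)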
Important Commands :
# fsck <partition name> (to check the consistency of the file system)
# e2fsck <partition name> (to check the consistency of the file system in interactive mode)
# e2fsck -p <partition name> (to check the consistency of the file system without interact mode)
# mke2fs -n <partition name> (to see the superblock information)
# mke2fs -t <file system type><partition name> (to format the partition in the specified filesys type)
# mke2fs <partition name> (to format the partition in default ext2 file system type)
# blockdev --getbsz /dev/sdb1 (to check the block size of the /dev/sdb1 file system)
# fsck <device or partition name> (to check and repair the file system)
Note: Before running this command first unmount that partition then run fsck command.
# umount -a (to unmount all the file systems except ( / ) root file system)
# mount -a (to mount all the file systems which are having entries in /etc/fstab file)
# fsck -A (to run fsck on all file systems)
# fsck -AR -y (to run fsck without asking any questions)
# fsck -AR -t ext3 -y (to run fsck on all ext3 file systems)
# fsck -AR -t noext3 -y (to run fsck on all file systems except ext3 file systems)
# fsck -n /dev/sdb1 (to see the /dev/sdb1 file system report without running fsck)
# tune2fs -l /dev/sdb1 (to check whether the journaling is there or not)
# tune2fs -j /dev/sdb1 (to convert ext2 file system to ext3 file system)
# tune2fs -l /dev/sdb1 (to check whether the journaling is added or not)
# tune2fs -O ^has_journal /dev/sdb1 (to convert ext3 file system to ext2 file system)
# tune2fs -O dir_index,has_journal,uninit_bg /dev/sdb1 (to convert an ext2 file system to an ext4 file system)
# tune2fs -O extents,uninit_bg,dir_index /dev/sdb1 (to convert an ext3 file system to an ext4 file system)
# mount -o remount,rw /dev/sdb1 (to remount the partition with read and write permissions)
# mount -o remount,ro /dev/sdb1 (to remount the partition with read only permissions)
# mount < directory name> (to check whether this directory is mount/ normal directory)
# dumpe2fs <device or partition name> (to see the metadata, superblock and block group information of the partition)
# fdisk -l (to list total hard disks attached to system and their partitions)
# fuser -cu <device or partition name> (to see the users who are accessing that file system)
# fuser -ck <device or partition name> (to kill the processes of the users who are accessing the file system)
Note: Even after killing those user processes sometimes we cannot unmount the partition. If this
situation arises then first see the process IDs of the files the users have opened by
# lsof <mount point>
# kill -9 <process id> (to kill those processes forcefully)
# journalctl (it reads all the log messages collected by systemd-journald; by default they are kept in /run/log/journal)
* /run/log/journal is mounted on a tmpfs file system, i.e., if the system is rebooted, the whole information in that location
will be deleted or erased.
* We can change the location from /run/log/journal to another one like /var/log/journal by
# mkdir -p /var/log/journal (to make a directory in /var/log location)
# chown root : systemd-journal /var/log/journal (to change the group ownership of /var/log/journal)
# chmod g+s /var/log/journal (to set the sgid on /var/log/journal)
# killall -USR1 systemd-journald (this signals journald to stop using the old /run/log location; the journal
messages are then written to /var/log/journal)
# journalctl -n 5 (to display last five lines of all the log files)
# journalctl -p err (to display all the error messages)
# journalctl -f (to watch journalctl messages continuously)
# journalctl --since today or # journalctl --since yesterday (to see all the journal messages since today or yesterday)
# journalctl --since "date" --until "date" (to see the journal messages between the specified two dates)
# journalctl _PID=1 (to see the journal messages of the process with PID 1)
# auditctl (to see the audit report).
2. Logical Volume Management and RAID Levels
(vi) Create a mount point to mount the above created LVM file system by,
# mkdir /mnt/<directory name>
(vii) Mount the LVM on the above created mount point temporarily by,
# mount /dev/<volume group name>/<logical volume name> <mount point> or
Mount the LVM on mount point permanently by,
# vim /etc/fstab
/dev/<VG name>/<LV name> /mnt/<directory> <file system type> defaults 0 0
Esc+:+wq!
# mount -a
# df -hT (to see the mounted partitions with file system types)
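The creation steps referred to above typically look like the following sketch (the device /dev/sdb1 and the names vg01/lv01 are assumptions):
# pvcreate /dev/sdb1 (to initialise the partition as a physical volume)
# vgcreate vg01 /dev/sdb1 (to create a volume group on that physical volume)
# lvcreate -L 2G -n lv01 vg01 (to create a 2 GB logical volume in the volume group)
# mkfs.ext4 /dev/vg01/lv01 (to create an ext4 file system on the logical volume)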
4. How to see the details of the Physical Volumes?
# pvs (displays all physical volumes with less details)
# pvdisplay (displays all physical volumes with more details)
# pvdisplay <physical volume name> (displays the details of the specified physical volume)
Verify whether the physical volume is removed or not by # pvs or #pvdisplay command.
Example : # pvremove <pvname>
# pvs or #pvdisplay (to verify whether the physical volume is
removed or not)
14. How to restore the volume group which is removed mistakenly?
First unmount the file system by the # umount <file system mount point> command.
Check the volume group backup list by the # vgcfgrestore --list <volume group name> command.
Then remove the logical volume by the # lvremove </dev/vgname/lvname> command.
Copy the name of the backup file that was taken before the volume group was removed (from the above backup list) and
use it in the command # vgcfgrestore -f <copied backup file name> <vgname>
The logical volume is created automatically after restoring the volume group, but the volume group and logical
volumes will both be in an inactive state. So, check the state of the volume group by # vgscan and the logical
volume state by # lvscan commands.
Then activate that volume group by the # vgchange -ay <volume group name> command and activate the logical
volume by the # lvchange -ay <logical volume name> command.
Mount the logical volume file system by the # mount -a command.
Example : # umount <file system mount point>
# vgcfgrestore --list <volume group name> (copy the backup file name from the list)
# lvremove </dev/vgname/lvname>
# vgcfgrestore -f <copied backup file> <volume group name>
# vgscan (to check the status of the volume group)
# lvscan (to check the status of the logical volume)
# vgchange -ay <volume group name> (to activate the volume group if it is in inactive state)
# lvchange -ay <logical volume name> (to activate the logical volume if it is in inactive state)
Note: The option a means activate the VG or LV and the option y means yes.
# mount -a
15. How to change the volume group name and other parameters?
# vgrename <existing volume group name><new volume group name> (to rename the volume
group)
By default, unlimited logical volumes can be created per volume group. But we can control this limit by
# vgchange -l <no.><volume group> (to limit max. no. of logical volumes to the specified
number)
Example : # vgchange -l 2 <vgname> (to allow a maximum of 2 logical volumes to be created in this
volume group)
# vgchange -p <no.><volume group> (to limit max. no. of physical volumes to the specified
number)
Example : # vgchange -p 2 <vgname> (to limit max. 2 physical volumes can be added to this
volume group)
# vgchange -s <extent size in MB> <volume group> (to change the physical extent (PE) size of the volume
group)
Example : # vgchange -s 4 <vgname> (to change the volume group's physical extent size to 4 MB)
16. How to change the logical volume name and other parameters?
# lvrename <existing lvname><new lvname> (to rename the logical volume)
# lvchange -p r <logical volume> (to put the logical volume into read-only mode)
# lvs (to see the logical volume permissions)
# lvchange -p rw <logical volume> (to put the logical volume into read and write mode)
20. What are the locations of the LVM metadata backups of the volume groups and logical volumes?
# cd /etc/lvm/backup (the current volume group metadata backups)
# cd /etc/lvm/archive (the older, archived volume group metadata)
23. How to extend the logical volume by the maximum free disk space or by half of it?
# lvextend -l +100%FREE <logical volume> (to extend the logical volume with all of the
volume group's free space)
# lvextend -l +50%FREE <logical volume> (to extend the logical volume with 50% of the
free space of the volume group)
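After extending a logical volume, the file system on it must also be grown; a short sketch assuming an ext4 logical volume /dev/vg01/lv01:
# lvextend -l +100%FREE /dev/vg01/lv01 (to grow the logical volume)
# resize2fs /dev/vg01/lv01 (to grow an ext2/ext3/ext4 file system online)
# xfs_growfs <mount point> (use this instead of resize2fs if the file system is xfs)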
24. How to check on which physical volume the data is writing in the logical volume?
# lvdisplay -m (to check on which physical volume the data is being written, for all
logical volumes)
# lvdisplay -m <lvname> (to check on which physical volume the data is being written, for the
specified logical volume)
26. How to scan and detect the luns over the network?
# ls /sys/class/fc_host (to check the available fibre channels)
# echo "- - -" > /sys/class/scsi_host/host<n>/scan (to scan and detect the LUNs over the network)
point)
# cd /mnt/pendrive (to access the pen drive)
29. How to mount the " .iso " image files in Linux?
# mount -t iso9660 /root/rhel6.iso /iso -o ro,loop (to mount the .iso image files)
# cdrecord /root/Desktop/rhel6.iso (to write the CD/DVD ROM. Before executing this
command put the empty CD/DVD into CD/DVD drive)
# eject (to eject the CD/DVD drive tray)
# eject -t (to insert and close the CD/DVD drive tray)
30. What is RAID? What is the use of the RAID and how many types of RAIDs available?
RAID stands for Redundant Array of Independent Disks.
It provides fault tolerance and load balancing using striping, mirroring and parity concepts.
There are mainly two types of RAIDs available.
(i) Hardware RAID (Depends on vendors and also more expensive)
(ii) Software RAID (does not depend on vendors, is less expensive compared to Hardware
RAID and is maintained by the system administrator only).
31. How many types of software RAIDs available and their requirements?
(i) RAID - 0 ---- Striping ---- Minimum 2 disks required
(ii) RAID - 1 ---- Mirroring ---- Minimum 2 disks required
(iii) RAID - (1+0) --- Mirroring + Striping ---- Minimum 4 disks required
(iv) RAID - (0+1) --- Striping + Mirroring ---- Minimum 4 disks required
(v) RAID - 5 ---- Striping with parity ---- Minimum 3 disks required
(Diagram: data blocks striped alternately across Disk - 1 and Disk - 2)
If the Disk - 1 is /dev/sdb and the Disk - 2 is /dev/sdc then,
# mdadm -Cv /dev/md0 -l 0 -n 2 /dev/sdb /dev/sdc (to create the RAID - 0 using disk - 1 and disk - 2)
# cat /proc/mdstat (to check whether the RAID - 0 is created or not)
# mkfs.ext4 /dev/md0 (to create the ext4 file system on the RAID - 0)
# mkdir /mnt/raid0 (to create the RAID - 0 mount point)
# mount /dev/md0 /mnt/raid0 (to mount RAID - 0 on the mount point)
# mdadm -D /dev/md0 (to see the details of the RAID - 0 partition)
# mdadm /dev/md0 -f /dev/sdb (to failed the disk manually)
# mdadm /dev/md0 -r /dev/sdb (to remove the above failed disk)
# mdadm /dev/md0 -a /dev/sdd (to add the new disk in place of failed disk)
# umount /mnt/raid0 (to unmount the raid file system)
# mdadm --stop /dev/md0 (to stop the RAID - 0 volume)
# mdadm /dev/md0 --add /dev/sde (to add third disk to the RAID - 0 volume)
# mdadm --grow /dev/md0 --raid-devices=3 (to grow the RAID - 0 volume onto the third disk)
(Diagram: the same data blocks mirrored on Disk - 1 and Disk - 2)
If the Disk - 1 is /dev/sdb and the Disk - 2 is /dev/sdc then,
# mdadm -Cv /dev/md0 -l 1 -n 2 /dev/sdb /dev/sdc (to create the RAID - 1 using disk - 1 and disk - 2)
# cat /proc/mdstat (to check whether the RAID - 1 is created or not)
# mkfs.ext4 /dev/md0 (to create the ext4 file system on the RAID - 1)
# mkdir /mnt/raid1 (to create the RAID - 1 mount point)
# mount /dev/md0 /mnt/raid1 (to mount RAID - 1 on the mount point)
# mdadm -D /dev/md0 (to see the details of the RAID - 1 partition)
# mdadm /dev/md0 -f /dev/sdb (to failed the disk manually)
# mdadm /dev/md0 -r /dev/sdb (to remove the above failed disk)
# mdadm /dev/md0 -a /dev/sdd (to add the new disk in place of failed disk)
# umount /mnt/raid1 (to unmount the raid file system)
# mdadm --stop /dev/md0 (to stop the RAID - 1 volume)
# mdadm /dev/md0 --add /dev/sde (to add third disk to the RAID - 1 volume)
# mdadm --grow /dev/md0 --raid-devices=3 (to grow the RAID - 1 volume onto the third disk)
(Diagram: data blocks striped across Disk - 1, Disk - 2 and Disk - 3 with distributed parity)
If the Disk - 1 is /dev/sdb, the Disk - 2 is /dev/sdc and the Disk - 3 is /dev/sdd then,
# mdadm -Cv /dev/md0 -l 5 -n 3 /dev/sdb /dev/sdc /dev/sdd (to create the RAID - 5 using disks - 1, 2 and 3)
# cat /proc/mdstat (to check whether the RAID - 5 is created or not)
# mkfs.ext4 /dev/md0 (to create the ext4 file system on the RAID - 5)
# mkdir /mnt/raid5 (to create the RAID - 5 mount point)
# mount /dev/md0 /mnt/raid5 (to mount RAID - 5 on the mount point)
# mdadm -D /dev/md0 (to see the details of the RAID - 5 partition)
# mdadm /dev/md0 -f /dev/sdb (to failed the disk manually)
# mdadm /dev/md0 -r /dev/sdb (to remove the above failed disk)
# mdadm /dev/md0 -a /dev/sde (to add the new disk in place of failed disk)
# umount /mnt/raid5 (to unmount the raid file system)
# mdadm --stop /dev/md0 (to stop the RAID - 5 volume)
# mdadm /dev/md0 --add /dev/sdf (to add fourth disk to the RAID - 5 volume)
# mdadm --grow /dev/md0 --raid-devices=4 (to grow the RAID - 5 volume onto the fourth disk)
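To make an mdadm array assemble automatically at boot, its definition is usually saved to the mdadm configuration file (a common practice, shown here as a sketch):
# mdadm --detail --scan >> /etc/mdadm.conf (to append the ARRAY definition to the config file)
# cat /etc/mdadm.conf (to verify the ARRAY line was added)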
36. How will you troubleshoot if one of the eight disks fails in LVM?
First unmount the file system and add a new disk of the same size as the failed disk to the volume
group. Then move the data from the failed physical volume to the newly added physical volume (with
# pvmove) and then remove the failed physical volume from the volume group. Finally mount the file system.
38. How to inform the client and then troubleshoot if the disk is full?
First check which files are using more disk space by the # du -h | sort -rh command. If any temporary
or junk files are present, remove them from the disk to make room for new or updated data. Then
explain the actual situation to the client, take permission from the client to get a LUN
from storage, and extend the file system by adding that LUN to the LVM.
40. I have four disks of 1 TB each in RAID - (1+0). So, how much total disk space can I utilize in that RAID -
(1+0)? RAID - (1+0) means Mirroring + Striping. It requires 4 disks, i.e., the data is striped across 2 mirrored
pairs, so only half of the raw capacity is usable, and about 5 - 10% of the disk space goes to metadata/superblock
information. So, finally we can utilize about 2 TB minus 5 - 10% of disk space in that RAID - (1+0).
41. If two disks fail in RAID - (1+0), can we recover the data?
RAID - (1+0) requires a minimum of 4 disks and it uses Mirroring + Striping. If one disk fails we can
recover the data, but if both disks of the same mirrored pair fail we cannot recover the data.
42. How many types of disk space issues can we normally get?
(i) Disk is full.
(ii) Disk is failing or failed.
(iii) File system corrupted or crashed.
(iv) O/S is not recognizing the remote luns when scanning, ...etc.,
the soft link file and the original file have different inode numbers. The size of the soft link file is the same as
the length of the original file name. A soft link can be created by
# ln -s <original file or directory> <link file or directory with path> (to create a soft link)
# ln -s /root/script /root/Desktop/script (to create a link file for the script and store it on root's
Desktop)
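A short illustration of the inode difference between hard and soft links (the file names are hypothetical):
# touch /root/original.txt
# ln /root/original.txt /root/hard.txt (hard link: shares the same inode as the original)
# ln -s /root/original.txt /root/soft.txt (soft link: gets its own inode)
# ls -li /root/original.txt /root/hard.txt /root/soft.txt (compare the inode numbers in the first column)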
Examples :
# find / -name <file name> (to search for files with that name under the / directory)
# find / -name <file name> -type f (to find regular files only)
# find / -name <directory name> -type d (to find directories only; -name matches the exact case)
# find / -iname <file/directory name> (to search case-insensitively, i.e., matching small or capital letter
file/directory names)
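A couple of further find examples that often come up (the size and age values are arbitrary):
# find /var -size +100M (to find files larger than 100 MB under /var)
# find /tmp -mtime +7 -type f -exec rm -f {} \; (to delete files under /tmp not modified in the last 7 days)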
2. What is Networking?
It is a connection between two or more computers to communicate with each other.
3. What are the basic requirements for networking?
(a) NIC (Network Interface Card or controller)
(b) Media (nothing but cables)
(c) Topology
(d) Protocol
(e) IP Addresses
4. Explain about the NIC card?
A Network Interface Card (or controller) is a hardware component that connects a computer to a
computer network. Each NIC card has a MAC (Media Access Control) address to avoid
conflicts between identical NIC adapters. In Linux the NIC adapters are represented by the word "eth". For
example, if two NIC cards are present in a system they will be denoted as "eth0", "eth1", etc.
5. What is media?
Media is nothing but the cable used to connect two or more systems. Example : RJ 45, CAT 5 and CAT 6, etc.
6. What is topology?
Topology is a design in which the computers in network will be connected to each other. Example for
topologies are Bus, Ring, Star, Mesh, Tree topologies.
7. What is protocol?
A Network Protocol defines rules and conventions for communication between the network devices.
Protocols generally use packet switching techniques to send and receive messages in the form of
packets.
Examples of protocols are TCP/IP (Transmission Control Protocol / Internet Protocol), UDP (User
Datagram Protocol) and HTTP (Hyper Text Transfer Protocol), etc.
8. What are the differences between TCP/IP and UDP protocols?
TCP/IP UDP
Transmission Control Protocol User Datagram Protocol
It is connection oriented It is connection less
Reliable Non-Reliable
TCP Acknowledgement will be sent / received No Acknowledgement
Slow communication Fast communication
Protocol No. for TCP is 6 Protocol No. for UDP is 17
HTTP, FTP, SMTP, ....etc., uses TCP DNS, DHCP, ....etc., uses UDP
9. What is an IP address?
Every Computer will be assigned an IP address to identify each one to communicate in the network.
The IP address sub components are Classes of an IP address, Subnet masks and Gateway.
Classes of IP address :
The IP addresses are further divided into classes. The classes are A, B, C, D, E and the ranges are given
below.
Class      Start          End                  Default Subnet mask    Classless Inter-Domain Routing (CIDR)
Class A    0.0.0.0        127.255.255.255      255.0.0.0              /8
Class B    128.0.0.0      191.255.255.255      255.255.0.0            /16
Class C    192.0.0.0      223.255.255.255      255.255.255.0          /24
Class D    224.0.0.0      239.255.255.255
Class E    240.0.0.0      255.255.255.255
It is of two types.
IPV4 : (it is divided into 4 parts)
--- . --- . --- . --- (each part is 8 bits, so 8 x 4 = 32 bits)
IPV6 : (it is divided into 16 parts)
--- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- . --- (each part is 8 bits, so 8 x 16 = 128 bits)
The MAC address is divided into 6 parts, --- : --- : --- : --- : --- : --- (each part is 8 bits, so 8 x 6 = 48 bits).
# ifconfig (to see the MAC address and the IP address)
16. How many types of NIC cards available?
(a) eth0 (1st NIC card)
(b) eth1 (2nd NIC card)
(c) br0 (Bridge -----> used for communication from physical to virtual)
(d) lo (loopback device name and IP address is 127.0.0.1)
# ifconfig (to see all the NIC devices connected to the system)
17. How many types of cable connections available?
(i) Cross cable (to connect two systems directly)
(ii) Straight cable (to connect more systems with the help of switch)
# ethtool <device name> (to check the network cable is connected or not)
# mii-tool <device name> (it is also used to check the network cable, but it is not supported in
RHEL - 7, only in RHEL - 6, and it works only on a physical system, not on a
virtual system)
18. In how many ways we can configure the network?
There are two ways to configure the network.
(a) Static Network.
(b) Dynamic Network.
Static Network :
In this way we assign the IP address and hostname manually. Once we configure the IP address, it will
not change.
Dynamic Network :
In this way we assign the IP address and hostname dynamically. This means the IP address will change
at every boot.
19. How to assign the static IP address to the NIC card?
In RHEL - 6 :
# setup
(Move the cursor to Network configuration and press Enter key)
(Move the cursor to Device configuration and press Enter key)
(Select the NIC adapter ie., eth0 and press Enter key)
(Assign the above IP address and other details as per our requirements and move the cursor to "OK"
and press
Enter key)
(Move the cursor to "Save" to save the changes in device configuration and press Enter key)
(Once again move the cursor to "Save & Quit" button and press Enter key)
(Finally move the cursor to "Quit" button and press Enter key to quit the utility)
(Then restart the network service and check for the IP address by # service network restart
command)
(If the change is not reflected with the above service, then restart the network manager by
# service NetworkManager
restart command)
# ifconfig (to see the IP address of the NIC card)
# ping < IP address > (to check whether the IP is pinging or not)
In RHEL - 7 :
# nmcli connection show (to see all the network connections)
# nmcli device show (to see the network details if already configured manually or
dynamically)
# nmcli connection add con-name "System eth0" ifname eth0 type ethernet (to add the network
connection)
# nmcli connection modify "System eth0" ipv4.addresses '<IP address>/<netmask> <gateway>'
ipv4.dns <dns server IP address> ipv4.dns-search <domain name> ipv4.method manual
(to assign the IP address, gateway, dns and domain name and configure the network
method as manual/static)
# nmcli connection up "System eth0" (to up the connection)
# systemctl restart network (to restart the network service)
# systemctl enable network (to enable the network service)
# ifconfig (to see the IP address of the NIC card)
# ping < IP address > (to check whether the IP is pinging or not)
20. What are the differences between RHEL - 6 and RHEL - 7 network configuration files?
RHEL - 6 :
/etc/sysconfig/network-scripts is the directory which contains the NIC configuration information.
/etc/sysconfig/network-scripts/ifcfg-<device name> is the file which contains the NIC configuration details.
/etc/resolv.conf is the file which contains the DNS server IP and the domain search name.
/etc/sysconfig/network is the hostname configuration file.
/etc/hosts is the file which contains the local hostname-to-IP address mappings.
RHEL - 7 :
/etc/sysconfig/network-scripts is the directory which contains the NIC configuration information.
/etc/sysconfig/network-scripts/ifcfg-<device name> is the file which contains the NIC configuration details.
/etc/resolv.conf is the file which contains the DNS server IP and the domain search name.
/etc/hostname is the hostname configuration file.
/etc/hosts is the file which contains the local hostname-to-IP address mappings.
21. What are the differences between Dynamic and Static configuration information?
Dynamic configuration information Static configuration information
Device =<NIC device name> Device =<NIC device name>
HWADDR=02:8a:a6:30:45 HWADDR=02:8a:a6:30:45
RHEL - 7 :
# hostname <fully qualified domain name> (to set the hostname temporarily)
# hostnamectl set-hostname <fully qualified domain name>(to set the hostname permanently)
# systemctl restart network (to update the hostname in the
network)
# systemctl enable network (to enable the connection at
next reboot)
23. How to troubleshoot if the NIC is not working?
(a) First check whether the NIC card is present or not by the # ifconfig command.
(b) If present, then check whether the NIC card is enabled or disabled by clicking on the System menu
on the status bar, then selecting the Network Connections menu.
(c) Click on the IPV4 settings tab, select the device eth0 or any other, select the Enable button, then
Apply and OK.
(d) Open the /etc/sysconfig/network-scripts/ifcfg-eth0 file and check USERCTL=yes or no. If it is yes make it
no, then check ONBOOT=yes or no. If it is no make it yes and save the file.
(e) If not present, then check whether the NIC card is enabled or disabled by clicking on the System menu
on the status bar, then selecting the Network Connections menu.
(f) Click on the IPV4 settings tab, select the device eth0 or any other, select the Enable button, then Apply
and OK.
(g) Using the # setup (in RHEL - 6) or # nmcli (in RHEL - 7) commands assign the IP address to the
system, restart the network service by # service network restart (in RHEL - 6) or
# systemctl restart network (in RHEL - 7), and enable the service at the next reboot by
# chkconfig network on (in RHEL - 6) or # systemctl enable network (in RHEL - 7).
(h) Then bring up the connection by # ifconfig eth0 up (in RHEL - 6) or # nmcli connection up
<connection name> (in RHEL - 7).
(i) If it is still not working, the fault may be in the NIC card itself. If so, contact the hardware vendor after
taking permission from higher authorities.
24. What is bonding and how to configure bonding? (from RHEL - 6)
What is link aggregation or bridging or teaming and how to configure teaming? (from RHEL - 7)
Bonding or Teaming or Bridging:
Collection of multiple NIC cards and make them as single connection (virtual) NIC card is called
bonding.
It is nothing but backup of NIC cards.
In RHEL - 6 it is called as Bonding or Bridging.
In RHEL - 7 it is called as Teaming or Link aggregation.
There are 3 modes of backup in Bonding or Teaming.
(a) Mode 0 -----> Round Robin
(b) Mode 1 -----> Active backup
(c) Mode 3 -----> Broadcast
Mode 0 :
It provides load balancing and fault tolerance.
Data will be shared by both NIC cards in a round-robin fashion.
If one NIC card failed then another NIC card will be activated to communicate with the server
So, there is a load balancing and fault tolerance features.
Mode 1 :
Activebackup means only one NIC card is activated at a time and another one is in down state.
So, there is no load balancing.
But if one NIC card is failed then another NIC card will be activated automatically.
Mode 3 :
In this mode broadcasting is done.
In this the same data will be transferred through two NIC cards.
So there is no load balancing.
But if one NIC card is failed then second NIC card will be activated automatically.
So, all 3 modes support fault tolerance, but round robin is the only mode that also
provides load balancing.
Requirements to configure :
(i) Minimum two NIC cards.
(ii) One IP address.
(iii) Connection type is bond (in RHEL - 6) and team (in RHEL - 7) not the ethernet type.
Here no need to assign the IP addresses for two NIC cards and we are giving only one IP
address to bond or team.
Bonding configuration : (in RHEL - 6)
(i) # vim /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=<IP address>
TYPE=ethernet
NETMASK=255.255.255.0 or <netmask of the IP address class>
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=yes
BONDING_OPTS="mode=0 or mode=1 or mode=3 miimon=50" (Save and exit this file)
(ii) vim /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes (Save and exit this file)
(iii) vim /etc/sysconfig/network-scripts/ifcfg-eth2
DEVICE=eth2
ONBOOT=yes
BOOTPROTO=none
USERCTL=no
MASTER=bond0
SLAVE=yes (Save and exit this file)
(iv) To add virtual NIC cards eth1 and eth2 :
# setup -----> Networking -----> Device configuration -----> New Device -----> eth1
Name : eth1
Device : eth1 (save and exit this setup)
# setup -----> Networking -----> Device configuration -----> New Device -----> eth2
Name : eth2
Device : eth2 (save and exit this setup)
(v) Adding bond0 connection :
# setup -----> Networking -----> Device configuration -----> New Device -----> bond0
Name : bond0
Device : bond0
IP address : <IP address>
Netmask : 255.255.255.0
Default gateway : <gateway IP address> (save and exit this setup)
# ifdown bond0
# ifdown eth1
# ifdown eth2
# ifup bond0
# service NetworkManager stop
# service network restart
# chkconfig network on
# service NetworkManager restart
# cat /proc/net/bonding/bond0 (to check the bonding
information)
# watch -n 1 cat /proc/net/bonding/bond0 (to watch the bonding information, refreshing
every 1 second)
# echo "eth1" > /sys/devices/virtual/net/bond0/bonding/active_slave (to put the eth1 NIC
in active state)
Teaming configuration :
(i) Add the team0 connection by
# nmcli connection add con-name team0 ifname team0 type team
config ' { "runner" : { "name" : "roundrobin" or "activebackup" or
"broadcast" } } '
(ii) Add the two NIC cards one by one to the above created connection by
# nmcli connection add con-name port1 ifname eth1 type team-slave master team0
# nmcli connection add con-name port2 ifname eth2 type team-slave master team0
(iii) Assign the static IP address to the team0 connection by
# nmcli connection modify team0 ipv4.addresses <IP address>/<netmask> ipv4.method
manual
(iv) Up the connection by
# nmcli connection up team0
(v) To see the team0 connection up details by
# teamdctl team0 state
(vi) To check the connection communication by
# ping -I team0 <IP address>
(vii) To down the one NIC card in team0 by
# nmcli connection down port1
(viii) teamdctl team0 state (to check the team0 NIC card up or down details)
25. What is the difference between TCP and UDP protocol?
TCP is a connection-oriented protocol and carries information about the sender as well as the receiver.
Example : HTTP, FTP, Telnet
TCP is slower than UDP due to its error-checking mechanism.
UDP is connectionless; the packets carry no information about where they are going. These ports are
generally used for broadcasting.
For example : DNS, DHCP
UDP is faster.
26. What are the benefits of NIC Teaming?
(i) Load balancing
(ii) Fault Tolerance
(iii) Failover
27. Mention all the network configuration files you would check to configure your ethernet card?
(i) /etc/sysconfig/network-scripts/ifcfg-eth*
(ii) /etc/sysconfig/network
(iii) /etc/resolv.conf
(iv) /etc/nsswitch.conf
28. What is the use of /etc/resolve.conf?
It contains the details of nameserver, i.e., details of your DNS server which helps us connect to
Internet.
29. What is the use of /etc/hosts file?
To map any hostname to its relevant IP address.
30. What is the command to check all the open ports of your machine?
#nmap localhost
31. What is the command to check all the open ports of remote machine?
# nmap <IP address or hostname of the remote system>
32. What is the command to check all the listening ports and services of your machine?
# netstat -ntulp
33. How can you make a service run automatically after boot?
# chkconfig <service name> on
34. What are the 6 run levels of linux? And how can you configure your script to run only when the
system boots into GUI and not to any other runlevel?
0 Power off
1 Single user
2 Multi user without network
3 Multiuser with network
4 Development purpose
5 GUI
6 Restart
# chkconfig --level 5 service_name on
# chkconfig --level 1234 service_name off
35. What is a 3 way handshake protocol? Give an example of it.
SYN - system 1 sends SYN signal to remote system.
SYN-ACK - remote system receives the syn signal and sends ack signal.
ACK - system again receives ack signal from remote system and connection is established.
For example: when a TCP connection is opened (e.g., with ssh or telnet), the host machine sends a SYN signal,
the remote machine replies with a SYN-ACK signal back to the host machine, and the host machine then
sends the ACK signal back to confirm the connection.
36. What are the possible ways to check if your system is listening to port 67?
# nmap localhost | grep 67
# netstat -ntulp | grep 67
37. Explain about IPV6?
Its address length is 128 bits and its default prefix length (netmask) is 64.
# nmcli connection modify "System eth0" ipv6.addresses 2005:db8:0:1::a00:1/64 ipv6.method manual
(to add an IPV6 address to the connection "System eth0")
# nmcli connection modify "System eth0" ipv4.addresses '172.25.5.11/24 172.25.5.254' ipv4.dns
172.25.254.254 ipv4.dns-search example.com ipv4.method manual ipv6.addresses 2005:ac18::45/64
ipv6.method manual (to assign IPV4 and IPV6 addresses to the "System eth0"
connection)
# nmcli connection down "System eth0" (to down the "System eth0" connection)
# nmcli connection up "System eth0" (to up the "System eth0" connection)
38. How to troubleshoot if the network is not reaching?
(i) First check the network cable is connected or not by # ethtool <NIC device name>
command. if connected then check the IP address is assigned or not by # ifconfig <NIC device name>
command.
(ii) Then check the system uptime by # uptime command.
(iii) Then check the network services status by # service network status and # service
NetworkManager status commands.
(iv) Then check the network service at Run Level by # Chkconfig --list network command.
(v) Then check whether the source network and destination network are in the same domain or
not.
(v) Then finally check the routing table by # route -n command.
(to modify the connection as static and assign the IP, gateway, dns IP,
domain name)
# nmcli connection delete <connection name> (to delete the specified connection)
# nmcli connection modify <connection name> ipv4.method manual (to change a dynamic
connection to a static connection)
# nmcli connection up <connection name> (to activate or up the specified connection)
# nmcli connection down <connection name> (to disable or down the specified connection)
# nmcli connection show <connection name> (to see the information about the specified NIC
device)
# ping -I <NIC device name><IP address> (to check the connection from NIC device to IP address)
# hostname <fully qualified domain name>(to set the hostname temporarily)
# hostnamectl set-hostname <fully qualified domain name> (to set the hostname permanently in
RHEL - 7)
NOTE: Whenever we change any parameter in the /etc/sysconfig/network-scripts/ifcfg-<NIC device
name> file, we have to reload that file and then bring the connection up again (i.e., activate the
connection) with the # nmcli connection reload and # nmcli connection up commands.
# nmcli connection reload (to reload the connection configuration if anything was changed on it; it
reloads all the configuration files)
# nmcli connection reload /etc/sysconfig/network-scripts/ifcfg-<NIC device name> (to reload a
single file)
# hostnamectl status (it displays full details of the hostname and works in
RHEL - 7 only)
# nmcli networking off (to disable all the connections at a time)
# nmcli device status (to display all NIC device connections statuses)
# nmcli connection modify <connection name> +ipv4.dns <secondary dns server IP> (to add a
secondary dns server IP to the existing connection)
# netstat -ntulp (to check how many open ports are there in the local
system)
# ss -ntulp (same as above)
# nmap (to check how many open ports are there in a
remote system)
# tracepath (it displays the routing information)
# mii-tool <NIC device name> (to check whether the network cable is connected or not)
# ethtool <NIC device name> (same as above)
# ifconfig (to check the NIC card is enable or not)
# ifup <NIC device name> (to enable or up the NIC card)
#ifdown <NIC device name> (to disable or down the NIC card)
# route -n (to check the gateway)
# cat /etc/resolv.conf (to check the dns server information)
# cat /etc/sysconfig/network-scripts/ifcfg-<NIC device name> (to see the NIC device information)
# hostname or cat /etc/sysconfig/network(to check the hostname in RHEL - 6)
# hostnamectl status or cat /etc/hostname (to check the hostname in RHEL - 7)
# ping <IP address> (to check the connection communication)
# chkconfig --list (to list all the services which are enabled at boot
time in RHEL - 6 & 7)
# systemctl list-unit-files (to list all the units which are enabled at boot time in
RHEL - 7)
# chkconfig --level 3 <service name> on (to enable the service at run level 3 when the system is
booting)
# service --status-all (to see the status of all the services which are currently
running)
# ls /etc/init.d (is the location of all the services and deamons in RHEL -
6)
# ls /usr/lib/systemd/system (is the location of all the services and deamons in RHEL -
7)
# /etc/rc.local (is the last script to be run when the system is booting)
(If we enter 'service sshd stop' at the last line of this script file then sshd will be stopped at boot even
though the sshd service is enabled)
# service sshd status (to check the sshd status)
# service --status-all (to see the status and process ID of all the services)
# netstat -ntulp (to see all the services with port no., status and process ID, all open
ports in the local system, the routing table and NIC
device information)
-n -----> show port numbers numerically
-t -----> tcp protocol
-u -----> udp protocol
-l -----> show only listening ports
-p -----> display the process ID
# netstat -r (to see all routing table information)
# netstat -i (to see all the NIC cards information)
# nmap (to see the network mapping, i.e., the list of open ports on a
remote system)
Note : By default this command is not available. So, first install the nmap package by # yum install
nmap -y
# nmap <remote system IP address> (to see all the services which are running in the specified
remote system)
# nmap <remote IP 1><remote IP 2><remote IP 3> (to see the running services on specified remote
systems)
# nmap 172.25.0.11-50 (to see the running services on the systems 172.25.0.11 to 172.25.0.50)
# nmap -p 80 <remote IP> (to see the http port is running or not on specified remote system)
# nmap -p 80-90 <remote IP> (to see whether port nos. 80 to 90 are open or not on the remote system)
# nmap -sP 172.25.0.0/24 (to see all the systems which are in the up state, i.e., 172.25.0.1,
172.25.0.2, 172.25.0.3, ... up to 172.25.0.254)
(where s -- scan & P -- ping)
Open a file, write all the systems IP addresses, save & exit the file. Example has given below,
# vim coss
172.25.2.50
172.25.3.50
172.25.4.50 ....etc., (save and exit this file)
# nmap -iL coss (to scan all the IP addresses by reading the coss file)(where -i ---->
input, -L ----> list)
# nmap --iflist (to see all the routing table information in the network)
# nmap 172.25.0.10-20 --exclude 172.25.0.15 (to scan the systems from 172.25.0.10 to
172.25.0.20, excluding the 172.25.0.15 system)
# nmcli connection show --active (to show only the active network connections)
# ip link (to check the network connection)
# ping -I eth1 <IP address> (to check the 2nd NIC card connection)
5. Managing SELinux
1. What is SELinux?
SELinux (Security-Enhanced Linux) is a type of security mechanism that gives users and administrators more
control over which users and applications can access which resources, such as files, on top of the standard
Linux access controls.
It is mainly used to protect the internal data (not from external attacks) from the system services. In real time
SELinux is often disabled and iptables is used instead. If SELinux is enabled it protects all the services, files
and directories by default.
2. In how many ways we can implement the SELinux? Explain them.
We can implement SELinux mainly in 2 modes.
(i) Enabled
(ii) Disabled
Enabled :
Enabled means enabling the SELinux policy and this mode of SELinux is divided into two parts.
(a) Enforcing
(b) Permissive
Disabled :
Disabled means disabling the SELinux policy.
3. What is Enforcing mode in SELinux?
Enforcing means SELinux is on. It enforces the SELinux policy and stores a log of denials. By default access
that violates the policy is denied, but we can change the policy whenever needed.
4. What is Permissive mode in SELinux?
SELinux is on; it checks the SELinux policy and stores the log, but it does not enforce the policy. Everybody can
access the services by default and we can also change the SELinux policy. It is also called debugging mode or
troubleshooting mode. In this mode SELinux policies and rules are applied to subjects and objects but the denials are not enforced.
5. What is Disabled mode in SELinux?
SELinux is turned off and no warning and log messages will be generated and stored.
6. What are Booleans?
Booleans are variables that can be set either true or false. Booleans enhance the effect of the SELinux policies implemented by the System Administrators. A policy may protect certain daemons or services by applying various access control rules.
7. What is SELinux policy?
The SELinux policy is the set of rules that guide the SELinux security engine. It defines types for file objects and domains for processes. It uses roles to limit the domains that can be entered and user identities to specify the roles that can be attained.
8. What are the required files for SELinux?
# vim /etc/selinux/config -----> It is main file for SELinux.
# vim /etc/sysconfig/selinux -----> It is a link file to the above file.
# vim /var/log/audit/audit.log -----> SELinux log messages will be stored in this file.
9. what is the command to see the SELinux mode?
# getenforce (to check the SELinux mode)
10. What is command to set the SELinux mode temporarily?
# setenforce 0 or 1 (to set the SELinux mode. Where ' 0 ' -----> permissive and ' 1 ' ----->
Enforcing)
Note : (i) To change the SELinux mode from Permissive to Enforcing or Enforcing to Permissive
modes the system restart is not required.
(ii) To change Enforcing mode to Disabled mode or Disabled mode to Enforcing mode
the system restart is required.
(iii) The above commands change the SELinux mode temporarily only. To make the SELinux change permanent, open /etc/selinux/config and set SELINUX=enforcing or permissive or disabled (save and exit this file).
11. What is command to see the SELinux policy details?
# sestatus (to see the SELinux policy details)
Other useful commands :
# ls -Z <file name> (to see the SELinux context of the file)
# ls -ldZ <directory name>(to see the SELinux context of the directory)
# ps -efZ | grep <process name> (to see the SELinux context of the process running on the system)
# ps -efZ | grep http (to see the SELinux context of the http process running on the
system)
# chcon -t <argument> <file/directory name> (to change the SELinux context of the file or directory)
# chcon -t public_content_t /public (to change the SELinux context of the /public directory)
# chcon -R -t public_content_t /public (to change the SELinux context of the /public directory and its contents)
# restorecon -v <file/directory name> (to restore the default SELinux context of the file/directory)
# restorecon -v /public (to restore the default SELinux context of that directory)
# restorecon -Rv <directory> (to restore the default SELinux context of the directory and its contents)
# restorecon -Rv /public (to restore the default SELinux context of the /public directory and its contents)
# getsebool -a | grep <service name> (to see the booleans of the specified service)
# getsebool -a | grep ftp (to see the booleans of the ftp service)
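As a companion to getsebool, a boolean can be changed with setsebool; a small sketch (the boolean name ftpd_anon_write is only an illustration, use the names reported by getsebool on your system):
# getsebool -a | grep ftp (list the ftp related booleans)
# setsebool ftpd_anon_write on (turn the boolean on temporarily)
# setsebool -P ftpd_anon_write on (-P makes the change persistent across reboots)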
Kernel initialises itself, loads the kernel modules and mounts the root file system as specified in the "root=" line in grub.conf, and then the kernel executes the /sbin/init program. Since init is the 1st program to be executed by the Linux kernel, it has the process ID (PID) of 1. We can see this id by the # ps -ef | grep init command. initrd stands for initial RAM Disk. initrd is used by the kernel as a temporary file system until the kernel is booted and the real root file system is mounted. It also contains the necessary drivers compiled inside, which help it access the hard drive partitions and other hardware.
init level :
In this stage the init program reads the /etc/inittab file and puts the system into the specified run level. init identifies the default run level from the /etc/inittab file and we can change this default run level whenever we need. We can find the default run level by the # grep "initdefault" /etc/inittab command on our system. Normally the default run level in Linux is 3 in CLI (Command Line Interface) mode and 5 in GUI (Graphical User Interface) mode.
Run Level Programs :
The following run levels are available in Linux systems.
0 -----> halt or shutdown the system
1 -----> Single user mode
2 -----> Multi user without NFS
3 -----> Full multi user mode but no GUI and only CLI mode
4 -----> Unused
5 -----> Full multi user mode with GUI (X11 system)
6 -----> reboot the system
Whenever the Linux system is booting we can see various services getting started. Those services are run-level programs, executed from the run-level directory defined by our default run level. Depending on the default init level setting, the system will execute the programs from one of the following directories.
Run level 0 -----> /etc/rc.d/rc0.d
Run level 1 -----> /etc/rc.d/rc1.d
Run level 2 -----> /etc/rc.d/rc2.d
Run level 3 -----> /etc/rc.d/rc3.d
Run level 4 -----> /etc/rc.d/rc4.d
Run level 5 -----> /etc/rc.d/rc5.d
Run level 6 -----> /etc/rc.d/rc6.d
The above directories are also having symbolic links available for those directories under /etc/rc0.d,
/etc/rc1.d, ....etc., So, the /etc/rc0.d is linked to /etc/rc.d/rc0.d
Booting procedure in RHEL - 7:
Up to the kernel the booting process is the same as above. /boot/grub2/grub.cfg is the GRUB2 configuration file in RHEL - 7. systemd is the initial process in RHEL - 7 and its process ID is 1.
The linux16 line reads the root ( / ) file system, then the initrd16 process mounts the root ( / ) file system in read & write mode and starts the systemd process. The systemd process reads the /etc/fstab file and mounts all the file systems. Then it reads the /etc/systemd/system/default.target file and brings the system into the default run level, starting or stopping processes according to the scripts.
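In RHEL - 7 the default target replaces the default run level; a short sketch of checking and changing it (multi-user.target corresponds to run level 3 and graphical.target to run level 5):
# systemctl get-default (to see the current default target)
# systemctl set-default multi-user.target (to make the CLI mode the default)
# systemctl isolate graphical.target (to switch to the GUI mode immediately without a reboot)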
2. How to check the current run level of the system?
# who -r (to see the present run level of the system)
3. How to change the default run level?
First open the /etc/inittab file by the # vim /etc/inittab command, go to the last line and change the run level number as required, and then reboot the system by the # init 6 command. After rebooting the system check the current run level by the # who -r command.
4. How to start the graphical interface if the system is in run level 3 now?
# startx (it starts the graphical interface from run level 3; the default run level is not changed and no reboot is needed)
5. How to troubleshoot if the boot disk is not available?
(i) First check the hard disk is present in the system or not. If not present connect the hard disk
and restart the system.
(ii) If the hard disk is present, then go to BIOS and find the location of the hard disk.
(iii) Check the boot priority in the BIOS. If boot priority is not the hard disk then change it to hard disk
and restart the system.
(iv) If the system still does not start, then boot the system with CDROM in single user mode, open the /boot/grub/grub.conf file and check the hard disk name and partition number. Normally it should be /dev/hda1 (if the hard disk is an IDE hard disk) or /dev/sda1 (if the hard disk is SATA or SCSI). If the hard disk name or partition number is different from the original, then correct them and reboot the system with the hard disk.
(v) If the GRUB is corrupted then reboot the system with CDROM in single user mode, restore the grub information from the recent backup and then restart the system with the hard disk.
6. How to reboot the production server?
(i) In general the production servers will not be rebooted frequently because the end users will suffer if the production servers are in a down state. If any changes are made to the system like grub, the selinux policy or the default run level, or if kernel patches are applied, a system reboot is required.
(ii) If there is any inconsistency in the root ( / ) file system, then take the business approval from higher authorities, make a plan for a proper schedule and also inform the different teams, like the application team to stop the application, the database team to stop the databases, the QC team to stop the testing, the monitoring people to ignore the alerts from this server, and other teams if any; then reboot the system with CDROM in single user mode and run the # fsck command on that file system.
(iii) If the O/S disk is corrupted or damaged then reboot the system temporarily with the mirror disk, fix the problem and again boot the system with the original disk.
7. What is the difference between # reboot and # init 6 commands?
Both commands are used to restart or reboot the system.
# reboot command does not send the kill signals to the processes; it kills all the running processes and services forcefully and then restarts the system.
# init 6 command sends the kill signals to the processes, stops all the processes and services one by one and then restarts the system.
8. What is console port and how to connect to the console port?
Console port is used to connect to the system even when the system is not booted with the main O/S. This port is used to connect to the system for troubleshooting purposes only. We connect to the console port in the same way as to the system's LAN port, and it also has an IP address, user name and password to connect to the console.
There are different types of console ports for different types of servers. They are given below.
Server Name          Name of the Console port     Expansion name
DELL                 DRAC or i-DRAC               DRAC ---> DELL Remote Access Controller; i-DRAC ---> Integrated DELL Remote Access Controller
IBM Power series     HMC                          Hardware Management Console
HP                   ILO                          Integrated Lights-Out
The procedure that takes place between two TCP/IP nodes to establish a connection is known as the Synchronization, Synchronize-Acknowledgement and Acknowledgement (SYN, SYN-ACK, ACK) handshake.
For example, computer A transmits a Synchronize packet to computer B, which sends back a Synchronize-Acknowledge packet to computer A. Computer A then transmits an Acknowledge packet to computer B and the connection is established. This whole process is called the TCP handshake.
11. How many links will be created when we create the directory?
Whenever we create any directory, two links are created: the directory's entry in its parent and its own " . " entry, so a new empty directory has a link count of 2.
12. What are the differences between run level 2 and run level 3?
Run Level 2 :
(i) It supports multiuser operations.
(ii) Multiple users can access the system.
(iii) All the system deamons will run except NFS and some other network service related
deamons.
(iv) So, without NFS we can use all other services.
Run Level 3 :
(i) It is also supports Multi user operations.
(ii) Multiple users can access the system.
(iii) All the system deamons including NFS and other network related service deamons will run.
(iv) So, we can avail all the services including NFS also.
13. Server running in single user mode, can you login remotely and how?
We can login to the system remotely in single user mode also, but only by connecting to the console port instead of the LAN port (for example through a putty session using the console's IP address, user name and password). From the console we can then boot the system with CDROM in single user mode.
14. How to check the present kernel version?
# uname -r (it displays the present kernel version)
# uname -a (it displays the present kernel version with other details)
# cat /boot/grub/grub.conf (in this file also we can find the kernel version)
15. What is the command to see the system architecture?
# arch or # uname -m (both commands gives the architecture of the system)
16. How to check the version of the O/S ?
# cat /etc/redhat-release (gives the version of the O/S)
17. How to repair the corrupted boot loader and recover it?
This problem may occur if the GRUB is corrupted. So, we have to recover the GRUB. Basically repairing the GRUB means installing a new grub on the existing one from the RHEL - 6 DVD. The steps are given below.
(i) Insert the RHEL - 6 DVD and make sure that system should boot from CD/DVD.
(ii) Boot the system in Rescue Installed System mode.
(iii) Select the language with which we want to continue and click on OK.
(iv) Select the Keyboard type as US and click OK.
(v) Select Local CD/DVD and click OK.
(vi) Move the cursor to NO to ignore the Networking.
(vii) Move the cursor to Continue tab to mount the root ( / ) from CD/DVD and press Enter key.
(viii) Now the root ( / ) file system is mounted on /mnt/sysimage, here click on OK and Press Enter to
continue.
(ix) Select the "shell Start shell" option and click on OK, then shell will be displayed on screen.
(xi) At shell prompt type as # chroot /mnt/sysimage command, press Enter.
(xii) Check the /boot partition by # fdisk -l command.
(xiii) Install the new grub on the boot device ie., may be /dev/sda2 by # grub-install <device
name> command (For example #
grub-install /dev/sda2).
(xiv) If it show no error reported that means we have successfully recovered the grub.
(xv) Then type # exit command and again type # exit or # reboot command to reboot the
system.
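A condensed sketch of the rescue-mode commands from the steps above (the device name /dev/sda is only an example, use the boot device reported by # fdisk -l):
# chroot /mnt/sysimage (switch into the installed system)
# fdisk -l (identify the disk/partition holding /boot)
# grub-install /dev/sda (reinstall GRUB on that boot device)
# exit (leave the chroot)
# exit (leave the rescue shell and reboot)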
18. What are Modules or Kernel Modules? How to find the Kernel Modules?
The drivers in a Linux system are known as Modules or Kernel Modules. These modules are loaded by the kernel depending on the hardware. Hardware can only be communicated with and work efficiently when the proper module is loaded in the kernel. We can find the kernel modules by the # ls /lib/modules command.
All the kernel modules in the system end with the " .ko " extension. So, we can see all the modules in the system by the # find / -name *.ko command.
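A quick sketch of locating the module files for the running kernel (the fs subdirectory is just one example of how the modules are grouped):
# uname -r (current kernel version)
# ls /lib/modules/$(uname -r)/kernel/fs (modules grouped by subsystem, here file systems)
# find /lib/modules/$(uname -r) -name '*.ko' | wc -l (count the available modules)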
19. What other commands related to kernel modules?
# lsmod (to list all the currently loaded modules)
# lsmod |grep -i <module name> (to check whether the particular module is loaded or not)
# lsmod |grep -i fat (to check the fat module is loaded or not)
There might be a situation where our module is not working properly, in that case we have to remove
that module and re-install it again by,
# modprobe -r <module name> (to remove the specified module)
# modprobe -r fat (to remove the fat module)
# modprobe <module name> (to install or re-install the module)
# modprobe fat (to install or re-install the module)
# modinfo <module name> (to see the specified module information)
# uname (to see which O/S is present in the system)
# uname -s (to see which O/S kernel this is, either Linux or Unix)
# rpm -qa kernel --last (to see the kernel installation date and time)
# rpm -qa kernel* (to see how many kernels are there in the system)
# ls /proc (to see the kernel processes information)
# ls /boot (to see the present kernel version and created time)
# ls /lib/modules (installed kernel module drivers)
# ls /usr/src (kernel source code location)
# kudzu (to scan the new hardware in RHEL - 4)
# depmod (to regenerate the kernel module dependency list; RHEL - 5, 6 and 7 use this instead of kudzu)
# rmmod <module name> (to remove the specified module)
# insmod <module name> (to install the kernel module without its dependency modules)
20. How to see the run level?
# who -r (to see the current run level)
21. How to block the USB / CDROM driver?
# lsmod |grep -i usb (to see whether the USB module is loaded or not)
# mount (to check whether the USB is mounted or not)
# modprobe -r usb_storage (remove the USB module; if it is mounted it will not be removed)
# umount /<mount point> (to unmount the USB if it is mounted)
# vim /etc/modprobe.d/blacklist.conf (it will open the blacklist.conf file, then put an entry for USB)
blacklist usb_storage (after typing this, save and exit this file)
22. What is " wait " and where it is stored?
(i) If there is not enough memory to run the process, then it will wait for free space in memory.
That process is called wait.
(ii) wait is stored in buffer like cache memory.
23. What is run level?
(i) Run level is nothing but to put the system in different levels to perform different maintenance
modes.
(ii) There are 7 run levels. Those are 0, 1, 2, 3, 4, 5 and 6.
(iii) The above levels are used to put the system in different stages to avail different services.
24. What is the default run level?
(i) When we boot the server the system automatically go to one particular run level. That run
level is called the default run level.
7. Job Automation
1. What is Job scheduling?
The process of creating jobs and making them occur on the system repeatedly, hourly, daily, weekly, monthly or yearly, is called job scheduling. In Linux and other Unix systems this process is handled by the cron service or daemon called crond, and atd is the at-jobs daemon which can be used to schedule one-time tasks (also called jobs).
2. What is the importance of the job scheduling?
The importance of job scheduling is that critical tasks like backups, which the client usually wants to be taken at night, can easily be performed without the intervention of the administrator by scheduling a cron job. If the cron job is scheduled carefully then the backup will be taken at whatever time the client wants and there will be no need for the administrator to remain back at night to take the backup.
3. What are the differences between cron and at jobs?
cron job :
(i) cron jobs are scheduled to run automatically at a particular time, day of the week, week of the month and month of the year.
(ii) The job may be a file or a file system.
(iii) We do not get a report as a log file if the job failed to execute, ie., when it failed and where it failed, and failed jobs are not re-executed automatically.
at job :
(i) at jobs are executed only once.
(ii) Here also we do not get a report if the job fails and failed jobs are not re-executed automatically.
4. What are the important files related to cron and at jobs?
/etc/crontab -----> is the file which stores all the scheduled jobs.
/etc/cron.deny -----> is the file used to restrict the users from using cron jobs.
/etc/cron.allow -----> is used to allow only users whose names are mentioned in this file to use cron jobs
and this file does not exist by
default.
/etc/at.deny ----->same as cron.deny for restricting the users to use at jobs.
/etc/at.allow -----> same as cron.allow for allowing users to use at jobs.
5. What is the format of the cron job?
# crontab -e (to open the crontab editor to create or remove cron jobs)
<minutes><hours><day of the month><month of the year><day of the week><job or script>
(0 - 59) (0 - 23) (1 - 31) (1 - 12 or jan, feb, ...) (0 - 6 or sun, mon, ...)
Options          Explanation
*                Is treated as a wild card, meaning any possible value.
*/5              Is treated as every 5 minutes, hours, days or months. Replacing the 5 with any numerical value will change this option.
2, 4, 6          Treated as an OR, so if placed in the hours field this could mean at 2, 4 or 6 o'clock.
9-17             Stands for any value between 9 and 17. So if placed in day of the month this would be days 9 through 17, or if put in hours it would be between 9 AM and 5 PM.
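A couple of worked examples of the above format (the script paths are only illustrations):
30 2 * * * /root/backup.sh (runs the backup script every day at 2:30 AM)
0 */4 * * 1-5 /root/sync.sh (runs the sync script every 4 hours, Monday to Friday)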
For allow : (ii) Put the entries of the user names whom we want to allow the cron jobs (in /etc/cron.allow).
For deny : (ii) Put the entries of the user names whom we want to deny the cron jobs (in /etc/cron.deny).
(iv) Then open the crontab editor by the # crontab -eu <user name> command and put the entries as below,
<minutes><hours><day of the month><month of the year><day of the week><script name with path>
(v) Save and exit from the crontab editor.
13. How to add at job and delete the at job?
Adding :
(i) Before scheduling, open a file with vim, enter the job commands in that file and save it as xxxx.sh (some name, the extension must be .sh).
(ii) # at <time> (to enter the at job editor)
(iii) Enter the above saved file name (with its path) inside the at job editor.
(iv) Press Ctrl + d to exit from the editor.
(v) Then the system will assign a job id to that job. We can see the list of at jobs by the # atq command.
Delete :
(i) See the job id which job we want to delete by # atq command and note that job id.
(ii) Then delete that job by # at -r <job id> command.
14. How to know currently scheduled at jobs?
# atq (to see the currently scheduled at jobs)
15. How to allow or deny at jobs for a user?
For allow :
(i) Open the /etc/at.allow file.
(ii) Put the entries of the user names whom we want to allow the at jobs.
For deny :
(i) Open the /etc/at.deny file.
(ii) Put the entries of the user names whom we want to deny the at jobs.
# systemctl status crond (to see the status of the crond daemon in RHEL - 7)
# systemctl stop crond (to stop the crond daemon in RHEL - 7)
# systemctl start crond (to start the crond daemon in RHEL - 7)
# at -l (to see the list of at jobs)
# atq (to see the jobs in the queue)
# atrm <job id> (to remove the specified at job)
# at <time> (to set the at job to be executed at the specified time)
# at 9:30 (to set the at job to be executed at 9:30 AM)
Example : # at 9:30
at> useradd gopal
at> groupadd manager
at> rm -rf /opt
at> tar -cvf /root/etc.tar /etc/*
press Ctrl + d to save and exit from at job
# at -r <job id> (to remove the specified job)
* at jobs can be performed only one time. It cannot repeat daily.
* at jobs once scheduled, we cannot edit the jobs or modify the time of the job.
# at now + 5 minutes (to execute the at job 5 minutes from now)
at> touch f1 f2 f3
at> mkdir /ram
at> <EOT> or Ctrl + d (to save and exit from the at job editor)
# tailf /var/log/cron (to follow the at or cron log file contents)
# at Jan 20 2015 (to schedule the at job on 20th Jan 2015)
# at 5PM Jan 13 2015 (to schedule the at job on 13th Jan 2015 at 5 PM)
# at noon + 4 days (to schedule the at job at noon, 4 days from now)
# at midnight (to schedule the at job today at midnight)
# at midnight + 4 days (to schedule the at job at midnight, 4 days from now)
# vim /etc/at.deny (to deny the at jobs for specified users)
# vim /etc/at.allow (to allow the at jobs for specified users)
* If both /etc/at.deny and /etc/at.allow files are deleted, every user except the root user will be denied from executing at jobs.
* Once scheduled the cron jobs, we can modify, edit that job any no. of times.
# cat /etc/crontab (to see the cron jobs list)
# crontab -lu <user name> (to list all the cron jobs of the specified user)
# crontab -eu <user name> (to create or edit the cron jobs)
# crontab -ru <user name> (to erase or remove the specified user's cron jobs)
# crontab -r (to remove the current user's cron jobs)
# vim /etc/cron.deny (to deny the cron jobs for specified users)
# vim /etc/cron.allow (to allow the cron jobs for specified users)
* If both files are removed or deleted, every user except the root user is denied from executing cron jobs.
# crontab -eu raju
55 14 20 1 2 /usr/sbin/useradd gopal; /usr/sbin/groupadd team
(save & exit this crontab)
* This job executes the useradd and groupadd commands at 14:55 on 20th January (and, because the day-of-week field is also set, on every Tuesday in January).
Examples for crontab :
telnet :
(b) Data will be transferred in non-encrypted format.
(c) We cannot trust this telnet connection.
(d) We cannot give the trusting in telnet.
(e) By snooping or sniffing technologies anyone can see the data like the system name or hostname, login name, password and other data. So, there is no security.
(f) # telnet <IP address of the remote system> (provide login name and password)
ssh :
(b) Data will be transferred in encrypted format.
(c) We can trust this ssh connection.
(d) We can give the trusting in ssh.
(e) By snooping or sniffing technologies nobody can see the data like the system name or hostname, login name, password and other data. So, there is security.
(f) # ssh <IP address of the remote system> (provide login name and password)
5. In how many ways we can connect the remote host through ssh?
Through ssh we can connect the remote host by two methods.
(i) Command Line Interface (CLI).
Example : # ssh <IP address of the remote system> (provide login name and password)
(ii) Graphical User Interface (GUI).
Example : open a VNC viewer window and provide the remote hostname, login name and password.
6. What are the requirements for ssh?
(i) Remote systems IP address.
(ii) Remote systems user name and password
(iii) A proper network ie., our local and remote systems should be in the same network.
(iv) Open ssh package to configure the ssh.
7. In how many ways we can connect the remote system?
(i) telnet (ii) ssh
(iii) rlogin (iv) rcp
(v) ftp (vi) scp
(vii) sftp (viii) tftp
8. What is the syntax for ssh?
# ssh <IP address of the remote system> -l <user name>
# ssh <user name>@<IP address of the remote system>
# ssh <user name>@<remote hostname with fully qualified domain name>
* After executing any of the above commands, it may ask for the user name and password. Type them to connect to the remote system.
9. How to configure the ssh with keybased authentication or explain the ssh trusting?
(i) SSH keybased authentication is used to access the remote system without asking any
passwords.
(ii) For that, first we have to generate the public and private keys by executing # ssh-keygen
command on our system. Then the public and private keys are generated in /home/<user name>/.ssh
location. ie., .ssh directory in users home directory. And the keys are id_rsa (private key) and id_rsa.pub
(public key).
(iii) Then copy the public key id_rsa.pub on the remote system by executing the below command.
# ssh-copy-id -i <user name>@<IP address of the remote system>
(iv) Go to remote system and check whether the above key is copied or not by # cat
/home/<user name>/.ssh/authorized_keys file. And the private key should be in our system.
(v) Whenever we try to establish a connection, the public key on the remote system is matched against the private key on our system; if they do not match, no connection is established.
(vi) If both public and private keys match then the connection will be established; only the first time will it ask for the password. Once the trust is established, from the next time onwards it won't ask any passwords.
# ssh <user name>@<remote hostname or IP address> (only the first time will it ask for the password)
(vii) The authentication is done through the public and private keys, so this type of authentication is
called keybased authentication.
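The whole key-based setup condensed into a short sketch (the user name raju and the address 192.168.1.10 are only placeholders):
# ssh-keygen (generate id_rsa and id_rsa.pub in ~/.ssh, press Enter to accept the defaults)
# ssh-copy-id -i [email protected] (copy the public key into the remote ~/.ssh/authorized_keys; asks the password once)
# ssh [email protected] (from now on the login works without a password prompt)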
10. How to prevent the remote login root user or how to configure the ssh to prevent the remote login
for root?
(i) The location of the ssh configuration file is /etc/ssh/sshd_config
(ii) Open the configuration file by # vim /etc/ssh/sshd_config
-----> go to line no. 42 (in RHEL - 6) or line no. 48 (in RHEL - 7) where " PermitRootLogin yes " is written, uncomment that line, type " no " in place of " yes ", then save and exit this file.
(iii) Then restart or reload the sshd daemon by
# service sshd restart (to restart the sshd deamon or service
in RHEL - 6)
# systemctl restart sshd (to restart the sshd deamon or service in
RHEL - 7)
# chkconfig sshd on (to enable the sshd deamon at next
reboot in RHEL - 6)
# systemctl enable sshd (to enable the sshd deamon at next reboot in
RHEL - 7)
# service sshd reload (to reload the sshd deamon in RHEL - 6)
rsync is a very good program for backing up or mirroring a directory tree of files from one machine to another machine and for keeping the two machines " in sync ". It is designed to speed up file transfer by copying only the differences between two files rather than copying an entire file every time.
If rsync is combined with ssh, it makes a great utility to sync the data securely; otherwise, by sniffing, anyone can see our data, ie., there is no security for our data.
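A small example of rsync over ssh (the paths and the host are assumptions for illustration only):
# rsync -avz -e ssh /data/ [email protected]:/backup/data/ (copies only the changed files from /data to the remote /backup/data, compressed with -z and carried over the encrypted ssh channel)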
21. A system is able to ping locally but not outside. Why?
(i) Maybe there is no access to outside.
(ii) Maybe the outside system is in a different network from the local one.
(iii) Maybe permission is denied for that system to access outside.
(iv) If there is access, the router or modem or network switch or NIC may not be working to reach outside.
(v) Maybe the outside system is not available temporarily.
22. A system is echoing the ping, but not able to login via telnet. Why?
(i) Check telnet service is started or not. If not started, start the telnet service.
(ii) May be telnet service is disabled, if so, enable the telnet service.
(iii) May be telnet port is blocked, if so, release that port no.
(iv) May be telnet permission is denied, if so, change the permissions to allow the telnet service.
(v) Check all the files whether the telnet service is blocked or not, if blocked remove those entries.
23. How will you login or start the system in what mode if you don't know the root password?
(i) If the user has sudo permissions, then login as the sudo user.
(ii) If there are no sudo permissions then boot with CDROM in single user mode and start the system. Then assign a password to the root user if there is no root password.
(iii) If even that is not possible then finally break (reset) the root password.
Other useful commands :
# telnet <IP address or hostname> (to connect the specified remote system through
telnet)
# ssh <IP address or hostname> (to connect the specified remote system through
ssh)
Username : xxxxxx
Password : xxxxxxx
# ssh <IP address> -l <user name> (to connect the remote system using user name)
Password : xxxxxxx
# ssh 192.168.1.1 -l root (to connect this remote system as root
user)
# ssh [email protected] (to connect to this remote system as the root user)
# ssh [email protected] (to connect to the server1 system in the example.com domain)
# w (to see all the users who are logged in to our system)
# w -f (to see all the users who are logged in to our system with other details)
# ssh <IP address> (if we do not specify the user name, it will ask for the current user's password and look for the same account on the remote system)
# cat /root/.ssh/known_hosts (to see the ssh trusting remote hosts finger print
information)
# ssh root@<remote IP> <command> (to run a command on the remote host without logging in to that system)
# ssh root@<remote IP> -X (to run GUI commands on the remote system, because by default ssh is configured as a command line interface; X is capital)
# lastb (to see the login failed tries)
# last -x |grep shutdown (to see the date & time of the system's last
shutdown)
9. Memory Management (Swap)
1. What is swap?
Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved from RAM to swap space. It helps machines which have a small amount of RAM, but it should not be considered a replacement for more RAM. Swap is located on the hard disks, which have a slower access time than physical memory.
2. What is the recommended swap space?
Generally the recommended swap space is double the RAM size, but the following table shows the actual recommended amounts.
Apart from the recommendations below, a basic rule can be applied to create the swap partition (a worked example follows the table).
* If the RAM size is less than or equal to 2 GB, then the size of the swap = 2 X RAM size.
* If the RAM size is more than 2 GB, then the size of the swap = 2 GB + RAM size.
Amount of RAM in the System          Recommended Amount of Swap Space
4 GB or less                         Min. 2 GB
4 GB - 16 GB                         Min. 4 GB
16 GB - 64 GB                        Min. 8 GB
64 GB - 256 GB                       Min. 16 GB
256 GB - 512 GB                      Min. 32 GB
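A quick worked example of the rule above: a system with 1 GB of RAM would get a 2 GB swap partition (2 X RAM), while a system with 8 GB of RAM would get about 10 GB (2 GB + RAM). The table values are the minimum recommendations and can be used instead where disk space is limited.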
# rpm -qip <package full name> (to display the package information before
installation)
# which <command name> (to display the location of that command)
# rpm -qf <location of the command> (to check the package name for that command)
# rpm -V <package name> (to verify that package, ie., whether the package is 100% intact; if any files are missing from that package, they are displayed as a list)
# rpm -ivh <package name> --replacepkgs (to replace the missed files in that package)
# rpm -qp --changelog <package name> (displays all the change logs, like the last time the package was changed, when it was installed, .....etc.,)
# rpm -qp --scripts <package name> (to see the package installation scripts)
# rpm -K <package full name> (to see the package key)
# rpm -Uvh <package name> (to update the package)
* Update overwrites the old version of the package. If there are any problems with the new package, we cannot easily roll back. So, the better option is to install that package as a fresh one (not the update option).
* Update first checks whether the package is available on the system or not. If it is available, it will update that package; otherwise it will install it as a fresh package.
# rpm -qRp <package name> (to check the dependency packages of that package
before install)
# rpm -ivh <package name> --nodeps (to install the package without dependent
packages)
8. What is yum and explain the yum?
yum stands for yellow dog updater modified. yum is a package management application for
computers running on Linux O/S.yum is a standard method of managing the installation and removal of
software. It is from RHEL - 5 onwards. Packages are downloaded from collections called repositories, which may
be online, on a network and or on installation media. yum is a front end tool for rpm. It is used to resolve the
dependency which cannot be done by rpm. The yum command has access the repository where the packages
are available and can install, update/upgrade, remove and query the packages automatically.
9. What are the important files that are related to yum?
/etc/yum.conf -----> is the yum configuration file.
/etc/yum.repos.d -----> is the directory which contains the yum repository configuration file.
/etc/yum.repos.d/xxxxx.repo ------> is the yum repository configuration file.
/var/lib/yum -----> is the directory which contains the yum databases.
/var/log/yum.log -----> is the file which stores the yum log messages.
10. How to setup the yum server?
(i) Insert the RHEL DVD, go to that directory and install the vsftpd package by # rpm -ivh vsftpd*
(ii) Go to the /var/ftp/pub directory and create the rhel6 directory by # mkdir rhel6
(iii) Go to the DVD mounted directory and copy all the DVD contents into the /var/ftp/pub/rhel6 directory by
# cp -rvpf /media/DVD/* /var/ftp/pub/rhel6
(iv) Restart the vsftpd service by # service vsftpd restart command.
(v) Then enable the vsftpd service by # chkconfig vsftpd on command.
(vi) Goto /etc/yum.repos.d directory and create one yum repository file by # vim linux.repo
command.
(vii) In the above file the contents are as below,
[linux] (linux is the repo id)
name=yum repo server (yum server name)
baseurl=file:///var/ftp/pub/rhel6 or baseurl=ftp://<IP address of the system>/pub/rhel6
gpgcheck=0 (0 means it will not ask for the signature keys of the packages while installing; if it is 1, then it will ask for the signature keys while installing the packages)
enabled=1 (if multiple repositories are there, then enable only this one)
(save and exit this file)
(viii) # yum clean all (to clean the old cache and pick up the new repository)
(ix) # yum repolist (it displays the no. of packages in that repository)
11. How to setup the yum client?
(i) Goto /etc/yum.repos.d directory and create the repository file by # vim linux.repo
(ii) Type the entries as below,
[linux] (Linux repo id)
name=yum repo client (yum repo client)
baseurl=ftp://<IP address of the server>/pub/rhel6 or http://<IP address of the server>/pub/rhel6
gpgcheck=0 (0 means it will not ask for the signature keys of the packages while installing; if it is 1, then it will ask for the signature keys while installing the packages)
enabled=1 (if multiple repositories are there, then enable only this one)
(save and exit)
(iii) # yum clean all (to clean the old cache and pick up the new repository)
(iv) # yum repolist (it displays the no. of packages in that repository)
12. How to configure the yum repository to deny some packages to be installed?
(i) To configure the yum tool the yum configuration file is /etc/yum.conf
(ii) To deny some packages, open the yum configuration file by # vim /etc/yum.conf
command.
(iii) Go to the last line and type, exclude=kernel* ftp* then save and exit this file.
(iv) Then the kernel* and ftp* packages will be denied when we try to install those packages.
13. How to change the yum repository default location?
(i) Open yum configuration file by # vim /etc/yum.conf command.
(ii) Go to the last line and type, reposdir=<yum repository new location with full path> then save and exit this file.
(iii) Then the yum repository location will be changed from the old one to the new one.
14. How to change the yum log file default location?
(i) Open the yum configuration file by # vim /etc/yum.conf command.
(ii) Go to the last line and type, logfile=<yum log file new location with full path> then save and exit this file.
(iii) Then the default log location is changed from the /var/log/yum.log file to the new location.
15. How to configure the yum to restrict the architecture (64 bit or 32 bit) etc.,?
(i) Open the yum configuration file by # vim /etc/yum.conf command.
(ii) Go to the last line and type, exactarch=1 then save and exit this file.
(iii) 1 means yum will only consider packages that exactly match the installed architecture (for example only 64 bit packages on a 64 bit system); if it is 0 then compatible architectures (for example 32 bit packages) may also be installed.
(iv) Open the yum configuration file by the # vim /etc/yum.conf command.
(v) Go to the last line and type, cachedir=<download new location> then save and exit this file.
(vi) Then whenever we install packages, the download location will be the new location.
(vii) Open the yum configuration file by the # vim /etc/yum.conf command.
(viii) Go to the last line and type, assumeyes=1 then save and exit this file.
(ix) Whenever we install any package using yum there is no need to mention the -y option if assumeyes=1; if assumeyes=0 then we have to mention the -y option when we install the package.
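Putting the options from the last few questions together, a sample /etc/yum.conf [main] section might look like the sketch below (all values are illustrative assumptions, not required settings):
[main]
cachedir=/var/cache/yum
logfile=/var/log/yum.log
exactarch=1
assumeyes=1
gpgcheck=0
exclude=kernel* ftp*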
16. What is O/S patch and how to add those patches on production servers or how to upgrade the
kernel?
(i) An O/S patch is nothing but an update to the kernel (or other core packages). Normally an O/S patch is software that contains some programs to fix bugs in the O/S ie., in the kernel.
(ii) If our server is registered and configured in the RedHat network, then we will get the information about the updated kernel and can download that kernel update.
(iii) Every O/S patch is supplied with a document about the pre-requisites to apply that patch.
(iv) Check the pre-requisites, space requirements and others. If all are ok,
(v) then we take the business approval and make CRQ's (Change Requests).
(vi) Then the project manager will initiate the mail thread ie., sending the mail or messages to the various teams who are dealing with that server.
(vii) We get the response from the different teams which are involved in this process.
(a) For example Monitoring team to ignore alerts from that server if the system hangs or
rebooted.
(b) DBA team if database stopped or crashed or system failed.
(c) Application team if the application effects while patching.
(viii) If the server is in a cluster, then move the service group and resources to another system manually (called a switch over).
(ix) Inform the application team to stop the application and the database team to stop the database.
(x) If the server is in a cluster there is no need of a reboot (no down time); otherwise down time is needed to reboot.
(xi) Check whether the root disk is in a normal file system or VxVM.
(xii) If a mirror disk is there, split the mirror disk from the original disk, boot in single user mode and add the patch by the # rpm -ivh <patch name> command.
(xiii) Then reboot the system and do not attach the mirror disk, to avoid any unexpected situations or problems, and put that server under test for up to 1 week or 10 days depending on the company's policy.
(xiv) After the test period, if no problems are raised, then attach the system in live mode and also attach the mirror disk to sync the data and update the system.
(xv) Then we inform the Application, Database, Monitoring and other teams who are dealing with that server to test the application, database, monitoring and others and check the status.
(xvi) Then finally close the issue or CRQ.
17. After installation of a package or patch, if the package or patch is removed then what will happen?
(i) If a kernel patch is removed, then the system may hang; for others there is no effect.
(ii) If a package is removed then the application that belongs to that removed package will be affected.
18. After applying the patch need to reboot the system or not?
(i) If the patch is a kernel patch or a clustered patch then only is a system reboot required.
(ii) If the patch is a normal patch then no reboot is required.
19. If the package is not installing. How to troubleshoot?
(i) Check the package pre-requisites to install the package.
(ii) If the pre-requisites are not matched with our system, then the package will not be installed, ie., check the O/S compatibility for that package.
(iii) If there is no sufficient space in the system, the package will not be installed.
(iv) If the package is not properly downloaded, then the package will not be installed.
20. If the patch is not applied successfully what will you do?
(i) Check whether the patch is installed properly or not by # rpm -qa <patch name>
command.
(ii) Check the /var/log/yum.log file to verify or see why the patch is not successfully installed.
(iii) If it is possible to resolve those issues, resolve them and remove that patch with the # rpm -e <patch name> command.
(iv) If any reboots required to effect, then reboot the system.
(v) Again add that patch by # rpm -ivh <patch name> command.
(vi) Then check the patch by # rpm -qa <patch name> command
Other useful yum commands :
# yum repoinfo (to list all the information on all the repositories)
# yum repoinfo <repo id> (to list all the information on specified
repository)
# yum install <package name> -y (to download and install the package and y
means yes)
# yum install <package name> --downloadonly (to download the package only, without installing it)
# yum erase or remove <package name> -y (to remove or uninstall the package and y
means yes)
# yum list installed (to display the list of all installed
packages)
# yum list available (to list all the available packages to be installed)
# yum list all | less (to list all the installed and not installed
packages)
# yum search <package name> (to search a particular package is available or
not)
# yum info <package name> (to display the information on that package)
# yum update <package name> (if an updated version of the specified package is available, then update that package)
# yum update all (to update all the packages, nothing but the whole system will be updated)
# yum downgrade <package name> (to revert back ie., go back to the previous version of that package if the new version is not working properly)
# yum history (to display the yum history)
# yum history info < id > (to display the information of that history id)
# yum history undo < id > (to undo that history transaction)
# yum history redo < id > (to redo the above undone transaction)
# yum grouplist (to display the list of group packages)
# yum groupinstall <package name> (to install the group package)
# yum install @<group package name> (to install the group package in another way)
# yum groupinfo <group package name> (to display the group package information)
# yum grouplist hidden (to list all the group packages names including
installed or not installed and
hidden group packages)
# yum-config-manager --disable <repo id> (to disable the yum repository, so that we cannot install any package from that repository)
# yum clean all (to clear the cache; after disabling the repository id we have to clean the cache, then only is the repository really disabled)
# yumdownloader <package name> (to download the package from the repository,
and the downloaded location is the present
working directory)
# man yum.conf (to see the manual pages on yum configuration
file)
# yum-config-manager --add-repo=http://content.example.com/rhel7.0/x86_64/dvd (then the
yum repository will be created automatically with .repo file also. And this works
only in RHEL - 7)
# subscription-manager register --username=<user name> --password=<password> (to register
our product with RHN--Redhat Network. The user name and passwords will be provided by the Redhat
when we purchase the software)
# subscription-manager unregister --username=<user name> --password=<password> (to
unregister our product with RHN--Redhat Network. The user name and passwords will be provided by the
Redhat when we purchase the software)
Examples:
# tar -cvf /root/etc.tar /etc/* (to copy all the files and directories from /etc, make a single archive and place it in the /root/etc.tar file)
# tar -tvf /root/etc.tar (to long-list the contents of the /root/etc.tar file)
# tar -xvf /root/etc.tar -C /root1/ (to extract and copy the files into the /root1/ location)
# tar -xf /root/etc.tar (to extract the contents of the tar file into the current directory)
# tar -f /root/etc.tar --update or -u <file name or directory> (to add new contents to the existing tar file)
# tar -f /root/etc.tar --delete <file name or directory> (to delete the file from the tar)
# tar -uf /root/etc.tar /var (to add the /var contents into the /root/etc.tar file)
# tar -cvf mytar.tar / --xattrs (to archive the contents along with SELinux and ACL
permissions)
# du -h /root/etc.tar (to see the size of the tar compressed file)
5. What are the compressing & uncompressing tools available for tar and explain them?
Compressing Tools Uncompressing Tools
# gzip (.gz) # gunzip
# bzip2 (.bz2) # bunzip2
# xz (RHEL - 7) # unxz
# gzip <tar file name> (to compress the tar file; the output file is .tar.gz)
# gunzip <.gz compressed file name> (to uncompress the compressed tar file; the output is .tar only)
# bzip2 <tar file name> (to compress the tar file; the output is .tar.bz2)
# bunzip2 <.bz2 compressed file name> (to uncompress the compressed file; the output is .tar only)
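As a side note, tar itself can call these tools in one step; a small sketch (the file names are illustrative):
# tar -czvf etc.tar.gz /etc (create and gzip-compress in one command, output is .tar.gz)
# tar -cjvf etc.tar.bz2 /etc (create and bzip2-compress, output is .tar.bz2)
# tar -xzvf etc.tar.gz -C /tmp (uncompress and extract into /tmp)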
6. What is scp, rsyncand how to use it?
scp means secure copy. ie., ssh + cp = scp which is used to copy the files/directories into remote
system.
scp will copy files/directories to the remote system blindly, ie., if the file already exists, it will overwrite that file. So, scp takes more time to copy when compared to the # rsync tool.
# scp <file name> <user name>@<IP address of the remote system>:<location to be copied>
# scp anaconda* root@<remote IP>:/root (to copy the anaconda file into /root of the remote system)
# scp -r /etc/ root@<remote IP>:/raju (to copy the /etc/ directory into /raju of the remote system)
# scp -rv /raju root@<remote IP>:/root (to copy the /raju directory into /root of the remote system)
# scp -r root@<remote IP>:/etc /home (to copy /etc of the remote system into /home of the local system)
rsync is also used to copy files/directories into remote systems. rsync tool will compare the new files
or directories and copy only the changed or modified contents of the files into remote system. So, it takes less
time to copy when compared to # scp tool.
# rsync -av root@<remote IP>:/etc /home (to copy only the changed contents of the /etc directory into /home)
rsync options are, -a -----> archive (copy the files with all permissions except SELinux and ACL permissions)
-aA -----> also synchronize ACL permissions
-aAX ----> also synchronize ACL and SELinux (extended attribute) permissions
7. What is cpio and how to take a backup and restore using cpio?
cpio means copy in and out. It supports any size of file system. It skips the bad blocks also.
Syntax of cpio with full options :
# ls <source> | cpio <options> > <destination file name> (to take a backup of the source directory and store the backup in the destination file)
The options are, -t -----> to list the cpio contents
-i -----> to restore the cpio backup
-v -----> to display on the screen ie., verbose
-o -----> to take a backup
Examples :
# ls | cpio -ov > /opt/root.cpio (to take a backup of the current directory and store it in /opt)
# cpio -iv < /opt/root.cpio (to restore the backup)
# ls /etc | cpio -ov > /opt/etc.cpio (to take a backup of the /etc directory and store it in /opt)
# cd /etc (go to that /etc directory)
# rm -rf * (to remove all the contents from /etc)
# cpio -iv < /opt/etc.cpio (to restore the /etc contents from the cpio
backup)
8. What is dd and how to take a backup and restore using dd?
dd is commonly used for disk to disk backup. Using the dd command we can take a backup of the data from one disk to another disk. It copies the data byte by byte. It can take a backup of the disk including bad blocks.
# dd if=<disk 1> of=<disk 2> (to take a backup from disk 1 and store it in disk 2)
# dd if=/dev/zero of=/root/raju bs=1M count=2048 (to create an empty file with 2 GB size)
# dd if=/dev/sda of=/root/mbr.bak bs=512 count=1 (to take the backup of the /dev/sda Master Boot Record)
# dd if=/root/mbr.bak of=/dev/sdb (to restore the MBR from the backup to the second disk /dev/sdb)
# dd if=/dev/sda1 of=/dev/sdb1 (to take a backup of the entire /dev/sda1 disk partition)
# dd if=/dev/sdb1 of=/dev/sda1 (to restore the /dev/sda1 contents from the above backup)
# dd if=/dev/sda of=/dev/sdb (to take a backup of the entire /dev/sda disk into /dev/sdb)
# dd if=/dev/cdrom of=/root/rhel6.iso (to create an ISO image file of the CD/DVD)
9. What is dump and how to take a backup and restore using dump and restore?
dump is a command used to take a backup of file systems only. It cannot take a backup of individual files and directories, and it cannot do a disk to disk backup. It is not recommended to take a backup of mounted file systems; so unmounting the file system and then taking the backup is recommended. By default dump is not available in the system, so first install the dump package and then execute the dump commands.
# yum install dump* -y (to install the dump package)
The syntax for dump :
# dump <options><destination file name><source file name>(to take a backup of the file systems)
The options are, -0----->full backup
-(1 - 9) -----> incremental backups
-u -----> update the /etc/dumpdates file after successful dump
-v -----> verbose
-f ----->make the backup in a file
-e -----> exclude inode number while backing up
# dump -0uvf /opt/full.dump /coss (to take a full backup of the /coss file system and copy it into /opt)
# dump -1uvf /opt/full.dump /coss (to take a backup of the files modified since the last full backup, nothing but an incremental backup)
# dump -2uvf /opt/full.dump /coss (to take a backup of the files modified since the last level-1 incremental backup)
The syntax for restore :
# restore <options><dump backup file> (to restore the backup contents if that data is lost)
The options are, -f -----> used to specify the dump or backup file
-C -----> used to compare the dump file with original file
-v -----> verbose
-e -----> exclude the inode number
-i -----> restore in interactive mode
The commands in interactive mode are,
restore> ls -----> list the files and directories in the backup file
restore> add ----> add the files from dump file to current working directory
restore> cd -----> change the directory
restore> pwd ---> displays the present working directory
restore> extract ----> extract the files from the dump file
restore> quit ---> to quit from the interactive mode
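A short sketch of an interactive restore session using the commands above (the dump file is from the earlier examples and the file name passwd is only an example):
# cd /coss (restore into the original file system)
# restore -if /opt/full.dump (open the dump file in interactive mode)
restore> ls (browse the contents of the backup)
restore> add passwd (mark a file for extraction)
restore> extract (write the marked files into the current directory)
restore> quit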
RHEL - 6 :
(c) The # service command is used to start or stop the services temporarily and # chkconfig is used to start or stop the services at the next boot.
(d) It takes more time to start the system and services.
(e) It starts the services one by one.
RHEL - 7 :
(c) # systemctl is the command to start or stop the services temporarily or at the next boot.
(d) It takes less time to start the system and services when compared to RHEL - 6.
(e) It starts the services in parallel, not one by one.
# top (It shows a dynamic real-time view of the running system, ie., a summary of the processes or threads currently managed by the Linux kernel)
# kill (It sends the specified signal to the specified process or process group)
# pkill (It sends the specified signal to each matching process instead of listing them on standard output)
# pstree (to show all the running processes as a tree structure; the tree is rooted at either the given pid or init)
# nice (to run a program with a modified scheduling priority ie., it runs the process with an adjustable niceness)
# renice (to alter the scheduling priority of one or more running processes)
# pgrep (to list the process id's which match the pgrep argument)
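A couple of usage sketches for the priority and lookup commands above (the niceness values and the process name sshd are examples):
# nice -n 10 tar -cvf /tmp/etc.tar /etc (start a job with a lower priority, niceness 10)
# renice -n 5 -p 1291 (change the niceness of the running process with PID 1291)
# pgrep -l sshd (list the PIDs and names of the processes matching sshd)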
RHEL - 6 commands :
# service <service name> status (to check the status of the service)
# service <service name> start (to start the service)
# service <service name> stop (to stop the service)
# service <service name> reload (to reload the service)
# service <service name> restart (to restart the service)
* These above commands will change the service statuses temporarily. So if we want to change
statuses of the
process automatically from next boot onwards we have to enable those services as given below.
# chkconfig --list (to check the availability of the services in
different run levels)
# chkconfig --list <service name> (to check the availability of the service in
different run levels)
# chkconfig <service name> on (to make the service available after restart)
# chkconfig <service name> off (to make the service unavailable after next boot)
# chkconfig --level <1-6><service name><on/off> (to make the service available or unavailable on
the
particular run level)
# chkconfig --level 5 vsftpd on (to make the vsftpd service available on run level 5)
# chkconfig --level 345 vsftpd on (to make the vsftpd service available on run levels 3, 4
and 5)
RHEL - 7 commands :
# systemctl status <service name> (to check the status of the service)
# systemctl start <service name> (to start the service)
# systemctl stop <service name>(to stop the service)
# systemctl reload <service name> (to reload the service)
# systemctl restart <service name> (to restart the service)
* These above commands will change the service statuses temporarily. So if we want to change
statuses of the
process automatically from next boot onwards we have to enable those services as given below.
# systemctl enable <service name> (to make the service available at next boot)
# systemctl disable <service name> (to make the service unavailable at next boot)
# grep <string name><file name> (to display the specified string in that file)
# grep -n <string name><file name> (to display the string with line no's)
# grep -e <string name 1> -e <string name 2> <file name> (to display 2 or multiple strings in that file)
# grep -o <string name> <file name> (to display only that string in that file, not the whole text of that file)
# grep -v <string name> <file name> (to display all the lines except those containing the specified string)
# grep ^this coss (to display the lines which start with the specified string, here " this ", in the file coss)
Automatic processes are not connected to a terminal and are queued into a spooler area where they wait to be executed on a FIFO (First In - First Out) basis. Such tasks can be executed using one of two criteria.
At a certain date and time : done using the " at " command.
When the total system load is low enough to accept extra jobs : done using the " batch " command. By default such tasks are put in a queue where they wait to be executed until the system load is lower than 0.8. Cron job processing is also used for optimizing system performance.
3. What is parent process?
The process which starts or creates another process is called the parent process. Every process will have a parent process except the init process. The init process is the parent process of all the remaining processes in the Linux system because it is the first process which gets started by the kernel at the time of booting and its PID is 1. Only after the init process gets started are the remaining processes called by it, and hence it is responsible for all the remaining processes in the system. The parent process is identified by the PPID (parent process ID).
4. What is child process?
A process which is started or created by a parent process is called a child process and it is identified by its PID.
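A quick way to see the parent/child relation for the current shell (a sketch; $$ expands to the shell's own PID):
# ps -o pid,ppid,comm -p $$ (shows the shell's PID, its parent's PID and the command name)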
Useful # ps commands :
# ps -a (it displays all the terminals' processes information)
# ps -au (it displays all the terminals' processes information with user names)
# ps -aux (it displays all the terminals' processes information including background processes with user names)
* ? (question mark), if it appears in the tty column, indicates that it is a background process.
# ps -ef (it displays the total processes information with parent process ID (PPID))
# ps -p <process id> (it displays the process name if we know the process ID (PID))
# pidof <process name> (to see the process ID of the specified process)
# pidof init (to see the process ID of the init process)
# pstree (to display the parent and child processes structure in tree format)
# ps -u <user name> (to display all the processes of the specified user)
# ps -u raju (to display all the processes of the user raju)
# ps -G <group name> (to display all the processes that are running by a particular group)
# ps -o pid,comm,%mem,%cpu (to display process ID, command, %memory and %cpu utilization, nothing but filtering the output)
# ps -Ao pid,comm,%mem,%cpu (to display the same information as above but for all processes in the system)
# ps -Ao pid,comm,%mem,%cpu | sort -k <no.> -r | head -n 10 (to display which processes are utilizing more memory or CPU in reverse order, where -k means field, <no.> means field no. and -r means reverse order)
# ps -Ao pid,comm,%mem,%cpu | sort -k 3 -r | head -n 10 (to display the top 10 processes by memory utilization in reverse order)
# ps -aux | grep firefox (to check whether firefox is running or not)
# pgrep -U <user name> (to display all the process IDs only for that user)
* To communicate with the processes, the # kill and # pkill commands are used.
# kill -----> It will kill the processes using PIDs.
# pkill -----> It will kill the processes using process names.
* We can also pass signals while using the above commands, and we can get the signals information by the # kill -l command. This command lists all the signals with their numbers; there are 64 signals available.
5. What is signal in Process management?
Signals are a way of sending simple messages to processes. Most of these messages are already
defined and however signals can only be processed when the process is in user mode. Every signal has a unique
signal name. Each signal name is a macro which stands for a positive integer. Signals can be generated by the
process itself or they can be sent from one process to another. A variety of signals can be generated or
delivered and they have many uses for programmers.
6. What are the important signals in process management?
1. SIGHUP -----> to reload (re-read the configuration and load)
2. SIGINT -----> to interrupt from the keyboard (nothing but Ctrl + c)
3. SIGQUIT -----> to quit the process from the keyboard (nothing but Ctrl + \)
9. SIGKILL -----> to kill the process forcefully (it cannot be caught or blocked)
15. SIGTERM -----> wait for the process to complete and then terminate (terminate gracefully)
18. SIGCONT -----> to continue or resume the process if it is stopped
19. SIGSTOP -----> to stop (pause) the process unconditionally (it cannot be caught or ignored, unlike Ctrl + z)
20. SIGTSTP -----> to stop the process from the keyboard (nothing but Ctrl + z)
* But the most commonly used signals are 1, 9, 15 and 20.
* The default signal is 15 (SIGTERM, graceful termination) when we do not specify any signal.
# kill -<signal> <process ID> (to kill the specified process using the given signal)
# kill -9 1291 (to kill the process which has the PID 1291 forcefully)
* If we do not specify the signal no., then the default signal 15 will take effect.
# kill 1291 (to kill the process 1291 with the default signal)
# pkill -u <user name> (to kill all the processes of the specified user)
# pkill -u raju (to kill all the processes of the user raju)
# pkill -9 firefox (to kill the firefox process)
7. How many process states are there?
There are six process states and they are,
(i) Running process (the process which is in running state and is indicated by " R ").
(ii) Sleeping process (the process which is in interruptible sleep state and is indicated by " S ").
(iii) Waiting process (the process which is in uninterruptible sleep, usually waiting for I/O, and is indicated by " D ").
(iv) Stopped process (the process which is in stopped state and is indicated by " T ").
(v) Orphan process (the process whose parent process has terminated; it is re-parented to init/systemd).
(vi) Zombie process (the process which has terminated but whose exit status has not yet been read by its parent and is indicated by " Z ").
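These state codes can be seen directly in the STAT column of ps; a quick illustration:
# ps -eo pid,stat,comm | head (to list the PID, state code and command name of the first few processes; look for R, S, D, T or Z in the STAT column)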
8. What is Orphan process?
The processes which are running without parent processes are called Orphan processes. Sometimes the parent process terminates while its child processes are still running. Those child processes are called Orphan processes, and they are re-parented to init (systemd in RHEL - 7).
9. What is Zombie process?
When a child process finishes execution, it leaves an entry in the process table until the parent process reads its exit status. If the parent never collects this status, the terminated child remains in the process table. These terminated-but-not-reaped processes are called Zombie processes. They are also called defunct processes.
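A zombie can be spotted from the ps state column; a small sketch:
# ps -eo pid,ppid,stat,comm | awk '$3 ~ /^Z/' (to print only the processes whose state code starts with Z, i.e., zombie/defunct processes, along with their parent PIDs)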
10. How to set the priority for a process?
Processes priority means managing processor time. The processor or CPU will perform multiple tasks
at the same time. Sometimes we can have enough room to take on multiple projects and sometimes we can
only focus on one thing at a time. Other times something important pops up and we want to devote all of our
energy into solving that problem while putting less important tasks on the back burner.
In Linux we can set guidelines for the CPU to follow when it is looking at all the tasks it has to do.
These guidelines are called niceness or nice value. The Linux niceness scale goes from -20 to 19. The lower
the number the more priority that task gets. If the niceness value is higher number like 19 the task will be set
to the lowest priority and the CPU will process it whenever it gets a chance. The default nice value is 0 (zero).
By using this scale we can allocate our CPU resources more appropriately. Lower priority programs
that are not important can be set to a higher nice value, while the higher priority programs like deamons and
services can be set to receive more of the CPU's focus. We can even give a specific user a lower nice value for
all his/her processes so we can limit their ability to slow down the computer's core services.
There are two options to reduce or increase the priority of a process. We can do it using either the nice or the renice command.
Examples :
# nice -n <nice value from -20 to 19> <command> (to set a priority for a process before starting it)
# nice -n 5 cat > raju (to start the cat command with a medium priority)
# ps -elf (to check the nice value for that command)
* To reschedule the nice value of an existing process, first check the PID of that process by the # ps -elf command and then change the niceness of that process by the # renice <nice value (-20 to 19)> <PID> command.
# renice 10 1560 (to reschedule the niceness of the process with PID 1560)
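To confirm the change for a single process, the nice value can be read back with ps; a small sketch assuming the PID 1560 from the example above:
# ps -o pid,ni,comm -p 1560 (to display the PID, the current nice value (NI column) and the command name of process 1560)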
11. What is top command and what it shows?
top is a command to see the processes states and statuses information continuously until we quit by
pressing " q ". By default top command will refresh the data for every 3 seconds.
When we need to see the running processes on our Linux in real time, the top command will be very
useful. Besides the running processes the top command also displays other information like free memory both
physical and swap.
The first line shows the current time, "up 1 day" shows how long the system has been up, "3 users" shows how many users are logged in, and "load average : 0.01, 0.00, 0.23" shows the load average of the system over the last 1, 5 and 15 minutes.
The second line shows the no of processes and their current states.
The third line shows CPU utilization details like % of the users processes, % of the system processes,
% of available CPU and % of CPU waiting time for I/O (input and output).
The fourth and fifth lines shows the total physical memory in the system, used physical memory,
free physical memory, buffered physical memory, the total swap memory in the system, used swap memory,
free swap memory and cached swap memory, ... etc.,
From sixth line onwards the fields are as follows.
PID Process ID
USER Owner of the process ie., which user executed that process
PR Dynamic Priority
NI Nice value, also known as base value
VIRT Virtual size of the task includes the size of processes executable binary
RES The size of RAM currently consumed by the task, not including the swapped-out portion
SHR Shared memory area by two or more tasks
S Task Status
% CPU The % of CPU time dedicated to run the task and it is dynamically changed
% MEM The % of memory currently consumed by the task
TIME+ The total CPU time the task has been used since it started. + sign means it is displayed
with hundredth of a second granularity. By default, TIME/TIME+ does not account
the CPU time used by the task's dead children
COMMAND Showing program name or process name.
* While the top command is running, the following keys work and the display is updated in real time.
1 -----> Toggle the display of each individual CPU
h -----> Help
Enter -----> Refresh immediately
k -----> Kill a process (it asks for the PID)
M -----> Sort by memory usage
P -----> Sort by CPU usage
T -----> Sort by cumulative time
n -----> Limit the number of tasks displayed
u -----> Show only a particular user's processes
r -----> Reschedule the priority of a process (renice)
z -----> Color display
d -----> Change the delay (refresh) time
Shift + > -----> Move the sort field one column to the right
Shift + < -----> Move the sort field one column to the left
q -----> Quit
The sosreport command has a modular structure and allows the user to enable and disable modules
and specify module options via the command line. To list available modules (plug-ins) use the
following command:
# sosreport -l
To turn off a module, include it in a comma-separated list of modules passed to the -n/--skip-plugins option. For instance, to disable both the kvm and amd modules:
# sosreport -n kvm,amd
Individual modules may provide additional options that may be specified via the -k option. For
example on Red Hat Enterprise Linux 5 installations the sos rpm module collects "rpm -Va" output by default.
As this may be time-consuming the behaviour may be disabled via:
# sosreport -k rpm.rpmva=off
16. What is the command to see the complete information on virtual memory?
# vmstat is the command to see the complete information on virtual memory like the no. of processes, memory usage, paging, block I/O (input/output), traps, disk and CPU activity.
# vmstat 2 10 (It will give the report for every 2 seconds upto 10 times)
The fields are, r -----> how many processes are waiting for run time (runnable)
b -----> how many processes are in uninterruptible sleep
swpd -----> how much virtual (swap) memory is used
free -----> how much memory is freely available
buff -----> how much memory is being used as buffers
cache -----> how much memory is being used as cache
si -----> how much data is swapped in from disk to RAM
so -----> how much data is swapped out from RAM to disk
bi -----> blocks received from a block device (block input)
bo -----> blocks sent to a block device (block output)
system in ---> the no. of interrupts per second
system cs ---> the no. of context switches per second
# vmstat -a (to see the active and inactive memory)
# vmstat -d (to see the statistics of the disk used)
# cat /proc/meminfo (to see the present memory information)
17. What is the command to see the I/O statistics?
# iostat (to see the Input and Output statistics in the Linux system)
* This command is used to monitor the system input and output statistics and device transfer rates.
* It also shows how many kilobytes are read and written per second; the first report shows statistics averaged since the last reboot, and the most current data is shown in the subsequent reports.
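For per-device detail, the extended statistics option of iostat is commonly used; a small example:
# iostat -x 2 5 (to display extended per-device statistics such as utilization and average wait times, every 2 seconds, 5 times)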
18. How many CPUs are there in the system?
# cat /proc/cpuinfo command will show no. of CPUs, no. of cores, no. of threads, no. of sockets
and the CPU architecture, ...etc., information.
# nproc command will give the no. of CPUs present in the system.
# lscpu command will give information about the architecture of the CPU (x86_64 or i686), the no. of cores, the no. of threads, the no. of sockets, the cache memory sizes (L1, L2, L3, ...etc.), the CPU speed and the vendor of the CPU.
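As a quick cross-check of the CPU count (a small sketch using grep on /proc/cpuinfo):
# grep -c ^processor /proc/cpuinfo (to count the processor entries; the number should match the output of # nproc)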
19. How to send the processor into offline?
# ls -l /sys/devices/system/cpu is the command to see the no. of processors present in the
system.
# echo 0 > /sys/devices/system/cpu/cpu4/online is the command to send the CPU4 into offline.
# grep "processor" /proc/cpuinfo or # cat /sys/devices/system/cpu/offline are the command
to see the processor status whether offline.
20. How to send the processor into online?
# ls -l /sys/devices/system/cpu is the command to see the no. of processors present in the
system.
# echo 1 > /sys/devices/system/cpu/cpu4/online is the command to bring CPU4 online.
# grep "processor" /proc/cpuinfo or # cat /sys/devices/system/cpu/online are the commands to check whether the processor is online.
(v) # rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 (to import the gpg key if it is asked for when executing the above command in RHEL - 7)
(vi) # yum repolist (to check the EPEL repo list)
(b) # cpulimit -p <PID> -l 10 (to limit the CPU usage of that process to 10%)
(c) # cpulimit -e /usr/local/bin/myprog -l 20 (to limit the CPU usage of this command to 20%)
27. How to capture the network traffic?
# tcpdump is the command to capture and analyze the network traffic. By using this command we
can also troubleshoot the network problems.
Examples :
# tcpdump (to capture and analyze the network traffic)
# tcpdump -i eth0 (to capture the network traffic from eth0 continuously; Ctrl + c to exit)
# tcpdump -c 30 -i eth0 (to capture the network traffic from eth0 upto 30 packets only)
# tcpdump -w /root/tcp.pcap -i eth0 (to capture the network traffic from eth0 and write it to the /root/tcp.pcap file)
# tcpdump -ttt -r /root/tcp.pcap (to read the contents of the above captured file)
# tcpdump -i eth0 port 22 (to capture the ssh traffic on eth0)
# tcpdump -i eth0 dst 172.25.0.11 and port 22 (to capture the ssh traffic on eth0 that is destined for the 172.25.0.11 system)
28. What is SAR utility and how to use it?
SAR stands for System Activity Report. Using SAR we can check the information of CPU usage, memory, swap, I/O, disk I/O, networking and paging. We can get the information of the present status and the past status (history) upto the last 7 days because HISTORY=7 is set in the configuration file. The log files are stored as /var/log/sa/sa1, /var/log/sa/sa2, /var/log/sa/sa3, ....etc., (where 1, 2, 3, ....etc., are dates). The SAR configuration is stored in the /etc/sysconfig/sysstat file. In this file the HISTORY=7 default option will be there, so we can change the default 7 days to our required value.
Before using the SAR utility first we should install the SAR utility package by # yum install sysstat*
-y command.
Examples :
# sar 2 10 (It will give the system report for every 2 seconds upto
10 times)
# sar -u 2 10 (to see the CPU utilization for every 2 seconds upto 10 times)
# sar -P ALL -f /var/log/sa/sa25 (to check the per-CPU utilization on the 25th day of the current month)
# sar -P ALL -f /var/log/sa/sa10 -s 07:00:00 -e 15:00:00 (to check the per-CPU utilization on the 10th day of the current month from 7:00 to 15:00 hrs, where -s means start time and -e means end time)
# sar -r 2 10 (to see the memory utilization for every 2 seconds upto
10 times)
# sar -r -f /var/log/sa/sa14 (to check the memory utilization on 14th day of the
current month)
# sar -r -f /var/log/sa/sa10 -s 07:00:00 -e 15:00:00 (to check the memory utilization on
10th day of the current month from 7:00 to 15:00 hrs. where -s means
start time -e end time)
# sar -S 2 10 (to see the swap utilization for every 2 seconds upto
10 times)
# sar -S -f /var/log/sa/sa25 (to check the swap utilization on 25th day of the
current month)
# sar -S -f /var/log/sa/sa10 -s 07:00:00 -e 15:00:00 (to check the swap utilization on 10th
day of the current month from 7:00 to 15:00 hrs. where -s means
start time -e end time)
# sar -q 2 10 (to see the load average for every 2 seconds upto
10 times)
# sar -q -f /var/log/sa/sa14 (to check the load average on 14th day of the
current month)
# sar -q -f /var/log/sa/sa10 -s 07:00:00 -e 15:00:00 (to check the load average on 10th day
of the current month from 7:00 to 15:00 hrs. where -s means start
time -e end time)
# sar -B 2 10 (to see the paging information for every 2 seconds upto
10 times)
# sar -d 2 10 (to see the disk usage for every 2 seconds upto
10 times)
# sar -m 2 10 (to see the power management for every 2 seconds upto
10 times)
# sar -b 2 10 (to see the disk input and output statistics for every 2 seconds upto
10 times)
29. What are the port numbers for different services?
The Port no. list :
FTP (For data transfer) 20
FTP (For connection) 21
SSH 22
Telnet 23
Send Mail or Postfix 25
DNS 53
DHCP (For Server) 67
DHCP (For Client) 68
TFTP (Trivial File Transfer) 69
HTTP 80
POP3 110
NTP 123
Samba shared name verification 137
Samba data transfer 138
Samba connection establishment 139
LDAP 389
HTTPS 443
Samba authentication 445
Log Server (syslog) 514
LDAPS (LDAP + SSL) 636
NFS 2049
Squid 3128
ISCSI 3260
MySQL 3306
* Ping does not use any port number. It uses ICMP (Internet Control Message Protocol) only.
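To verify which service is actually listening on a given port, the ss command can be used; a small sketch:
# ss -tulnp (to list all listening TCP and UDP ports together with the owning processes)
# ss -tulnp | grep :22 (to check whether anything is listening on port 22, i.e., ssh)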
Other useful commands :
# uptime (to see how long the system has been running; it also gives the load average report)
* The load average has 3 fields : 1 - last 1 minute, 2 - last 5 minutes and 3 - last 15 minutes.
# iostat 5 2 (to monitor the input and output statistics for every 5 seconds, 2 times)
# nproc (to check how many processors (CPUs) are there in the system)
# top (then press 1 to see the no. of processors (CPUs) that are there in the system)
# iptraf (to monitor the TCP or network traffic statistics in an interactive text mode)
* Before using this command, install the iptraf package by the # yum install iptraf* -y command.
* When there are stopped jobs and we try to exit from the terminal, a warning message will be displayed. If we try again to exit from the terminal, then the stopped or suspended jobs will be killed automatically.
By default root user is blocked to access the FTP server. To allow the root user to access the FTP
server follow the below steps.
(i) Open the /etc/vsftpd/user_list file by the # vim /etc/vsftpd/user_list command.
(ii) Go to the root user line and comment it out, for example # root (save and exit the file).
(iii) Open the /etc/vsftpd/ftpusers file by the # vim /etc/vsftpd/ftpusers command.
(iv) Go to the root user line and comment it out, for example # root (save and exit the file).
(v) Restart the ftp daemon by the # service vsftpd restart command in RHEL - 6 or the # systemctl restart vsftpd command in RHEL - 7.
* Even after the above changes, the root user cannot access his home directory over FTP because the corresponding SELinux Boolean is not enabled. We can solve this as follows.
(vi) # getsebool -a | grep ftp (to check the FTP-related SELinux Booleans, including ftp_home_dir)
(vii) # setsebool -P ftp_home_dir on (to enable the Boolean; -P makes the change persistent)
* Now go to the client system and try to log in to the FTP server as the root user. Now we can access the FTP server.
16. What are the difference between FTP and LFTP servers?
(i) A user name and password are required to access the server with the ftp client, but lftp does not ask for a password (it connects anonymously by default).
(ii) At the ftp> prompt the " Tab " key (auto-completion) will not work, but at the lftp> prompt the " Tab " key will work as usual.
Other useful FTP Commands :
# ftp 172.25.9.11 (to access the FTP server; provide the FTP user name and password)
ftp > ls (to see all the files and directories in the FTP root directory)
ftp > !ls (to see the files in the local present working directory)
ftp > pwd (to see the FTP present working directory)
ftp > !pwd (to see the local file system's present working directory)
ftp > get <file name> (to download the specified file)
ftp > mget <file 1> <file 2> <file 3> (to download multiple files at a time)
ftp > cd /var/ftp/pub/upload (to move to the upload directory)
ftp > put <file name> (to upload the specified file into the FTP upload directory)
ftp > lcd /root/Downloads (to change to the local /root/Downloads directory)
ftp > help (to get help about FTP commands)
ftp > bye or quit (to quit or exit from the FTP server)
# lftp 172.25.9.11 (to access the FTP server with lftp without being asked for a password)
rw read/write permissions
ro read-only permissions
(vii) Export the above shared directory to the defined client systems by # exportfs -rv command.
(viii) Restart the NFS services by following the commands in RHEL - 6 and RHEL - 7.
# service rpcbind restart (to restart the rpcbind service
in RHEL - 6)
# service nfs restart (to restart the
NFS service in RHEL - 6)
# systemctl restart nfs-server (to restart the NFS service
in RHEL - 7)
(ix) Make the NFS services start automatically from the next boot onwards as follows.
# chkconfig rpcbind on (to on the rpcbind service
in RHEL - 6)
# chkconfig nfs on (to on the nfs
service in RHEL - 6)
# systemctl enable nfs-server (to enable the
nfs-server in RHEL - 7)
(x) Export the NFS shared directory as follows.
# exportfs -rv
(xi) Enable the NFS service to the IP tables and Firewall in RHEL - 6 and RHEL - 7 as follows.
In RHEL - 6 :
(i) # setup
(a) Select Firewall Configuration.
(b) Select Customize ( Make sure firewall option remain selected ).
(c) Select NFS4 ( by pressing spacebar once ).
(d) Select Forward and press Enter.
(e) Select eth0 and Select Close button and press Enter.
(f) Select ok and press Enter.
(g) Select Yes and press Enter.
(h) Select Quit and press Enter.
(ii) Now open the /etc/sysconfig/iptables file, add the following rules under the rule for port 2049 and save the file.
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 875 -j ACCEPT
In RHEL - 7 :
# firewall-cmd --permanent --add-service=nfs (to enable the nfs service at the firewall)
# firewall-cmd --permanent --add-service=mountd (to enable the mountd service at the firewall)
# firewall-cmd --permanent --add-service=rpc-bind (to enable the rpc-bind service at the firewall)
# firewall-cmd --complete-reload (to reload the firewall)
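To confirm that the services were added, the active firewall configuration can be listed; a small sketch:
# firewall-cmd --list-services (to display the services currently allowed in the default zone; nfs, mountd and rpc-bind should appear after the reload)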
8. What are requirements for NFS client?
(i) NFS server IP address or hostname.
(ii) Check the NFS shared name.
(iii) Create the local mount point.
(iv) Mount the NFS shared name on the local mount point.
(v) Go to mount point (local mount point) and access the NFS shared data.
9. How to access the NFS shared directory from the client?
(i) On Client system, install the nfs-utils package by # yum install nfs-utils* -y
command.
(ii) Check the exported NFS shared directory by # showmount -e <IP address or hostname
of the server>
Example : # showmount -e 172.25.9.11 or # showmount -e
server9.example.com
(iii) Create one mount point to mount the NFS shared directory by # mkdir /<mount point>
command.
Example : # mkdir /mnt/nfs
(iv) Mount the NFS shared directory on the above created mount point.
# mount <IP address or server hostname>:<NFS shared directory> <mount point>
Example : # mount 172.25.9.11:/public /mnt/nfs or
# mount server9.example.com:/public /mnt/nfs
* These are temporary mounts only, i.e., if the system is rebooted they are unmounted automatically and we have to mount them again after the system is rebooted.
(v) So, if we want to mount it permanently, then open the /etc/fstab file and put an entry for the mount point.
# vim /etc/fstab (to open the file)
<IP address or server hostname>:<shared name> <mount point> <file system> defaults 0 0
Example : 172.25.9.11:/public /mnt/nfs nfs defaults 0 0 ( or )
server9.example.com:/public /mnt/nfs nfs defaults 0 0
(save and exit the file)
(vi) Mount all the mount points as mentioned in the above /etc/fstab file by # mount -a
command.
(vii) # df -hT command is used to check all the mounted partitions with file system types.
10. Why root user cannot create the files in the NFS shared directory and how to make him to create
the files?
The root user normally has all the permissions, but over NFS the root user is also treated as a normal user. So the root user has no permission to create files in the NFS shared directory.
The root user is mapped to the nfsnobody user and group because the root_squash option is there by default. So, if we want to allow the root user to create files on the NFS shared directory, then go to the server side, open the /etc/exports file and type as below,
<shared name> <domain name or systems names>(permissions, sync, no_root_squash)
Example : /public *.example.com(rw, sync, no_root_squash)
(save and exit the file)
# exportfs -rv (to export the shared directory)
# service nfs restart (to restart the NFS service in RHEL - 6)
# systemctl restart nfs-server (to restart the NFS service in RHEL - 7)
11. What are the disadvantages of the direct or manual mounting?
(i) Manual mounting means, we have to mount manually, so it creates so many problems. For
example if NFS service is not available then, # df -hT command will hang.
(ii) If the NFS server is down while booting the client, the client will not boot because it searches
for NFS mount point as an entry in /etc/fstab file.
(iii) Another disadvantage of manual mounting is it consumes more memory and CPU resources on
the client system.
So, to overcome the above problems normally indirect or automount is used using Autofs tool.
12. What is secure NFS server and explain it?
Secure NFS server means an NFS server with Kerberos security. It is used to protect the NFS exports. Kerberos is an authentication tool to protect the NFS server shares. It uses the krb5p method, which authenticates and also encrypts the data during communication.
For this one key file is required and this should be stored in each and every client which are accessing
the nfs secure directory. Then only Kerberos security will be available. This key file should be stored in
/etc/krb5.keytab file. For example the following command will download and store the keytab.
# wget http://classroom.example.com/pub/keytabs/server9.keytab -O /etc/krb5.keytab (where O
is capital)
13. How to configure the secure NFS server?
(i) Install the NFS package.
# yum install nfs* -y
(ii) Create a directory to share through NFS server.
# mkdir /securenfs
(iii) Modify the permissions of shared directory.
# chmod 777 /securenfs
(iv)Change the SELinux context of the directory if the SELinux is enabled.
# chcon -t public_content_t /securenfs
(v) Open the NFS configuration file and put an entry of the shared directory.
# vim /etc/exports
/securenfs *.example.com(rw,sec=krb5p)
(save and exit the file)
(vi) Download the keytab and store it in the /etc/krb5.keytab file.
# wget http://classroom.example.com/pub/keytabs/server9.keytab -O
/etc/krb5.keytab
(vii) Export the shared directory.
# exportfs -rv
(viii) Restart and enable the NFS services in RHEL - 6 and RHEL - 7.
# service nfs restart (restart the NFS
service in RHEL - 6)
# service nfs-secure-server restart (restart the secure NFS service in
RHEL - 6)
# chkconfig nfs on (enable the NFS
service in RHEL - 6)
# systemctl restart nfs-server (restart
the NFS service in RHEL - 7)
# systemctl restart nfs-secure-server (restart the secure
NFS service in RHEL - 7)
(ix) Enable the IP tables or firewall to allow the NFS service in RHEL - 6 and RHEL - 7 as follows.
In RHEL - 6 :
(i) # setup
(a) Select Firewall Configuration.
(b) Select Customize ( Make sure firewall option remain selected ).
(c) Select NFS4 ( by pressing spacebar once ).
(d) Select Forward and press Enter.
(e) Select eth0 and Select Close button and press Enter.
(f) Select ok and press Enter.
(g) Select Yes and press Enter.
(h) Select Quit and press Enter.
(ii) Now open /etc/sysconfig/iptables file and add the following rules under the rule for
port 2049 and save file.
-A INPUT -m state --state NEW -m udp -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -m udp -p udp --dport 662 -j ACCEPT
(iii) Restart the IP tables service by # service iptables restart command.
(iv) Make the IP tables service as permanent from next boot onwards as follows.
# chkconfig iptables on
In RHEL - 7 :
# firewall-cmd --permanent --add-service=nfs (to enable the nfs service at the firewall)
# firewall-cmd --permanent --add-service=mountd (to enable the mountd service at the firewall)
# firewall-cmd --permanent --add-service=rpc-bind (to enable the rpc-bind service at the firewall)
# firewall-cmd --complete-reload (to reload the firewall)
14. How to access the secure NFS server on client side?
(i) Install the nfs-utils package.
# yum install nfs-utils* -y
(ii) Download the same key tab and store it in /etc/krb5.keytab file.
# wget http://classroom.example.com/pub/keytabs/desktop9.keytab -O
/etc/krb5.keytab
(iii) Check the shared NFS directory.
# showmount -e server9.example.com
(iv) Restart the secure NFS service on client side.
# service nfs-secure restart (restart the secure
NFS client service in RHEL - 6)
# systemctl restart nfs-secure (restart the secure
NFS client service in RHEL - 7)
(v) Create the mount point on client system.
# mkdir /mnt/nfssecure
(vi) Mount the NFS shared directory on the local mount point temporarily.
# mount server9.example.com:/securenfs /mnt/nfssecure
(vii) Open /etc/fstab file and put an entry of the NFS shared mounting details to mount it
permanently.
# vim /etc/fstab
server9.example.com:/securenfs /mnt/nfssecure nfs defaults,sec=krb5p 0 0
(save and exit the file)
(viii) Mount all the file systems which are having the entries of the /etc/fstab file.
# mount -a
(ix) Check all the mounted file systems with file system type on client system.
# df -hT
15. How to mention the NFS version while configuring?
(i) Open /etc/sysconfig/nfs file by # vim /etc/sysconfig/nfs command.
(ii) Go to the RPCNFSDARGS line (around line no. 13) and edit it as below,
RPCNFSDARGS="-V 4.2"
(iii) Save and exit this file.
16. How to add the LDAP user shared directory and how the LDAP user access that directory on client?
(i) Create a sub-directory in /securenfs directory.
# mkdir /securenfs/secure
(ii) Change the ownership of the above sub directory to LDAP user.
# chown ldapuser9 /securenfs/secure
(iii) Assign full permissions on that directory to the LDAP user.
# setfacl -m u:ldapuser9:rwx /securenfs/secure
(iv) Change the SELinux context of that directory if SELinux is enabled.
# chcon -t public_content_t /securenfs/secure
(v) Re-export the secure NFS shared directory.
# exportfs -rv
(vi) Restart the NFS services.
# service nfs restart (restart the NFS
service In RHEL - 6)
# service nfs-secure-server restart (restart the secure NFS service
In RHEL - 6)
# systemctl restart nfs (restart the NFS
service In RHEL - 7)
# systemctl restart nfs-secure (restart the secure NFS
service In RHEL - 7)
On Client side :
(i) Login as LDAP user on local system through ssh.
# ssh ldapuser9@localhost (type yes and press Enter if it asks (yes/no))
(ii) Type the password as kerberos if it asks for the LDAP user password.
(iii) Go to the secure NFS shared mount point and access the contents.
$ cd /mnt/nfssecure (to access the mount point)
$ ls (to see the contents in that)
$ cd secure (to access the sub directory)
$ ls (to see the contents in that)
$ exit (to exit or logout from ssh)
17. What are the advantages of NFS?
(i) NFS allows multiple computers to use the same files, because all the users on the network or domain can access the same data.
(ii) NFS reduces the storage costs by sharing applications on computers instead of allocating
local disk space for each user application.
(iii) NFS provides data consistency and reliability, because all users can read same set of files.
(iv) NFS supports heterogeneous environments which are compatible to NFS.
(v) NFS reduces System Administration overhead.
18. Remote user cannot mount the NFS shared directory. How to resolve this?
(i) First check the user belongs to the same domain as the NFS shared or not. ie., the user's
system domain and NFS shared system domain should communicate.
( * Where timeout=60 means, if the directory is not used for 60 seconds then the shared
directory is unmounted automatically. And the default is 5 minutes.)
(iii) Open the /etc/auto.misc file by # vim /etc/auto.misc and type as below.
<Client temporary mount point> -<permissions> <IP address or hostname of the server>:<shared name>
Example :
nfs -ro (or) -rw classroom.example.com:/public (save and exit this file)
( * where -ro means read-only and -rw means read-write)
(iv) Restart the autofs service in RHEL -6 and RHEL - 7.
# service autofs restart (restart the autofs service in
RHEL - 6)
# chkconfig autofs on (enable the autofs service at next
boot in RHEL - 6)
# systemctl restart autofs (restart the autofs
service in RHEL - 7)
# systemctl enable autofs (enable the autofs service at next
boot in RHEL - 7)
(v) Go to the Client local mount point which is entered in the /etc/auto.master file by the # cd <mount point> command.
Example :
# cd /mnt
(vi) Go to the Client temporary mount point which is entered in the /etc/auto.misc file as below.
# cd /mnt/<temporary mount point>
Example :
# cd nfs
# pwd (the output is /mnt/nfs)
24. What is LDAP server?
LDAP (Lightweight Directory Access Protocol) is a software protocol for enabling anyone to locate organizations, individuals and other resources such as files and devices in a network, whether on the public Internet or on a corporate intranet. LDAP is called lightweight because in its initial version it did not include security features.
25. What is LDAP client?
LDAP Client is about network user creation and activity. LDAP user means a network user, i.e., a user who logs in through the network. If a user wants to log in to a remote system, the LDAP user should be created and the user logs in to the remote system through the LDAP account.
Upto RHEL - 5, NIS (Network Information Service) was used for this. From RHEL - 6 onwards LDAP is used. The main feature of LDAP is to share the users' information over the network.
26. What are the requirements of LDAP and explain it?
(i) Packages.
(a) authconfig-gtk (to configure the LDAP client)
(b) sssd (system security service deamon)
(ii) LDAP client configuration file is /etc/ldap.conf
(iii) LDAP kerberos configuration file is /etc/krb5.conf
(iv) sssd (systems security service deamon) deamon.
(v) LDAP port no. is 389.
(vi) sssd deamon responsibility is retrieving and caching the authentication information.
(vii) The configuration file of sssd is /etc/sssd/sssd.conf
(viii) Through NIS the data is transferred in plain text format. So, there is no security. But LDAP will
transfer the data in encrypted format. So, the data will be in secured way.
(ix) LDAP is used with sssd by default, i.e., with kerberos authentication.
27. What are the requirements for LDAP client?
(i) dc (domain controller)
Example : If the domain is example.com then dc=example, dc=com
(ii) ldap server
Example : ldap://classroom.example.com
(iii) Authentication certificate (example-ca.crt) is located in http://classroom.example.com/pub
directory.
28. How to configure the LDAP client?
(i) Create the LDAP user.
(ii) Configure the kerberos.
(iii) configure the NFS automount to share the LDAP user's home directory.
So, LDAP + NFS + sssd is the LDAP system.
* LDAP is used to share the user name and password with the remote system.
* sssd is used to authenticate over a secured communication.
* NFS is used to share the user's home directory with the remote system.
Steps :
(i) Install the LDAP + kerberos packages by the following commands.
# yum groupinstall directory* -y (installation
in RHEL - 6)
# yum install authconfig-gtk* sssd* -y (installation
in RHEL - 7)
* The LDAP packages are different in RHEL - 6 and RHEL - 7 but, the configuration of LDAP
is same in both the versions.
(ii) Create the LDAP users and passwords in the LDAP server.
(iii) Configure the LDAP user's authentication by the # system-config-authentication command in the graphical user interface.
(iv) The above command will display the configuration window and in that select and type the option
as below.
User Account Database = LDAP
LDAP search base on = dc=example, dc=com
LDAP server = ldap://classroom.example.com/
Enable TLS to encrypt = Click on Download CA Certificate button and then
enter the url as,
http://classroom.example.com:/pub/example-ca.crt
Authentication Method = LDAP Password (then click on
Apply button)
(v) Check whether the LDAP user is configured or not by the # getent passwd ldapuser9 command.
29. How to mount the LDAP user's home directory automatically when demand using Autofs tool?
(i) Install the autofs package by # yum install autofs* -y command.
(ii) Open the /etc/auto.master file by # vim /etc/auto.master command and type as below.
/home/guests /etc/auto.misc
(save and exit this file)
(iii) Open the /etc/auto.misc file by the # vim /etc/auto.misc command and type as below.
ldapuser9 -rw classroom.example.com:/home/guests/ldapuser9 (save and exit this file)
(iv) Restart the autofs services.
# service autofs restart (restart the autofs service in RHEL - 6)
# chkconfig autofs on (enable the autofs service at
next boot in RHEL - 6)
# systemctl restart autofs (restart the autofs service in
RHEL - 7)
# systemctl enable autofs (enable the autofs service at
next boot in RHEL - 7)
3. What are the different file systems for sharing different O/S?
(i) Windows --- Windows -----> Distributed File system (DFS)
(ii) Linux --- Linux -----> Network File system (NFS)
(iii) Unix --- Unix -----> Network File system (NFS)
(iv) Apple MAC --- Apple MACs -----> Apple File sharing Protocol (AFP)
(v) Windows --- Linux -----> Common Internet File system (CIFS)
4. What are the requirements or what is the profile of Samba?
(i) Packages : samba* for samba server and samba-client* for samba client
(ii) Daemons : smbd and nmbd in RHEL - 6, where smbd is the Samba server daemon and nmbd is the NetBIOS service daemon.
smb and nmb in RHEL - 7, where smb is the Samba server service and nmb is the NetBIOS service.
(iii) Scripting files : /etc/init.d/smb and /etc/init.d/nmb
(iv) Port number : 137 ---> to verify the share name, 138 ---> for data transfer, 139 ---> to establish the connection and 445 ---> for authentication
(v) Log file : /var/log/samba
(vi) Configuration : /etc/samba/smb.conf
(vii) File systems : CIFS (Common Internet File system)
5. How to configure the Samba server?
(i) Install the samba package by # yum install samba* -y command.
(ii) Create a samba shared directory by # mkdir /samba command.
(iii) Modify the permissions of the above samba shared directory.
# chmod 777 /samba
(iv) Modify the SELinux context of the samba directory if SELinux is enabled.
# chcon -t samba_share_t /samba
(v) Create the samba user and assign the password for the samba user.
# useradd raju (to
create the samba user)
# smbpasswd -a raju (to assign the samba
password for the user raju)
(vi) Assign the ACL permissions (like read, write and execute) to the above shared directory if
it is necessary.
# setfacl -m u : <user name> : rwx <samba shared name>
Example : # setfacl -m u:raju:rwx /samba
(vii) Open the samba configuration file and put an entries of the Samba configuration.
# vim /etc/samba/smb.conf
Go to the last line, copy the last 7 lines and paste them at the end. Then modify them as below.
[samba] (this is the samba shared name)
comment = public stuff (this is a comment for the samba share)
path = /samba (shared directory name with full path)
public = yes (means no authentication) or no (means authentication is required)
writable = yes (read-write mode) or no (read-only mode)
printable = no (printing is not available) or yes (printing is available)
write list = raju (to give write permission to the user raju)
write list = +<group name> (to give write permission to the group)
valid users = raju, u2 or @group1, @group2 (to restrict access to the listed users or groups)
hosts allow = IP 1, IP 2, host 1, host 2 or <network ID> (to share the directory only to those IPs, hosts or networks)
workgroup = <windows work group name> (to share the directory to the windows work group)
create mask = 644 (the files created by samba users get 644 permissions)
directory mask = 744 or 755 (the directories created by samba users get 744 or 755 permissions)
(save and exit the configuration file)
(viii) Verify the configuration file for syntax errors by # testparm command.
(ix) Restart the samba daemons in RHEL - 6 and RHEL - 7.
# service smb restart ; service nmb restart (to restart the samba services in RHEL - 6)
# chkconfig smb on ; chkconfig nmb on (to enable the samba services at next boot in RHEL - 6)
# systemctl restart smb nmb (to restart the samba services in RHEL - 7)
# systemctl enable smb nmb (to enable the samba services at next boot in RHEL - 7)
(x) Add the samba service to IP tables and Firewall.
# setup (then select the Firewall configuration option to add the service to IP tables in RHEL - 6)
# service iptables restart (to restart the IP tables in RHEL - 6)
# firewall-cmd --permanent --add-service=samba (to add the samba service to the firewall in RHEL - 7)
# firewall-cmd --complete-reload (to reload the firewall in RHEL - 7)
6. How to access the samba share directory at client side?
(i) Install client side samba packages by # yum install samba-client* cifs-utils -y
command.
(ii) Check the samba shared directory names from the client side.
# smbclient -L //<host name or IP address of the server> (it will ask for a password; just press Enter because listing does not require one)
Example : # smbclient -L //server9.example.com or # smbclient -L //172.25.9.11
(iii) Connect to the samba server with user credentials and access the samba shared directory.
# smbclient //<host name or IP address of the server>/<shared directory name> -U <samba user name> (where U is a capital letter and we have to enter the user's samba password)
Example : # smbclient //server9.example.com/samba -U raju (then the smb: \> prompt appears)
smb: \> ls (to see the contents of the samba shared directory)
smb: \> pwd (to see the present working directory)
smb: \> !ls (to see the client's local directory contents)
server. A local NTP or Chrony server on the network can be synchronized with an external timing source to keep all the servers in your organization in sync with an accurate time.
2. What are the differences between NTP and Chrony?
NTP :
(i) It is used in RHEL - 6.
(ii) The package is ntp or system-config-date and it has to be installed manually.
(iii) Its daemon is ntpd and the port number is 123.
(iv) # ntpq -p (to check whether ntp is configured or not).
(v) The configuration file is /etc/ntp.conf and the log file is /var/log/ntpstat.
Chrony :
(i) It is used in RHEL - 7.
(ii) The package is chrony and it is installed by default.
(iii) Its daemon is chronyd and the port number is 123.
(iv) # chronyc sources -v (to check whether chrony is configured or not).
(v) The configuration file is /etc/chrony.conf and the log file is /var/log/chrony.
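As a rough sketch of pointing a RHEL - 7 client at a time source (classroom.example.com below is only an example server name):
# vim /etc/chrony.conf (add or edit a line such as: server classroom.example.com iburst)
# systemctl restart chronyd (to restart the chrony daemon after the change)
# systemctl enable chronyd (to enable the chrony daemon at next boot)
# chronyc sources -v (to verify that the new time source is being used)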
allow-update { none; };
};
* Go to line number 31, copy 5 lines and paste them at the end of the file.
zone "<three octets of the DNS server IP address in reverse order>.in-addr.arpa" IN {
type master;
file "<reverse lookup zone file name>";
allow-update { none; };
};
Example : zone "9.25.172.in-addr.arpa" IN {
type master;
file "named.reverse";
allow-update { none; };
};
(save and exit this file)
(iv) Copy /var/named/named.localhost file to /var/named/named.forward and edit as follows.
# cp -p /var/named/named.localhost /var/named/named.forward
# vim /var/named/named.forward
* Go to line number 2 and edit as follows.
@ IN SOA <DNS server fully qualified domain name>. root.<domain name>. (
* Go to line number 8 and edit as follows.
NS <DNS server fully qualified domain name> .
A <DNS server IP address>
<DNS server fully qualified domain name> IN A <DNS server IP address>
<Client 1 fully qualified domain name> IN A <Client 1 IP address>
<Client 2 fully qualified domain name> IN A <Client 2 IP address>
<Client 3 fully qualified domain name> IN A <Client 3 IP address>
www IN CNAME <DNS server fully qualified domain name>
Example : The line number 2 should be edited as follows.
@ IN SOA server9.example.com. root.example.com. (
The line number 8 should be edited as follows.
NS server9.example.com.
A 172.25.9.11
server9.example.com. IN A 172.25.9.11
client9.example.com. IN A 172.25.9.10
client10.example.com. IN A 172.25.9.12
client11.example.com. IN A 172.25.9.13
www IN CNAME server9.example.com. (save and
exit this file)
(v) Copy /var/named/named.empty file to /var/named/named.reverse and edit as follows.
# cp -p /var/named/named.empty /var/named/named.reverse
# vim /var/named/named.reverse
* Go to line number 2 and edit as follows.
@ IN SOA <DNS server fully qualified domain name>. root.<domain name>. (
* Go to line number 8 and edit as follows.
NS <DNS server fully qualified domain name> .
<Last octet of the DNS server IP address> IN PTR <DNS server fully qualified
domain name>
<Last octet of the Client 1 IP address> IN PTR <Client 1 fully qualified domain
name>
<Last octet of the Client 2 IP address> IN PTR <Client 2 fully qualified domain
name>
<Last octet of the Client 3 IP address> IN PTR <Client 3 fully qualified domain
name>
<DNS server fully qualified domain name> IN A <DNS server IP address>
www IN CNAME <DNS server fully qualified domain name>
* Go to line number 19, copy 5 lines and paste them at the end of the file.
zone "<domain name>" IN {
type slave;
file "slaves/<forward lookup zone file name>";
masters { <Primary DNS server IP address>; };
};
Example : zone "example.com" IN {
type slave;
file "slaves/named.forward";
masters { 172.25.9.11; };
};
* Go to line number 31, copy 5 lines and paste them at the end of the file.
zone "<three octets of the DNS server IP address in reverse order>.in-addr.arpa" IN {
type slave;
file "slaves/<reverse lookup zone file name>";
masters { <Primary DNS server IP address>; };
};
Example : zone "9.25.172.in-addr.arpa" IN {
type slave;
file "slaves/named.reverse";
masters { 172.25.9.11; };
};
(save and exit this file)
(iv) Copy /var/named/slaves/named.localhost to /var/named/slaves/named.forward and edit as
follows.
# mkdir /var/named/slaves
# cp -p /var/named/slaves/named.localhost /var/named/slaves/named.forward
# vim /var/named/slaves/named.forward
* Go to line number 2 and edit as follows.
@ IN SOA <secondary DNS server fully qualified domain name>. root.<domain name>. (
* Go to line number 8 and edit as follows.
NS <DNS server fully qualified domain name> .
A <DNS server IP address>
<secondary DNS server fully qualified domain name> IN A <secondary DNS server
IP address>
<DNS server fully qualified domain name> IN A <DNS server IP address>
<Client 1 fully qualified domain name> IN A <Client 1 IP address>
<Client 2 fully qualified domain name> IN A <Client 2 IP address>
<Client 3 fully qualified domain name> IN A <Client 3 IP address>
www IN CNAME <DNS server fully qualified domain name>
Example : The line number 2 should be edited as follows.
@ IN SOA server6.example.com. root.example.com. (
The line number 8 should be edited as follows.
NS server6.example.com.
A 172.25.6.11
server6.example.com. IN A 172.25.6.11
server9.example.com. IN A 172.25.9.11
client9.example.com. IN A 172.25.9.10
client10.example.com. IN A 172.25.9.12
client11.example.com. IN A 172.25.9.13
www IN CNAME server6.example.com. (save and
exit this file)
(v) Copy /var/named/slaves/named.empty file to /var/named/slaves/named.reverse and edit
as follows.
# cp -p /var/named/slaves/named.empty /var/named/slaves/named.reverse
# vim /var/named/slaves/named.reverse
A Web server is a system that delivers content or services to end users over the Internet. A Web server
consists of a physical server, server operating system (OS) and software used to facilitate HTTP communication.
A computer that runs a Web site. Using the HTTP protocol, the Web server delivers Web pages to
browsers as well as other data files to Web-based applications. The Web server includes the hardware,
operating system, Web server software, TCP/IP protocols and site content (Web pages, images and
other files). If the Web server is used internally and is not exposed to the public, it is an "intranet server"
and if the Web server is used in the internet and is exposed to the public, it is an Internet server.
2. What is Protocol?
A uniform set of rules that enable two devices to connect and transmit the data to one another.
Protocols determine how data are transmitted between computing devices and over networks. They
define issues such as error control and data compression methods. The protocol determines the
following type of error checking to be used, data compression method (if any), how the sending device will
indicate that it has finished a message and how the receiving device will indicate that it has received the
message. Internet protocols include TCP/IP (Transmission Control Protocol / Internet Protocol), HTTP
(Hyper Text Transfer Protocol), FTP (File Transfer Protocol) and SMTP (Simple Mail Transfer Protocol).
3. How a Web server works?
(i) If the user types a URL in his browser's address bar, the browser splits that URL into a number of separate parts including the address, the path name and the protocol.
(ii) A DNS (Domain Naming Server) translates the domain name the user has entered into its
IP address, a numeric combination that represents the site's true address on the internet.
(iii) The browser now determines which protocol (rules and regulation which the client machine used
to communicate with servers) should be used. For example FTP (File Transfer
Protocol) and HTTP (Hyper Text Transfer Protocol).
(iv) The browser sends a GET request to the Web server to retrieve the address it has been given. For
example when a user types http://www.example.com/Myphoto.jpg , the browser sends a
GET Myphoto.jpg command to example.com server and waits for a response. The
server now responds to the browser's requests. It verifies that the given address exist, finds
the necessary files, runs the appropriate scripts, exchanges cookies if necessary and returns
the results back to the browser. If it cannot locate the file, the server sends an error message to
the client.
(v) Then the browser translates the data it has been given into HTML and displays the results to
the user.
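The same request and response exchange can be observed from the command line with curl (using the example URL above):
# curl -v http://www.example.com/Myphoto.jpg -o Myphoto.jpg (the -v option prints the GET request and the server's response headers, and -o saves the returned file)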
4. In how many ways can we host the websites?
IP based Web Hosting :
Hosting a website with a dedicated IP address (or hostname) for each site.
Name based Web Hosting :
Hosting the multiple websites using single IP address.
Port based Web Hosting :
Web hosting using another port number ie., other than the default port number.
User based Web Hosting :
We can host the Web sites using the user name and password.
* If we want to configure the httpd server, we have to follow the ISET rules, where I - Install, S - Start, E - Enable and T - Test.
* To access websites in CLI mode, the elinks and curl tools are used, and to access websites using a browser in Linux, Firefox is used.
7. How to make the http web server available to the client?
(a) First assign the static IP address and hostname to the server.
(b) Check whether the web server package is installed by the # rpm -qa httpd* command.
(c) If it is not installed, install the web server package by the # yum install httpd* -y command.
(d) Start the web server and enable web server service at next boot.
# service httpd start (to start the webserver
deamon in RHEL - 6)
# chkconfig httpd on (to enable the service at
next boot in RHEL - 6)
# systemctl restart httpd (to start the webserver deamon in
RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in
RHEL - 7)
(e) Open the browser and access the web server document.
# firefox (to open the
firefox browser)
* Then in address bar type as http://localhost/manual and press Enter key.
8. How to configure the IP based virtual host Web server?
(a) First assign the static IP address and hostname to the server.
(b) Check whether the server package by # rpm -qa httpd* command.
(c) If not installed, install the web server package by # yum install httpd* -y command.
(d) Check the configuration files of the http web server by the # rpm -qc httpd command.
(e) If required, list the web server documentation files by the # rpm -qd httpd command.
(f) Go to the configuration file directory by # cd /etc/httpd/conf.d
(g) Create the configuration for IP based hosting.
# vim /etc/httpd/conf.d/ip.conf
<VirtualHost <IP address of the web server> : 80>
ServerAdmin root@<hostname of the web server>
ServerName <hostname of the web server>
DocumentRoot /var/www/html
</VirtualHost>
<Directory "/var/www/html">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
Example :
# vim /etc/httpd/conf.d/ip.conf (create the
configuration file)
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName server9.example.com
DocumentRoot /var/www/html
</VirtualHost>
<Directory "/var/www/html">
AllowOverride none
Require All Granted
</Directory>
<Directory "/var/www/virtual">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
Example :
# vim /etc/httpd/conf.d/virtual.conf (create
the configuration file)
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName www9.example.com
DocumentRoot /var/www/virtual
</VirtualHost>
<Directory "/var/www/virtual">
AllowOverride none
Require All Granted
</Directory>
(d) Go to the name based virtual directory and create the index.html file.
# cd /var/www/virtual
# vim index.html
<html>
<H1>
This is Name based Web Hosting
</H1>
</html>
(save and exit this file)
(e) Restart the web server deamon.
# service httpd start (to start the webserver
deamon in RHEL - 6)
# chkconfig httpd on (to enable the service at
next boot in RHEL - 6)
# systemctl restart httpd (to start the webserver deamon in
RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in
RHEL - 7)
(f) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --sport 80 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --complete-reload
(g) Go to client system, open the firefox browser and type as http://www9.example.com in
address bar and check the index page is displayed or not.
(h) We can also access the website using elinks CLI tool.
# yum install elinks* -y (install the
elinks package)
# elinks --dump www9.example.com (access the
index page)
10. How to configure the port based web hosting?
(a) Make a directory for port based hosting.
# mkdir /var/www/port
(b) Go to the configuration file directory by # cd /etc/httpd/conf.d
(c) Create the configuration for port based hosting.
# vim /etc/httpd/conf.d/port.conf
<VirtualHost <IP address of the web server> : 8999>
ServerAdmin root@<hostname of the web server>
ServerName <port based hostname of the web server>
DocumentRoot /var/www/port
</VirtualHost>
<Directory "/var/www/port">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
Example :
# vim /etc/httpd/conf.d/port.conf (create the configuration file)
<VirtualHost 172.25.9.11:8999>
ServerAdmin [email protected]
ServerName port9.example.com
DocumentRoot /var/www/port
</VirtualHost>
<Directory "/var/www/port">
AllowOverride none
Require All Granted
</Directory>
(d) Go to port based virtual directory and create the index.html file.
# cd /var/www/port
# vim index.html
<html>
<H1>
This is Port based Web Hosting
</H1>
</html>
(save and exit this file)
(e) Port based web hosting normally requires a DNS server. If DNS is not available, we can work around it in the
following way.
Open the /etc/hosts file and enter the server name and IP address on both the
server and the client.
# vim /etc/hosts
172.25.9.11 port9.example.com
(save and exit this file)
(f) By default the web server runs on port number 80. If we want to configure it on a different port
number, we have to add the port number in the main configuration file.
# vim /etc/httpd/conf/httpd.conf
* Go to the Listen 80 line, open a new line below it and type as,
Listen 8999
(save and exit this file)
(g) By default SELinux allows the web server to bind only to its standard labelled ports (such as 80, 443 and 8080).
If we use a different port number, then execute the following command.
# semanage port -a -t http_port_t -p tcp 8999
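* To verify the change, the port labels currently allowed for the web server can be listed :
# semanage port -l | grep http_port_t (to list the port numbers SELinux allows for the web server, now including 8999)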
(h) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(i) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 8999 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 8999 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-port=8999/tcp
# firewall-cmd --complete-reload
<Directory "/var/www/html">
AllowOverride none
Require all granted
AuthType Basic
AuthName "This site is protected"
AuthUserFile /etc/httpd/pass
Require user <user name>
</Directory>
(save and exit this file)
Example :
# vim /etc/httpd/conf.d/userbase.conf (create the
configuration file)
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName server9.example.com
DocumentRoot /var/www/html
</VirtualHost>
<Directory "/var/www/html">
AllowOverride none
Require All Granted
AuthType Basic
AuthName "This site is protected"
AuthUserFile /etc/httpd/pass
Require user raju
</Directory>
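* The AuthUserFile /etc/httpd/pass used above must exist before user authentication can work. These notes do not show that step; a minimal sketch using the standard htpasswd tool (from the httpd-tools package) :
# yum install httpd-tools -y (to install the htpasswd utility, if it is not already present)
# htpasswd -cm /etc/httpd/pass raju (to create the password file and add the first user; it prompts for the password)
# htpasswd -m /etc/httpd/pass <another user name> (to add further users; without -c so the existing file is not overwritten)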
(h) Go to document root directory and create the index.html file.
# cd /var/www/html
# vim index.html
<html>
<H1>
This is User Authentication based Web Hosting
</H1>
</html>
(save and exit this file)
(i) Restart the web server daemon.
<Directory "/var/www/html">
AllowOverride none
Require all granted
Order allow,deny
Allow from 172.25.9 172.25 (allows the 172.25.9 network or the 172.25 network to access the
websites)
Deny from .my133t.org (denies all the systems of the *.my133t.org domain access to the
websites)
</Directory>
13. How to Redirect the website?
* Redirecting means whenever we access the website, it redirects to another website.
(a) Go to the configuration file directory by # cd /etc/httpd/conf.d
(b) Create the configuration for redirect based hosting.
# vim /etc/httpd/conf.d/redirect.conf
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName server9.example.com
DocumentRoot /var/www/html
Redirect / "http://www.google.com"
</VirtualHost>
<Directory "/var/www/html">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
(c) Go to document root directory and create the index.html file.
# cd /var/www/html
# vim index.html
<html>
<H1>
This is Redirect based Web Hosting
</H1>
</html>
(save and exit this file)
(d) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(e) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --complete-reload
(f) Go to client system, open the firefox browser and type as http://server9.example.com in
address bar and check the redirection google web page is displayed or not.
(g) We can also access the website using elinks CLI tool.
# yum install elinks* -y (install the
elinks package)
# elinks --dump server9.example.com (access the
index page)
* This website redirects to the google website.
14. How to configure the website with alias name?
(a) Go to the configuration file directory by # cd /etc/httpd/conf.d
(b) Create the configuration for alias based hosting.
# vim /etc/httpd/conf.d/alias.conf
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName server9.example.com
DocumentRoot /var/www/html
Alias /private /var/www/html/private
</VirtualHost>
<Directory "/var/www/html/private">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
(c) Create private directory in /var/www/html.
# mkdir /var/www/html/private
(c) Go to document root private directory and create the index.html file.
# cd /var/www/html/private
# vim index.html
<html>
<H1>
This is Alias based Web Hosting
</H1>
</html>
(save and exit this file)
(d) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(e) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --complete-reload
(f) Go to the client system, open the firefox browser and type as
http://server9.example.com/private in the address bar and check whether the private or alias based web page is
displayed or not.
(g) We can also access the website using elinks CLI tool.
# yum install elinks* -y (install the
elinks package)
# elinks --dump server9.example.com/private (access the
index page)
15. How to configure the directory based web hosting?
(a) Go to the configuration file directory by # cd /etc/httpd/conf.d
(b) Create the configuration for directory based hosting.
# vim /etc/httpd/conf.d/confidential.conf
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName server9.example.com
DocumentRoot /var/www/html
</VirtualHost>
<Directory "/var/www/html/confidential">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
<Directory "/var/www/html">
AllowOverride none
Require All Granted
</Directory>
(save and exit this file)
(iii) Go to document root directory by # cd /var/www/html command.
(iv) # vim userpage.html
<html>
<H1>
This is userpage as home page web hosting
</H1>
</html>
(save and exit this file)
(d) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(e) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 80 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --complete-reload
(f) Go to client system, open the firefox browser and type as http://server9.example.com
in address bar and check the user defined web page is displayed or not.
(g) We can also access the website using elinks CLI tool.
# yum install elinks* -y (install the
elinks package)
# elinks --dump server9.example.com (access the
index page)
17. How to configure CGI based web hosting?
CGI content changes dynamically every time the client accesses it. A plain static web server configuration cannot
serve this type of web hosting. To serve these dynamic pages, we have to configure the web server with the
mod_wsgi module so it can run ".wsgi" applications. The following steps will configure the CGI web server.
(a) Install the CGI package by # yum install mod_wsgi* -y command.
(b) Download or create the CGI script file in web server's document root directory.
Example : # cp webapp.wsgi /var/www/html
(c) Create the configuration file for CGI based web hosting.
<VirtualHost 172.25.9.11:80>
ServerAdmin [email protected]
ServerName webapp9.example.com
DocumentRoot /var/www/html
WSGIScriptAlias / /var/www/html/webapp.wsgi
</VirtualHost>
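* The webapp.wsgi script itself is not shown in these notes; a hypothetical minimal script, only to test that mod_wsgi is working (the message text is an assumption), could be created as,
# vim /var/www/html/webapp.wsgi
def application(environ, start_response):
    # standard WSGI entry point that mod_wsgi calls for every request
    response = b"This is WSGI (CGI) based Web Hosting\n"
    start_response('200 OK', [('Content-Type', 'text/plain'), ('Content-Length', str(len(response)))])
    return [response]
(save and exit this file)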
(d) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(e) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 80 -j ACCEPT
</VirtualHost>
<Directory "/var/www/html">
AllowOverride none
Require all granted
</Directory>
(save and exit this file)
(e) Go to document root directory by # cd /var/www/html command.
(f) # vim index.html
<html>
<H1>
This is a secured web hosting
</H1>
</html>
(save and exit this file)
(g) Restart the web server daemon.
# service httpd restart (to restart the web server daemon in RHEL - 6)
# chkconfig httpd on (to enable the service at next boot in RHEL - 6)
# systemctl restart httpd (to restart the web server daemon in RHEL - 7)
# systemctl enable httpd (to enable the service at next boot in RHEL - 7)
(h) Add the service to the IP tables and firewall.
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 443 -j ACCEPT
# iptables -A OUTPUT -o eth0 -p tcp -m tcp --dport 443 -j ACCEPT
# service iptables save
# service iptables restart
In RHEL - 7 :
# firewall-cmd --permanent --add-service=http
# firewall-cmd --permanent --add-service=https
# firewall-cmd --complete-reload
(i) Go to client system, open the firefox browser and type as https://server9.example.com/
in address bar and check the secured web page is displayed or not.
21. How to generate our own private and public keys using crypto-utils package?
(i) Install the package by # yum install crypto-utils* -y command.
(ii) Create our own public and private keys by # genkey <hostname of the server>
command.
Example : # genkey server9.example.com (one window will be opened and we have to enter the details)
Click on Next ---> Don't change the default size ---> Next ---> No ---> The keys are generated in
their directories.
Other useful commands :
# httpd -t (to check the web server configuration file
for syntax errors)
an advanced mobile phone or Smartphone, with e-mail capabilities, can be regarded as a client computer in
these circumstances.
2. How many types of mail servers available in Linux?
There are two types of mail servers.
(i) Sendmail server (default in RHEL - 5, available in 6 and 7)
(ii) Postfix (default in RHEL - 6 and 7)
Both mail servers are used to send and receive mails, but we cannot use both mail servers
at a time, ie., we have to use only one server at a time. These mail servers are used in CLI mode. Outlook
Express in Windows is used to send or receive mails. Thunderbird is used to send or receive mails
in GUI mode in Linux. # mail is the command used to send mails in CLI mode.
3. What are MUA, MTA, SMTP, MDA and MRAs?
MUA :
MUA stands for Mail User Agent. It is the e-mail client which we used to create-draft-send emails.
Generally Microsoft Outlook, Thunderbird, kmail, ....etc., are the examples for MUAs.
MTA :
MTA stands for Mail Transfer Agent. It is used to transfer the messages and mails between
senders and recipients. Exchange, Qmail, Sendmail, Postfix, ....etc., are the examples for MTAs.
SMTP:
SMTP stands for Simple Mail Transfer Protocol. It is used to transfer the messages and mails
between the MTAs.
MDA :
MDA stands for Mail Delivery Agent. It is a computer software component that is responsible for
the delivery of e-mail messages to a local recipient's mailbox. Within the Internet mail architecture, local
message delivery is achieved through a process of handling messages from the message transfer agent, and
storing mail into the recipient's environment (typically a mailbox).
MRA :
MRA stands for Mail Retrieval Agent. It is a computer application that retrieves or fetches e-
mail from a remote mail server and works with a mail delivery agent to deliver mail to a local or remote email
mailbox. MRAs may be external applications by themselves or be built into a bigger application like an MUA.
Significant examples of standalone MRAs include fetchmail, getmail and retchmail.
4. What is the profile of mail server?
Package : sendmail (in RHEL - 5, 6 and 7) or postfix (in RHEL - 6 and 7).
Configuration file : /etc/postfix/main.cf, /etc/dovecot/dovecot.conf
Log file : /var/log/mail.log
User's mails location : /var/spool/mail/<user name>
root user's mail location : /var/spool/mail/root
Daemons : postfix
Port number : 25
5. How to configure the mail server?
The pre-requisite for the mail server is DNS, ie., the Domain Name System should be configured first.
(i) Check the hostname of the server by # hostname command.
(ii) Install the mail server package by # yum install postfix* dovecot* -y command.
(iii) Open the mail configuration file and at last type as below.
# vim /etc/postfix/main.cf
myhostname = server9.example.com
mydomain = example.com
myorigin = $mydomain
inet_interfaces = $myhostname, localhost
mydestination = $myhostname, localhost.$localdomain, localhost, $mydomain
home_mailbox = Maildir/
(save and exit this file)
(iv) Open the another configuration file and at last type as below.
# vim /etc/dovecot/dovecot.conf
protocols = imap pop3 lmtp
(save and exit this file)
(v) Restart the mail server services.
$ cd Maildir
$ ls
$ cd new
$ cat <mail name>
Other useful commands :
* To send a mail to the local system, no need to configure the mail server.
* To send a mail to the remote system, then only we have to configure the mail server.
# mail [email protected] (to send the mail to the raju user of the
server9)
type the message whatever you want (press Ctrl + d to exit and send the
mail)
# su - raju (to switch to the raju user)
$ mail (to check the mails of the raju user)
N abcd
N efgh
N ijkl
N mnop (there are four mails in the
mail box)
& 1 (to read the 1st mail)
* If the mail is new one then 'N' letter is appears before the mail. If it is already seen then there
is no letter before the mail.
* press 'q' to quit the mail utility.
# mail (or mutt) -s "hello" <user name1> <user name2> <user name3>
type the matter whatever you want (press Ctrl + d to exit and send the
mail to the 3 users)
$ mail (to see all the mail in the mail box)
&<type the mail number> (to read the specified mail by it's number)
& r (to send the reply mail to that user)
& p (to send the mail to the printer for
printing)
& w (to write the contents of the mail into a file, ie., save the contents of the mail in a file)
& q (to quit the mail box)
& d (to delete the mail)
& d <mail number> (to delete the specified mail by it's
number)
& d 1-20 (to delete the mails from 1 to
20 numbers)
# mail -s "hello" <user name>@<server name>.<domain name> (to send the mail to the
remote system)
# mailq (to see the mails in the queue)
* If the mail server is not configured or not running, then the sent mails will be in the queue.
# mail -s "hello" <user name1> <user name2> < <file name> (send the contents of the file as the
mail body to the 2 users)
# postfix check (to verify the mail configuration file for
syntax errors)
mysql or mariadb > update <table name> set <column name>="<value>" where <column name>="<value>";
Example : mysql or mariadb > update mydetails set name="bangaram" where name='raju';
8. How to delete the table from the database?
mysql or mariadb > drop table <table name>;
Example : mysql or mariadb > drop table mydetails;
9. How to connect the remote database from our system?
# mysql -u root -h <host name> -p (here we have to enter the
password)
Example : # mysql -u root -h server9.example.com -p
(If the database is configured as localhost database, then server will not allow remote database
connections and Permission denied message will be displayed on the screen)
10. How to add mysqld service to IPtables and mariadb service to firewall?
In RHEL - 6 :
# iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT
In RHEL - 7 :
# firewall-cmd --permanent --add-port=3306/tcp
# firewall-cmd --complete-reload
# iptables -A INPUT -p tcp -m tcp --dport 514 -j ACCEPT (to add the incoming port no. to
Iptables in RHEL - 6)
# iptables -A INPUT -p udp -m udp --dport 514 -j ACCEPT (to add the incoming port no. to
Iptables in RHEL - 6)
# iptables -A OUTPUT -p tcp -m tcp --dport 514 -j ACCEPT (to add the outgoing port no. to
Iptables in RHEL - 6)
# iptables -A OUTPUT -p udp -m udp --dport 514 -j ACCEPT (to add the outgoing port no. to
Iptables in RHEL - 6)
# firewall-cmd --permanent --add-port=514/tcp (to add the 514 tcp port no. to the firewall)
# firewall-cmd --permanent --add-port=514/udp (to add the 514 udp port no. to the firewall)
# firewall-cmd --complete-reload (to reload the firewall configuration)
4. How to configure the client system to send log messages to the log server?
(i) Open the log server configuration file by # vim /etc/rsyslog.conf command.
(ii) Go to line no. 90 and type as below.
*.* @<log server IP address>:514
Example : *.* @172.25.9.11:514
(save and exit this file)
(iii) Restart the log server daemons in RHEL - 6 and RHEL - 7.
# service rsyslog restart (to restart the log server daemon in RHEL - 6)
# chkconfig rsyslog on (to enable the log server daemon at next boot in RHEL - 6)
# systemctl restart rsyslog (to restart the log server daemon in RHEL - 7)
# systemctl enable rsyslog (to enable the log server daemon at next boot in RHEL - 7)
* Then all the log messages are stored in /var/log/secure location.
* To monitor all the messages on the server by # tailf /var/log/secure command.
* Open the /etc/rsyslog.conf file and type as below to store all the client's log messages in
remote log server only.
# vim /etc/rsyslog.conf
*.* /var/log/secure
(save and exit this file)
* Then restart the log server daemons in RHEL - 6 and RHEL - 7.
# service rsyslog restart (to restart the log server daemon in RHEL - 6)
# systemctl restart rsyslog (to restart the log server daemon in RHEL - 7)
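* To verify that the client is forwarding its messages, generate a test message on the client with the standard logger utility and watch the log file configured on the server (the message text here is only an example).
# logger "test message from the rsyslog client" (run on the client to generate a log entry)
# tailf /var/log/messages (run on the log server to watch the incoming messages; check whichever file was configured above)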
5. What is log file?
A log file is a file that contains messages about the system, including the kernel, services and
applications running on it, ....etc., There are different log files for different kinds of information. These files are very
useful when trying to troubleshoot a problem with the system.
Almost all log messages are stored in /var/log directory. Only root user can read these log
messages. We can use less or more commands to read these log files. The messages will be generated only
when rsyslog service is running, otherwise the log messages will not be generated.
The different types of log files and their locations :
/var/log/messages -----> System and general messages and DHCP log messages.
/var/log/authlog -----> Authentication log messages.
/var/log/secure -----> Security and authentication and user log messages.
/var/log/maillog -----> Mail server log messages.
# iptables -A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT (to add the rule to the
existing
iptables to allow ssh)
where -A ---> Add or append a rule to the INPUT chain for incoming traffic.
-i eth0 ---> Incoming packets through the interface eth0 will be verified against this
added new rule.
-p tcp --dport 22 ---> protocol is tcp and the destination port is 22.
-j ACCEPT ---> Accept the packet.
# iptables -A INPUT -p tcp -m state --state NEW -m tcp --dport 80 -j ACCEPT
# ping -a 192.168.10.1 (to ping the IP address with an audible ping, ie., it
makes noises)
# shred -n 5 trail.txt (to overwrite the trail.txt file five times; the default
is 3 times)
# shred -u trail.txt (to remove the file after
overwriting it)
* This shred tool may not work on journaling or RAID file systems.
# file <file name> (to know what type
of file it is)
# mtr <IP address> (to check the connection between the source and the
destinations)
* The above command gives the report continuously until the user press Ctrl+c.
# htop (it is an improved top command and it allows to scroll vertically or
horizontally)
# logsave filelist.txt ls -l (to capture the output of any command and stores it in a file
along with the starting and ending
time of the command)
# look "printf" avltree.c (to display all the lines in a file that start with a particular
string; its performance is better than
grep)
# stat <file name> (to display the status of a file or file system like absolute path of the files,
the no of blocks used by the file, the I/O block size, inode access specifier, access time, time of modification,
....etc)
# mc (it is a powerful text based file manager and a directory browsing tool, and it
allows us to see the contents of archived
files, ...etc.;)
* In RHEL - 6 we have to write the rules to allow or deny systems but, in RHEL - 7
we only have to
enable or disable the firewalld options.
# firewall-config (to manage the firewalld services using graphical
user mode)
# firewall-cmd --get-zones (to display all
available zones)
# firewall-cmd --get-default-zone (to check the default zone, the default zone is
public zone)
# firewall-cmd --set-default-zone=work (to activate the work zone, nothing but
changing default
zone temporarily)
# firewall-cmd --permanent --set-default-zone=work (to set the default zone as work
permanently)
# firewall-cmd --get-active-zones (to display which zones are active, along with their IP addresses and
interfaces such as eth0)
# firewall-cmd --add-source=172.25.0.0/24 --zone=public (to add the source network to the public zone
temporarily)
# firewall-cmd --get-active-zones (to see the zones which are currently
active)
# firewall-cmd --permanent --add-source=172.25.0.0/24 --zone=public
(to add the source network to the
public zone permanently)
# firewall-cmd --remove-source=172.25.0.0/24 --zone=public (to remove the source network from the
public zone
temporarily)
# firewall-cmd --permanent --remove-source=172.25.0.0/24 --zone=public
(to remove the source network from the public
zone permanently)
26. Virtualization
1. What is virtualization?
Virtualization allows multiple operating system instances to run concurrently on a single computer; it is
a means of separating hardware from a single operating system. Each "guest" OS is managed by a
Virtual Machine Monitor (VMM), also known as a hypervisor. Because the virtualization system sits between
the guest and the hardware, it can control the guests' use of CPU, memory and storage, even allowing
a guest OS to migrate from one machine to another.
2. What are types of virtualizations available in Linux?
RHEL - 5 : xen, 64 bit, VT-Enabled, Intel/AMD, 2 GB RAM, 6 GB Hard disk
RHEL - 6 & 7 : kvm, 64 bit, VT-Enabled, Intel/AMD, 2 GB RAM, 6 GB Hard disk
3. What are the packages of virtualization and how to install the packages?
(i) qemu (It is used to provide user level KVM virtualization and disk image also)
(ii) virt (It is used to provide virtualization software)
(iii) libvirt (It is used to provide the libraries for virtualization software)
(iv) python (This package provides the host and server libraries for interacting with Hypervisor
and
Host system)
# yum install qemu* virt* libvirt* python* -y (to install the
virtualization softwares)
4. How to start the virtualization manager and how to create a new virtual machine?
(i) Go to Applications -----> System Tools -----> Virtual Machine Manager
(ii) Virtual Machine Manager is used to check and display the available virtual machines. It is
also used to create new virtual machines.
(iii) To create a new virtual machine first click on monitor icon, then enter the virtual machine
name, Select Local and Select Forward.
(iv) Click on Browse Local, Select the guest O/S " . iso " image file and Select Forward.
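* A virtual machine can also be created from the command line with virt-install (part of the virtualization client packages). A minimal sketch, where the VM name, memory, disk size and ISO path are only example values :
# virt-install --name vm1 --ram 2048 --disk path=/var/lib/libvirt/images/vm1.img,size=6 --cdrom /iso/rhel7.iso (to create a new VM named vm1 with 2 GB RAM and a 6 GB disk, installing from the given ISO image)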
5. What are the packages of Virtualization Hypervisor and how to install the packages?
(i) "virtualization hypervisor" (provides the foundation to host virtual machines includes
the libvirt and
qemu- kvm package)
(ii) "virtualization client" (provides the support to install and manage virtual
machines includes virsh, virt-install, virt-manager, virt-
top and virt-viewer packages)
(iii) "virtualization tools" (provides tools for offline management of virtual machines
includes the
libguestfs package)
(iv) "virtualization platform" (provides an interface to access and control virtual machines
includes the libvirt, libvirt-client and
virt-who packages)
Installation of Virtualization Hypervisor :
# yum group install "virtualization hypervisor" "virtualization client" "virtualization tools"
"virtualization platform" -y
(x) Sometimes the backup fails because the backup port no. 13782 is not working or is in a blocked state. It
can be checked by the # netstat -ntulp | grep 13782 command.
(xi) If the media server and the production server are not in the same domain, then the backup may fail
(ie., the production server's domain name may have been changed with no intimation to the backup team
about that change, so the media server is in another domain).
Backup Procedure :
(i) Deport the disk group on production server.
(ii) Import the disk group on backup (media) server.
(iii) Join the disk group with media server.
(iv) Sync the data with production server.
(v) Take the backup.
(vi) split the disk group from media server.
(vii) Join the disk group with production server.
(viii) Deport the disk group from media server.
(ix) Import the disk group on production server.
Backup policy :
(i) Complete (full) backup (every month ie., once in a month).
(ii) Incremental backup (Daily).
(iii) Differential or cumulative backup (every week end).
22. How to troubleshoot if the file system is full?
(i) First check whether the file system is O/S or other than O/S.
(ii) If it is other than the O/S, then inform the respective teams to housekeep the file system (ie.,
remove the unnecessary files in that file system).
(iii) If housekeeping is not possible, then inform the different teams (raise the CRQ (Change Request)) for
increasing the file system.
(a) First take business approval and raise the CRQ to the monitoring team to ignore the alerts
from the system, to the application team to stop the application and to the database team to stop the
database.
(b) Normally team lead or tech lead or manager will do this by initiate the mail thread.
(c) We will do this on weekend to reduce the business impact.
(iv) First take a backup of the file system then unmount the file system.
(v) Remove that partition and again create that file system with increased size, then mount again that
file system and restore the backup.
(vi) If the file system holds system log files or other log files that must not be deleted, then we are requested
to provide a repository server (only for log files). Normally a script will automatically
redirect the log files to that repository server.
(vii) Sometimes we will delete file contents not the files to reduce the file sizes. For that we execute the
command # cat /dev/null ><file name with path> ie., nullifying the files.
(ix) If it is root file system or O/S file system,
(a) may be /opt full or may be /var full or may be /tmp full
(b) In /var/log/secure or /var/log/system or /var/tmp files may be full. If those files are
important then redirect them to other central repository server or backup those files and
nullifying those files.
(c) If /home directory is present in root ( / ) file system then this file system full will occur.
Generally /home will be separated from root file system and created as separate /home file
system. If /home is in root ( / ) as a directory then create a separate file system for
/home and copy those files and directories belongs to /home and remove that /home
directory.
(d) If root ( / ) is full then cannot login to the system. So, boot with net or CDROM in single
user mode and do the above said.
(x) Normally if file system is other than O/S then we will inform to that respective manager or owner
and take the permissions to remove unnecessary files through verbal permission or CRQ .
23. CPU utilization full, how to troubleshoot it?
(a) Normally we get these scenarios on weekends because backup team will take heavy backups.
(b) First check which processes are using more CPU by # top, take a snapshot of those
user processes, send the snapshot and inform that user to kill the unnecessary
processes.
(c) If those processes are backups then inform to the backup team to reduce the backups by stopping
some backups to reduce the CPU utilization.
(d) Sometimes in peak stages (peak hours means business hours) the CPU utilization becomes full and
gets back to the normal position automatically after some time (within seconds). But a ticket is raised
by the monitoring team. So, we have to take a snapshot of that peak stage, attach that snapshot
to the raised ticket and close that ticket.
(e) Sometimes if heavy applications are running and not to kill (ie., business applications), then if any
spare processor is available or other low load CPUs available then move those heavy
application processes to those CPUs.
(f) If no spare CPUs are available and the system supports another CPU, then inform the data
centre people or the CPU vendor to purchase a new CPU through business approval, and move some
processes to the newly purchased CPUs.
24. How to troubleshoot when the system is slow?
(a) System slow means the end users response is slow.
(b) Check the Application file system, CPU utilization, memory utilization and O/S file system
utilization.
(c) If all are ok, then check network statistics and interfaces whether the interfaces are running in full
duplex mode or half duplex mode and check whether the packets are missing. If all are ok from our
side then,
(d) Inform to network team and other respective teams to solve this issue.
25. How to troubleshoot if the node is down?
(a) Check pinging the system. If pinging, then check whether the system is in single user mode or
not.
(b) If the system is in single user mode then put the system in multi user mode ie., default run
level by confirming with our team whether system is under maintenance or not.
(c) Check in which run level the system is running. If it is in init 1 it will not be able to ping. If it is
in init s then it will ping.
(d) In this situation also, if it is not pinging then try to login through the console port. If that is not possible
then inform the data centre people to hard boot the system.
(e) If connected through the console port then we may get the console prompt.
26. How to troubleshoot if the memory utilization full?
(a) Check how much memory is installed in the system by # dmidecode -t memory command.
(b) Check the memory utilization by the # free -m or # vmstat -s command.
(c) Normally application or heavy backups utilize more memory. So, inform to application team
or backup team or other teams which team is utilizing the more memory to reduce the processes by
killing them or pause them.
(d) Try to kill or disable or stop the unnecessary services.
(e) If all the ways are not possible then inform to team lead or tech lead or manager to increase
the memory (swap space). If it is also not possible then taking higher authority's permissions to
increase the physical memory. For those we contact the server vendor and co-ordinate
with them through data centre people to increase the RAM size.
27. How to replace the failed hard disk?
(a) Check whether the disk is failed or not by # iostat -En | grep -i hard/soft command.
(b) If hard errors are above 20 then we will go for replacement of the disk.
(c) If the disk is from SAN people then we will inform to them about the replacement of the disk.
If it is internal disk then we raise the CRQ to replace the disk.
(d) For this we will consider two things.
(i) whether the system is within the warranty, or
(ii) without warranty.
(e) We will directly call to the toll free no. of the system vendor and raise the ticket. They will
issue the case no. This is the no. we have to mention in all correspondences to vendor
regarding this issue.
(f) If it is under warranty they ask for the rack no., system no. and other details, and replace the hard
disk in co-ordination with the data centre people.
(g) If it is not under warranty, we have to solve the problem on our own or re-negotiate the agreement to
extend the warranty and solve the problem.
28. How to replace the processor?
(a) Check the processor's status using # lscpu or # dmidecode -t processor commands.
(b) If it shows any errors then we have to replace the processor.
(c) Then raise the case to vendor by toll free no. with higher authorities permission.
(d) The vendor will give case no. for future references.
(e) They also ask for the rack no. and system no. of the data centre for the processor replacement.
(f) We will inform to the Data centre people to co-ordinate with vendor.
29. How replace the failed memory modules?
Causes :
(a) The system is continuously rebooting .
(b) When in peak business hours, if the heavy applications are running the system get panic
and rebooted. This is repeating regularly.
Solution :
(a) First we check how much RAM present in the system with # dmidecode -t memory
command.
(b) Then we raise the case to vendor with the help of higher authorities.
(c) Then the vendors will provide the case no. for future reference.
(d) They will also ask for the rack no. and system no. to replace the memory.
(e) We will inform the data centre people to co-ordinate with the vendor.
30. What is your role in DB patching?
In Database patching the following teams will be involved.
(i) Database Administrator (DBA) team.
(ii) Linux Administrators team.
(iii) Monitoring team.
(iv) Application team.
(i) DBA team :
This is the team to apply the patches to the databases.
(ii) Linux team :
This team is also involved if any problems occur. If the database volume has a mirror we
should first break the mirror and then the DBA people will apply the patches. After 1 or 2 days, if
there is no problem, we need to sync the data from the mirrored volume to the patch applied
volume again. If there is no space for the patch we have to provide space to the DBA team.
(iii) Monitoring team :
This team should receive requests or suggestions to ignore any problems that occur. After the
patch is applied, if the system is automatically rebooted, then the monitoring team will raise the ticket
"Node down" to the system administrators team. So, to avoid those types of tickets we have to
send requests to ignore those types of alerts.
(iv) Application team :
For applying any patches, the databases should not be available to application. So, if
suddenly database is not available then application may be crashed. So, first the application
should be stopped. This will be done by application team.
31. What is SLA?
A service-level agreement (SLA) is simply a document describing the level of service expected by a
customer from a supplier, laying out the metrics by which that service is measured and the remedies or
penalties, if any, should the agreed-upon levels not be achieved. Usually, SLAs are between companies and
external suppliers, but they may also be between two departments within a company .
32. What is Problem Management?
The objective of Problem Management is to minimize the impact of problems on the organisation.
Problem Management plays an important role in the detection and providing solutions to problems
(work around& known errors) and prevents their reoccurrence.
A 'Problem' is the unknown cause of one or more incidents, often identified as a result of multiple
similar incidents. A 'Known error' is an identified root cause of a Problem.
33. What is Incident Management?
An 'Incident' is any event which is not part of the standard operation of the service and which causes
or may cause, an interruption or a reduction of the quality of the service.
The objective of Incident Management is to restore normal operations as quickly as possible with the
least possible impact on either the business or the user, at a cost-effective price.
Inputs for Incident Management mostly come from users, but can have other sources as well like
management Information or Detection Systems. The outputs of the process are RFC’s (Requests for
Changes), resolved and closed Incidents, management information and communication to the customer.
34. What is Change Management?
Change management is a systematic approach to dealing with change, both from the perspective of
an organization and on the individual level. change management has at least three different aspects,
including adapting to change, controlling change, and effecting change. A proactive approach to
dealing with change is at the core of all three aspects. In an information technology (IT) system environment,
change management refers to a systematic approach to keeping track of the details of the system (for
example, what operating system release is running on each computer and which fixes have been applied).
35. What is Request Management?
service request management (SRM) is the underlying workflow and processes that enable an IT
procurement or service request to be reliably submitted, routed, approved, monitored and delivered. SRM is
the process of managing a service request through its lifecycle from submission through delivery and follow-
up.
SRM may be manual or automated. In a manual system, a user calls a help desk to request a service,
and help desk personnel create a service ticket to route the service request. In an automated system,
the user submits a request through an online service catalog, and the application software
automatically routes the request through the appropriate processes for approval and service delivery. These
systems also typically enable users to track the status of their service requests, and management to monitor
service delivery levels for quality control purposes.
36. What is grep?
(i) grep means Globally search for Regular Expression.
(ii) Using grep we can filter the results to get a particular information.
(iii) We can get only information about what string we have specified in grep command.
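Example (the file name and the search word here are only illustrations) :
# grep -i "error" /var/log/messages (to display only the lines of the file that contain the word error, ignoring the case)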
37. What are pipes and filters in Linux?
Pipes :
(a) Pipes are nothing but adding two commands and make as one command.
(b) Normally we cannot combine two commands, but using pipes we get one command by
combining two commands.
(c) So, we can get the results as what we required.
Filters :
(a) Filters are nothing but filtering the results what we required.
(b) Using filters we can get exact results depends upon what we specified in the expression.
(c) So, there is no wastage of time because it filters results what we specified in the command
expression.
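Example (the service name here is only an illustration) :
# ps -ef | grep httpd | wc -l (the output of ps is passed through the pipe to grep, which filters the httpd lines, and then to wc, which counts them)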
38. What is the full form of COMPUTER ?
C ----->Commonly
O -----> Operated
M -----> Machine
P -----> Particularly
U ----->Used
T ----->Technical and
E ----->Educational
R -----> Research
39. What is the command in sar to monitor NIC devices received/transmitted packets?
# sar -n DEV 1 5
This will show 5 consecutive outputs, each with a time interval of 1 sec, for all the Ethernet devices.
40. What is Linux Kernel?
It acts as an interpreter between Linux OS and its hardware. It is the fundamental component of Linux
OS and contains hardware drivers for the devices installed on the system. The kernel is the part of the system
which loads first and stays in memory.
41. What are the main parameters effect on server performance?
The one of the most important task of any Linux Admin includes performance monitoring which
includes a parameter "Load Average" or "CPU Load".
So the no. of cores would be 2 x 4 = 8 cores.
48. What do you understand the Load Average?
If the number of active tasks utilizing CPU is less as compared to available CPU cores then the load
average can be considered normal but if the no. of active tasks starts increasing with respect to available
CPU cores then the load average will start rising. For example,
# uptime
00:43:58 up 212 days, 14:19, 4 users, load average: 6.07, 7.08, 8.07
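* To judge whether such a load average is high, compare it with the number of available CPU cores (standard commands) :
# nproc (to display the number of processing units available)
# grep -c processor /proc/cpuinfo (another way to count the logical CPUs)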
49. How to check all the current running services in Linux?
To find the status of any single service :
# service vsftpd status
vsftpd (pid 5909) is running...
To get the status of all the running services :
# service --status-all | grep running
acpid (pid 5310) is running...
atd (pid 6528) is running...
auditd (pid 5012) is running...
Avahi daemon is not running
Avahi DNS daemon is not running
The Pegasus CIM Listener is running.
The Pegasus CIM Object Manager is running.
crond (pid 6242) is running...
dcerpcd (pid 5177) is running...
eventlogd (pid 5223) is running...
In case you don't use grep you will be able to see all the services on your machine :
# service --status-all
NetworkManager is stopped
acpid (pid 5310) is running...
anacron is stopped
atd (pid 6528) is running...
auditd (pid 5012) is running...
automount is stopped
Avahi daemon is not running
Avahi DNS daemon is not running
hcid is stopped
sdpd is stopped
You can also check the active ports along with their services using :
# netstat -ntlp
Active Internet connections (only servers)
Protocol Recv-Q Send-Q Local Address Foreign Address State
PID/Program name
tcp 0 0 0.0.0.0:52961 0.0.0.0:* LISTEN
5223/eventlogd
tcp 0 0 0.0.0.0:5988 0.0.0.0:* LISTEN
6116/cimserver
tcp 0 0 0.0.0.0:5989 0.0.0.0:* LISTEN
6116/cimserver
tcp 0 0 0.0.0.0:678 0.0.0.0:* LISTEN
5160/rpc.statd
tcp 0 0 0.0.0.0:14247 0.0.0.0:* LISTEN
6460/java
tcp 0 0 127.0.0.1:199 0.0.0.0:* LISTEN
5857/snmpd
tcp 0 0 0.0.0.0:135 0.0.0.0:* LISTEN
5177/dcerpcd
50. How do you check Linux machine is Physical or Virtual remotely?
There is no hard and fast rule to check whether the machine is physical or virtual but still we do have
some commands which can be used for the same purpose.
The command used to view all the required hardware related information for any Linux machine is
# dmidecode
But the output would be very long and it is hard to find the specific details we are looking for. So, let's narrow
it down.
Physical Servers:
# dmidecode -s system-product-name
System x3550 M2 -[7284AC1]-
Now to get more details about the system
# dmidecode | less (And search for "System Information")
System Information
Manufacturer: IBM
Product Name: System x3550 M2 -[7284AC1]-
Version: 00
Wake-up Type: Other
SKU Number: XxXxXxX
Family: System x
Virtual Servers :
# dmidecode -s system-product-name
VMware Virtual Platform
# dmidecode | less
System Information
Manufacturer: VMware, Inc.
Product Name: VMware Virtual Platform
Version: None
Wake-up Type: Power Switch
SKU Number: Not Specified
Family: Not Specified
On a virtual server running VMware you can run the below command to verify :
# lspci | grep -i vmware
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
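* Another quick check, assuming the virt-what package is installed :
# virt-what (prints the virtualization type, for example vmware or kvm; prints nothing on a physical machine)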
51. How to find the bit size of your linux machine?
# uname -m
i686
# uname -m
x86_64
If we get i386, i586 or i686 that signifies the machine is 32-bit, but if we
get x86_64 or ia64 then the machine is 64-bit.
# getconf LONG_BIT
32
# getconf LONG_BIT
64 (Here we get an output of bit size either 32 or 64)
52. How can you add a banner or login message in Linux?
By editing these two files
/etc/issue
/etc/motd
53. What is the difference between normal kernel and kernel-PAE?
The normal kernel on a 32 bit machine supports a max of 4 GB RAM, whereas the kernel-PAE (Physical
Address Extension) kernel on a 32 bit machine can address more than 4 GB of RAM (up to 64 GB).
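A quick way to check which kernel is in use (the kernel-PAE package exists only for 32-bit RHEL) :
# uname -r (a PAE kernel shows a version string ending in PAE)
# rpm -q kernel-PAE (to check whether the kernel-PAE package is installed)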
(a) Put the RHEL - 6 DVD into the DVD drive and go to Packages directory.
# cd /media/RHEL6/Packages
(b) Install the vsftpd package to configure the FTP server.
# rpm -ivh vsftpd*
(c) Copy the entire RHEL - 6 DVD contents into the /var/ftp/pub/rhel6 directory.
# cp -rvpf /media/RHEL6/* /var/ftp/pub/rhel6/
(d) Restart, enable the ftp service at next boot, add the service to IP tables and restart the IP
tables.
# service vsftpd restart
# chkconfig vsftpd on
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
# iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 21 -j ACCEPT
# iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 20 -j ACCEPT
# iptables -A OUTPUT -m state --state NEW -m tcp -p tcp --dport 20 -j ACCEPT
# service iptables save
# service iptables restart
# chkconfig iptables on
(e) Configure the network as static by # setup command and restart the network and
NetworkManager.
(f) Configure the yum server.
# vim /etc/yum.repos.d/linux.repo
[linux]
name=Linux yum server
baseurl=ftp://172.25.9.11/pub/rhel6 (Specify the
FTP server IP address)
gpgcheck=0
enabled=1
(save and exit the file)
# yum clean all
# yum repolist
(g) Configure the DHCP server.
# yum install dhcp* -y
# cp -rvpf /usr/share/doc/dhcp-4.1.1/dhcpd.conf.sample /etc/dhcp/dhcpd.conf
# vim /etc/dhcp/dhcpd.conf
Go to line number 47 and edit the line as below.
subnet 172.25.9.0 netmask 255.255.255.0 {
range 172.25.9.50 172.25.9.200;
* comment on next two lines
option routers 172.25.9.11;
option broadcast-address 172.25.9.255;
default-lease-time 600;
max-lease-time 7200;
allow booting;
allow bootp;
next-server 172.25.9.11;
filename "pxelinux.0";
authoritative;
(save and exit this file)
# service dhcpd restart
# chkconfig dhcpd on
# iptables -A INPUT -m state --state NEW -m udp -p udp --dport 67 -j ACCEPT
# iptables -A OUTPUT -m state --state NEW -m udp -p udp --dport 68 -j ACCEPT
Switchover :
(i) Switchover is a manual task.
(ii) We can switch over service groups from the online cluster node to the offline cluster node in case of
power outage, hardware failure, scheduled shutdown and reboot.
Failover :
(i) Failover is an automatic task.
(ii) Failover will fail over the service group to the other node when the Veritas Cluster heartbeat
link is down, damaged or broken because of some disaster, or when the system hangs.
7. Which the main configuration file for VCS (Veritas Cluster) and where it is stored?
' main.cf ' is the main configuration file for VCS and it is located in /etc/VRTSvcs/conf/config
directory.
8. What is the public region and private region?
when we bring the disk from O/S control to Volume Manager control in any format (either CDS,
simple or sliced), the disk is logically divided into two parts.
(a) Private region :
It contains Veritas configuration information like disk type and name, disk group name,
groupid and configdb. The default size is 2048 KB.
(b) Public region :
It contains the actual user's data like applications, databases and others.
9. There are five disks on VxVM (Veritas Volume Manager) and all are failed. What are the steps
you follow to get those disks into online?
(i) Check the list of disks in Volume manager control by # vxdisk list command.
(ii) If the above disks are not present, then bring them O/S control to VxVM control by
# vxdisksetup -i <disk names> (if data is not on those disk) or execute
# vxdiskadm command and select 2nd option ie., encapsulation method if the disks
having the data.
(iii) Even then, if it is not possible, check whether the disks are available at the O/S level by the
# fdisk -l command.
(a) If the disks are available, execute the above command once again.
(b) If the disks are not available then recognize them by scanning the hardware.
(iv) Even though if it is not possible, then reboot the system and follow the steps (i) and (ii).
10. What is the basic difference between private disk group and shared disk group?
Private disk group :
The disk group is only visible for the host on which we have created it. If the host is a part of the
cluster, the private disk group will not be visible to the other cluster nodes.
Shared disk group :
The disk group is sharable and visible to the other cluster nodes.
11. How will you create private disk group and shared disk group?
# vxdg init <disk group name><disk media name>=<O/S disk name> (to create the
private disk group)
# vxdg -s init <disk group name><disk media name>=<O/S disk name>(to create the shared disk
group)
12. How will you add new disk to the existing disk group?
we can do this in two ways.
(i) Run # vxdiskadm command, which will open menu driven program to do various disk
operations. Select add disk option and give disk group name and disk name.
(ii) # vxdg -g <disk group name> adddisk <disk media name>=<O/S disk name>
Example: # vxdg -g appsdg adddisk disk02=/dev/sdb
13. How will you grow or shrink the volume/file system? What is the meaning of grow by, grow to,
shrink by and shrink to options?
(i) We can grow the volume/file system by,
# vxassist -g appsdg growby appsvol 100GB (or) # vxassist -g appsdg growto appsvol 100GB
# vxresize -g appsdg appsvol +100GB alloc=<disk name>
(ii) We can shrink the volume/file system by,
# vxassist -g appsdg shrinkby appsvol 20GB
# vxassist -g appsdg shrinkto appsvol 20GB (or)
# vxresize -g appsdg appsvol -10GB (to shrink by the size 10GB)
# vxresize -g appsdg appsvol 10GB (to shrink to the size 10GB)
Meanings :
growby :
This will be used to grow the file system by adding new size to the existing file system.
growto :
This will be used to grow the file system up to the specified new size. This will not add
the new size to the existing one.
shrinkby :
This will be used to shrink the file system by reducing the new size from the existing file
system size.
shrinkto :
This will be used to shrink the file system upto the specified new size. This will not be reduced
the file system new size from the existing one.
14. If vxdisk list command gives you disk status as " error ". What are the steps you follow to make
respective disk online?
This issue is mainly because of fabric disconnection. So, execute # vxdisk scandisks command.
Otherwise unsetup the disks using # /etc/vx/bin/vxdiskunsetup and setup the disks again using
# /etc/vx/bin/vxdisksetup command.
Note :/etc/vx/bin/vxdiskunsetup will remove the private region from the disk and destroy the data. So,
backup the data before using this command.
18. Define LLT and GAB. What are the commands to create them?
LLT :
(i) LLT means Low Latency Transport protocol.
(ii) It monitors the kernel to kernel communication.
(iii) It maintains and distributes the network traffic within the cluster.
(iv) It uses heartbeats between the interfaces.
GAB :
(i) GAB means Group Membership Services/Atomic Broadcast.
(ii) It maintains and distributes the configuration information of the cluster.
(iii) It uses heartbeats between the disks.
Commands :
# gabconfig -a (to check the status of the GAB, ie., GAB
is running or not)
If port ' a ' is listening, means GAB is running, otherwise GAB is not running.
If port ' b ' is listening, means I/O fencing is enabled, otherwise I/O fencing is
disabled.
If port ' h ' is listening, it means the had daemon is working, otherwise the had daemon is
not working.
# gabconfig -c -n 2 (to start the GAB on 2 systems in the cluster,
where 2 is the seed no.)
# gabconfig -u (to stop the GAB)
# cat /etc/gabtab (to see the GAB configuration information;
it contains,)
gabconfig -c -n x (where x is a no. ie., 1, 2, 3, ....etc.,)
# lltconfig -a (to see the status of the llt)
# lltconfig -c (to start the llt)
# lltconfig -u (to stop the llt)
# lltstat -nvv (to see the traffic status between the interfaces)
# llttab -a (to see the cluster ID)
# haclus -display (to see all the information on the cluster)
# cat /etc/llttab (to see the llt configuration and the entries are as,)
Cluster ID, host ID, interface MAC address, ...etc.,
# cat /etc/llthosts (to see the no. of nodes present in the cluster)
19. How to check the status of the Veritas Cluster?
# hastatus -summary
20. Which command is used to check the syntax of the main.cf?
# hacf -verify /etc/VRTSvcs/conf/config
21. How will you check the status of the individual resources of Veritas Cluster (VCS)?
# hares -state <resource name>
22. What is the use of # hagrp command?
# hagrp command is used for doing administrative actions on service groups, like bringing a service group
online or offline and switching it between systems, ...etc.,
23. How to switch over the service group?
# hagrp -switch <service group name> -to <System B>
24. How to online the service group in VCS?
# hagrp -online <service group name> -sys <System A>
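The corresponding command to take the service group offline (standard VCS syntax) is,
# hagrp -offline <service group name> -sys <System A>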
25. What are the steps to follow for switch over the application from System A to System B?
(i) First unmount the file system on System A.
(ii) Stop the volume on System A.
(iii) Deport the disk group from System A.
(iv) Import the disk group to another System B.
(v) Start the volume on System B.
(vi) Finally mount the file system on System B.
26. How many types of clusters available?
(i) Hybrid Cluster.
(ii) Parallel Cluster.
(iii) Failover Cluster.
27. What is meant by seeding?
Normally, we define how many nodes must be up to start a cluster, either while booting or explicitly by executing the
# gabconfig -c -n 2 command. Here 2 means 2 nodes are required to seed the cluster. This no. is called
seeding.
28. What is Split brain issue in VCS and how to resolve this?
A Split brain issue means, multiple systems use the same exclusive resources and usually resulting in
data corruption.
Normally VCS is configured with multiple nodes which communicate with each other. When a
power loss occurs or a system crashes, VCS assumes the system has failed and tries to move the service group to
another system to maintain high availability. However, the communication (heartbeat) can also fail due to
network failures.
If network traffic (connection) between any two groups of systems fail simultaneously, a network
partition occurs. When this happen, systems on both sides of the partition can restart the applications
from the other side, ie., resulting in duplicate services. So, the most serious problem caused by this and
effects the data on shared disks.
This split brain issue normally occurs in VCS 3.5 to VCS 4.0 versions. But, from VCS 5.0 onwards the
I/O fencing (new feature) is introduced to minimize the split brain issue. If I/O fencing is enabled in a
cluster, then we can avoid the split brain issue.
29. What is Admin wait and Stale Admin wait?
ADMIN-WAIT :
If VCS is started on a system with a valid configuration file and the other systems are in the ADMIN-WAIT
state, the new system transitions to the ADMIN-WAIT state, (or)
if VCS is started on a system with a stale configuration file and the other systems are in the ADMIN-
WAIT state, the new system transitions to the ADMIN-WAIT state.
STALE-ADMIN-WAIT :
The configuration file is normally in read-only mode; to make any changes we have to make it read-write.
If any changes are being made to the ' main.cf ' file in the cluster, then the changes are kept in a ' .stale ' hidden file
under the configuration directory. If the system is restarted or rebooted while the changes are in progress, then the
cluster will start with the ' .stale ' file. So, when VCS is started on a system with a stale configuration file, the
system status will be STALE-ADMIN-WAIT until another system in the cluster starts with a valid
configuration file, or otherwise execute
# hasys -stale -force <system name> (or) # hasys -force <system name> to start the
system forcefully with the correct or valid configuration file.
30. What is meant by resource and how many types?
Resource is a software or hardware component managed by the VCS.
Mount points, disk groups, volumes, IP addresses, ....etc., are the Software components.
Disks, Interfaces (NIC cards), ....etc., are the Hardware components.
There are two types of resources and they are,
(i) Persistent resources (VCS only monitors them; we cannot bring them online or offline, for example a NIC).
(ii) Non-persistent (on-off) resources (VCS can bring them online and offline, for example mount points and IP addresses).
If a resource is in the faulted state, then clear the fault on the service group. Resources can be critical or non-
critical. If a critical resource fails, the service group automatically fails over. If a non-critical resource fails,
it does not fail over automatically and we have to manually switch over the service group to another
available system.
31. What are the dependencies between resources in a Cluster?
If one resource depends on other resource, then there is a dependency between those resources.
Example : A NIC (Network Interface Card) is a hardware component, nothing but a hardware resource. An
IP address is a software component, nothing but a software resource, and it depends on the NIC card. The
relationship between the NIC and the IP address is a Parent - Child relationship: the resource that depends on
another (here the IP address) is called the Parent resource and the resource it depends on (here the NIC) is called
the Child resource. The child resources are brought online first.
32. What are the minimum requirements for or in VCS?
(i) Minimum two identical (same configuration) systems.
(ii) Two switches (Optical Fibre Channel).
(iii) Minimum three NIC cards. (Two NICs for private network and one NIC for public network).
(iv) One common storage.
(v) Veritas Volume Manager with license.
(vi) Veritas Cluster with license.
33. What are the Veritas Cluster daemons?
(i) had :
(a) It is the main daemon in Veritas Cluster for high availability.
(b) It monitors the cluster configuration and the whole cluster environment.
* If the volume is created for a cluster, don't put an entry for it in the /etc/fstab file.
(xi) And finally, send the mail to the client or the person who requested the change.
43. What is the difference between Global Cluster and Local Cluster? Have you configured the Global
Cluster?
Local Cluster :
If all the nodes in a cluster are placed in the same location, that cluster is called a local cluster.
Global Cluster :
If the nodes in a cluster are placed in different geographical locations, that cluster is called a global
cluster. The main advantage of a global cluster is high availability when natural calamities or disasters
occur.
Attributes :
# hagrp -modify appssg SystemList sysA 0 sysB 1 (to add sysA and sysB, with their priorities, to the
SystemList attribute of the service group)
# hagrp -modify appssg AutoStartList sysA (to make the service group start automatically
on sysA)
# hagrp -modify appssg Enabled 1 (1 enables the service group and 0 disables it)
(iii) Creating resources, adding them to the service group and specifying their attributes.
For the file system :
(a) /mnt/apps (the mount point)
(b) appsvol (the volume name)
(c) appsdg (the disk group)
# hares -add dg-apps DiskGroup appssg (to add a DiskGroup type resource to the
service group)
(where dg-apps is the resource name, DiskGroup is the resource type and appssg is the
service group name)
# hares -modify dg-apps DiskGroup appsdg (to set the DiskGroup attribute of the
resource)
# hares -modify dg-apps Enabled 1 (to enable the resource)
# hares -add dg-volume Volume appssg (to add a Volume type resource to the
service group)
# hares -modify dg-volume Volume appsvol (to set the Volume attribute of the
resource)
# hares -modify dg-volume DiskGroup appsdg (to set the disk group the volume belongs to)
# hares -modify dg-volume Enabled 1 (to enable the volume resource)
# hares -modify dg-volume Critical 1 (to make the resource critical)
# hares -add dg-mnt Mount appssg (to add a Mount type resource to the
service group)
# hares -modify dg-mnt BlockDevice /dev/vx/dsk/appsdg/appsvol (to set the block device
attribute of the mount resource)
# hares -modify dg-mnt FSType vxfs (to set the file system type attribute)
# hares -modify dg-mnt MountPoint /mnt/apps (to set the mount point directory
attribute)
# hares -modify dg-mnt FsckOpt %-y (to set the fsck option; %-y allows fsck and
%-n does not)
(iv) Create links (dependencies) between the above disk group, volume and mount point resources.
# hares -link <parent-res> <child-res>
# hares -link dg-volume dg-apps (the volume resource depends on the disk group resource)
# hares -link dg-mnt dg-volume (the mount resource depends on the volume resource)
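A typical finishing sequence after the resources and links are configured (a sketch; sysA is an assumed system name, and the configuration is assumed to have been opened earlier with # haconf -makerw) :
# haconf -dump -makero (to save the configuration to main.cf and make it read-only again)
# hagrp -online appssg -sys sysA (to bring the service group on-line on sysA)
# hagrp -state appssg (to verify the state of the service group)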
47. What is meant by freezing and unfreezing a service group with persistent and evacuate options?
Freezing :
If we want to apply patches to a system in a cluster, we have to freeze its service group. Otherwise, as
soon as we stop the service group (or a critical resource in it fails), the service group will move
automatically to another system in the cluster. So, when we do not want the service group to move from
one system to another during maintenance, we freeze the service group.
Unfreeze :
After completing the task, the service group should be unfrozen. If the service group remains frozen and
system 1 crashes or goes down, the service group cannot move from system 1 to system 2, which
results in the application not being available. If the service group is unfrozen after maintenance, it can
move from system 1 to system 2. So, if system 1 fails, system 2 takes over and the application remains
available.
Persistent option :
If the service group is frozen with the persistent option, the freeze survives a stop, crash or restart of
the system; the freeze information is not lost, and after the system is restarted the service group remains
in the frozen state.
Example : # hasys -freeze -persistent <system name>
# hasys -unfreeze -persistent <system name>
Evacuate :
If this option is used while freezing a system, all the service groups on that system are evacuated first,
i.e., before the freeze takes effect, all the service groups are moved from system 1 to another system
(system 2).
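The equivalent service group level commands (a sketch reusing the appssg group from the earlier example; a persistent group freeze needs the configuration to be writable) :
# hagrp -freeze appssg -persistent (to freeze the service group persistently)
# hagrp -unfreeze appssg -persistent (to unfreeze the service group)
# hasys -freeze -persistent -evacuate <system name> (to evacuate all service groups from the system and then freeze it)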
48. What layouts are available in VxVM, how do they work and how are they configured?
(i) There are 5 layouts available in VxVM. They are RAID-0, RAID-1, RAID-5, RAID-0+1 and
RAID-1+0.
RAID-0 :
We can configure RAID-0 in two ways.
(a) Striped (default).
(b) Concatenation.
Striped :
(i) A minimum of two disks is required to configure it.
(ii) The data is written to both disks in parallel, i.e., one stripe on the first disk, the next stripe on
the second disk, and so on.
(iii) The data writing speed is fast.
(iv) There is no redundancy for the data.
Concatenation :
(i) A minimum of one disk is required to configure it.
(ii) The data is written to the first disk, and only after the first disk is full is it written to the second disk.
(iii) The data writing speed is lower.
(iv) There is also no redundancy for the data.
RAID-1 :
(i) It is nothing but mirroring.
(ii) A minimum of two disks is required (the example here uses four disks arranged as two mirrored
pairs).
(iii) The same data is written on disk 1 and disk 3, and on disk 2 and disk 4.
(iv) If disk 1 fails, we can recover the data from disk 3, and if disk 2 fails, we can recover the data
from disk 4. So, there is no data loss, or the data loss is minimized.
(v) Half of the disk space may be wasted (it is used for the mirror copies).
RAID-5 :
(i) It is nothing but striping with distributed parity.
(ii) A minimum of 3 disks is required to configure it.
(iii) One stripe is written on disk 1, the next stripe on disk 2, and the parity for them on disk 3; the
parity is distributed across all the disks. If disk 1 fails, the data can be rebuilt from the stripe on disk 2
and the parity on disk 3. So, the data is better protected.
(iv) Disk utilization is better when compared to RAID-1; with 3 disks, only 1/3 of the disk space is
used for parity.
(v) RAID-5 is configured for critical applications like banking, financial and insurance applications,
etc., because the data must be well protected.
Creating a volume with a layout :
# vxassist -g <diskgroup name> make <volume name> <size> layout=<stripe/mirror/raid5>
Example : # vxassist -g appsdg make appsvol 50g layout=raid5 (to create a 50 GB RAID-5
volume in the appsdg disk group)
Logs :
* If the layout is mirror, then the log is a DRL (Dirty Region Log).
* If the layout is RAID-5, then the log is a RAID-5 log.
* The main purpose of the log is faster recovery after a crash.
* We have to specify whether a log is required or not for all layouts except RAID-5,
because logging is enabled by default for RAID-5.
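For example (a minimal sketch reusing the appsdg disk group from above; the volume name and size are illustrative), a mirrored volume with a DRL log can be created and then checked :
# vxassist -g appsdg make appsmirrorvol 10g layout=mirror nmirror=2 logtype=drl (mirrored volume with a dirty region log)
# vxprint -g appsdg -ht (to verify the volume, its plexes and the log)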
License :
(i) All the licenses are stored in the /etc/vx/licenses directory; we can take a backup of this
directory and restore it back if we need to reinstall the server.
(ii) Removing the VxVM package will not remove the installed license.
(iii) To install a license, the # vxlicinst command is used.
(iv) To see the VxVM license information, use the # vxlicrep command.
(v) To remove the VxVM keyless licenses, use the # vxkeyless set NONE command.
(vi) The licensing utilities (such as vxlicrep) are installed in the /opt/VRTSvlic/bin directory.
(vii) The license keys are stored in the /etc/vx/licenses/lic directory.
(viii) We can see the licenses by executing the commands below,
# cat /etc/vx/licenses/lic/<key file> or
# /opt/VRTSvlic/bin/vxlicrep | grep "License key"
(ix) To see the features of a license key, use the # vxdctl license command.
Version :
(i) We are using the VxVM 6.2 version.
(ii) To know the version of VxVM, use the # rpm -qa VRTSvxvm command.
54. What formats are available to take control of disks from the O/S into Veritas in VxVM?
We can take control of disks from the O/S into Veritas in 3 formats.
(i) CDS (Cross-platform Data Sharing, the default format in VxVM).
(ii) Sliced.
(iii) Simple.
(i) CDS :
(a) We can share the data between different Unix flavours.
(b) Both the private and public regions are kept in the 7th partition.
(c) The entire space is in the 7th partition.
(d) So, there is a chance of losing data because, if the disk fails, i.e., partition 7 is
corrupted or damaged, the data may be lost.
(e) This is the default format in Veritas Volume Manager.
(ii) Sliced :
(a) It is normally used for the root disk only.
(b) In this format we cannot share the data between different Unix flavours. Normally sliced
is used for the root disk and CDS is used for data disks.
(c) The private region is kept in the 4th partition and the public region in the 3rd partition.
(d) So, if the public region fails, the configuration in the private region is still intact, which
helps to minimize the data loss.
(iii) Simple :
(a) This format is not widely used now because it belongs to the old VxVM 3.5 days.
(b) In this format the private and public regions are both kept in the 3rd partition.
Specifying the format while setting up a disk :
# vxdisksetup -i sda (to set up the disk with the default format, i.e., CDS)
# vxdisksetup -i sdb format=sliced (to specify the sliced format)
# vxdisksetup -i sdb format=simple (to specify the simple format)
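To confirm which format a disk received (sdb is just the example device used above) :
# vxdisk list (lists all disks with their type and format, e.g., auto:cdsdisk or auto:sliced)
# vxdisk list sdb | grep format (shows the format line for that particular disk)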
55. In how many ways can we manage VxVM?
(i) Command line tools.
(ii) GUI (the VEA tool).
(iii) The # vxdiskadm command (it gives a menu of options to manage the disks).
30. RedHat Cluster
1. How can you define a cluster and what are its basic types?
A cluster is two or more computers (called nodes or members) that work together to perform a task.
There are four major types of clusters:
Storage
High availability
Load balancing
High performance
2. What is Storage Cluster?
Storage clusters provide a consistent file system image across servers in a cluster, allowing the servers to
simultaneously read and write to a single shared file system.
A storage cluster simplifies storage administration by limiting the installation and patching of applications to
one file system.
The High Availability Add-On provides storage clustering in conjunction with Red Hat GFS2
3. What is High Availability Cluster?
High availability clusters provide highly available services by eliminating single points of failure and by failing
over services from one cluster node to another in case a node becomes inoperative.
Typically, services in a high availability cluster read and write data (via read-write mounted file systems).
A high availability cluster must maintain data integrity as one cluster node takes over control of a service from
another cluster node.
Node failures in a high availability cluster are not visible from clients outside the cluster.
High availability clusters are sometimes referred to as failover clusters.
4. What is Load Balancing Cluster?
Load-balancing clusters dispatch network service requests to multiple cluster nodes to balance the request load
among the cluster nodes.
Load balancing provides cost-effective scalability because you can match the number of nodes according to
load requirements. If a node in a load-balancing cluster becomes inoperative, the load-balancing software
detects the failure and redirects requests to other cluster nodes.
Node failures in a load-balancing cluster are not visible from clients outside the cluster.
Load balancing is available with the Load Balancer Add-On.
5. What is a High Performance Cluster?
High-performance clusters use cluster nodes to perform concurrent calculations.
A high-performance cluster allows applications to work in parallel, therefore enhancing the performance of the
applications.
High performance clusters are also referred to as computational clusters or grid computing.
6. How many nodes are supported in Red hat 6 Cluster?
A cluster configured with qdiskd supports a maximum of 16 nodes. The reason for the limit is
scalability: increasing the node count increases the amount of synchronous I/O contention on the
shared quorum disk device.
8. What is the order in which you will start the Red Hat Cluster services?
In Red Hat 4 :
# service ccsd start
# service cman start
# service fenced start
# service clvmd start (if CLVM has been used to create clustered volumes)
# service gfs start
# service rgmanager start
In RedHat 5 :
# service cman start
# service clvmd start
# service gfs start
# service rgmanager start
In Red Hat 6 :
# service cman start
# service clvmd start
# service gfs2 start
# service rgmanager start
9. What is the order to stop the Red Hat Cluster services?
In Red Hat 4 :
# service rgmanager stop
# service gfs stop
# service clvmd stop
# service fenced stop
# service cman stop
# service ccsd stop
In Red Hat 5 :
# service rgmanager stop
# service gfs stop
# service clvmd stop
# service cman stop
In Red Hat 6 :
# service rgmanager stop
# service gfs2 stop
# service clvmd stop
# service cman stop
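If the cluster services should also come up automatically at boot on Red Hat 6 (an assumption about the desired setup, not stated above), they can be enabled in the same order with chkconfig :
# chkconfig cman on
# chkconfig clvmd on
# chkconfig gfs2 on
# chkconfig rgmanager on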
10. What are the performance enhancements in GFS2 as compared to GFS?
Better performance for heavy usage in a single directory
Faster synchronous I/O operations
Faster cached reads (no locking overhead)
Faster direct I/O with preallocated files (provided I/O size is reasonably large, such as 4M blocks)
Faster I/O operations in general
Faster Execution of the df command, because of faster statfs calls
Improved atime mode to reduce the number of write I/O operations generated by atime when compared with
GFS
GFS2 supports the following features.
extended file attributes (xattr)
the lsattr() and chattr() attribute settings via standard ioctl() calls
nanosecond timestamps
GFS2 uses less kernel memory.
GFS2 requires no metadata generation numbers.
Allocating GFS2 metadata does not require reads. Copies of metadata blocks in multiple journals are managed
by revoking blocks from the journal before lock release.
GFS2 includes a much simpler log manager that knows nothing about unlinked inodes or quota changes.
The gfs2_grow and gfs2_jadd commands use locking to prevent multiple instances running at the same time.
The ACL code has been simplified for calls like creat() and mkdir().
Unlinked inodes, quota changes, and statfs changes are recovered without remounting the journal.
11. What is the maximum file system support size for GFS2?
GFS2 is based on 64 bit architecture, which can theoretically accommodate an 8 EB file system.
However, the current supported maximum size of a GFS2 file system for 64-bit hardware is 100 TB.
The current supported maximum size of a GFS2 file system for 32-bit hardware for Red Hat Enterprise Linux
Release 5.3 and later is 16 TB.
NOTE: It is better to have 10 1TB file systems than one 10TB file system.
12. What is the journaling filesystem?
A journaling filesystem is a filesystem that maintains a special file called a journal that is used to repair any
inconsistencies that occur as the result of an improper shutdown of a computer.
In journaling file systems, every time GFS2 writes metadata, the metadata is committed to the journal before it
is put into place.
This ensures that if the system crashes or loses power, you will recover all of the metadata when the journal is
automatically replayed at mount time.
GFS2 requires one journal for each node in the cluster that needs to mount the file system. For example, if you
have a 16-node cluster but need to mount only the file system from two nodes, you need only two journals. If
you need to mount from a third node, you can always add a journal with the gfs2_jadd command.
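For example (the mount point /mnt/gfs2 is illustrative), an extra journal can be added to a mounted GFS2 file system like this :
# gfs2_jadd -j 1 /mnt/gfs2 (adds one more journal to the file system mounted on /mnt/gfs2)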
13. What is the default size of journals in GFS2?
When you run mkfs.gfs2 without a journal size attribute to create a GFS2 partition, by default a
128MB journal is created, which is enough for most applications.
If you plan on reducing the size of the journal, it can severely affect performance. Suppose you
reduce the size of the journal to 32MB: it does not take much file system activity to fill a 32MB journal,
and when the journal is full, performance slows because GFS2 has to wait for writes to the
storage.
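A minimal creation example (the cluster name, file system name, device and sizes are all assumptions) that sets the journal count and the journal size explicitly :
# mkfs.gfs2 -p lock_dlm -t mycluster:gfsdata -j 2 -J 128 /dev/vg_cluster/lv_gfs2
(-p is the locking protocol, -t is <cluster name>:<file system name>, -j is the number of journals and -J is the journal size in MB)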
14. What is a Quorum Disk?
Quorum Disk is a disk-based quorum daemon, qdiskd, that provides supplemental heuristics to determine node
fitness.
With heuristics you can determine factors that are important to the operation of the node in the event of a
network partition.
For a 3-node cluster, quorum exists as long as 2 of the 3 nodes are active, i.e., more than half. But
what if, for some reason, the 2nd node also stops communicating with the 3rd node? In that case, under a
normal architecture, the cluster would dissolve and stop working. For mission-critical environments
and such scenarios we use a quorum disk, in which an additional disk is configured that is mounted on
all the nodes with the qdiskd service running, and a vote value is assigned to it.
So suppose in the above case I have assigned 1 vote to the qdisk; even after 2 nodes stop communicating
with the 3rd node, the cluster would have 2 votes (1 from the qdisk + 1 from the 3rd node), which is still
more than half of the vote count for a 3-node cluster. Now both the inactive nodes would be fenced and
your 3rd node would still be up and running as a part of the cluster.
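A rough configuration sketch (the device, label, heuristic and timing values are illustrative, not from this document) :
# mkqdisk -c /dev/sdd1 -l myqdisk (to initialize the shared partition as a quorum disk with the label myqdisk)
# service qdiskd start (on every node, after adding the quorumd section to cluster.conf)
A matching cluster.conf fragment could look like :
<quorumd interval="1" tko="10" votes="1" label="myqdisk">
<heuristic program="ping -c1 -w1 192.168.0.1" score="1" interval="2"/>
</quorumd>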
15. What is rgmanager in Red Hat Cluster and its use?
This is a service termed the Resource Group Manager.
RGManager manages and provides failover capabilities for collections of cluster resources called services,
resource groups, or resource trees.
It allows administrators to define, configure, and monitor cluster services. In the event of a node failure,
rgmanager will relocate the clustered service to another node with minimal service disruption.
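For example (the service and node names are placeholders), rgmanager-controlled services are checked and relocated with :
# clustat (to see the cluster members and the state of each clustered service)
# clusvcadm -r <service name> -m <member node> (to relocate the service to the given node)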
16. What is luci and ricci in Red Hat Cluster?
luci is the server component of the Conga administration utility
Conga is an integrated set of software components that provides centralized configuration and management of
Red Hat clusters and storage
luci is a server that runs on one computer and communicates with multiple clusters and computers via ricci
storage fencing — A fencing method that disables the Fibre Channel port that connects storage to an
inoperable node.
Other fencing — Several other fencing methods that disable I/O or power of an inoperable node,
including IBM Bladecenters, PAP, DRAC/MC, HP ILO, IPMI, IBM RSA II, and others.
25. What are the lock states in Red Hat Cluster?
A lock state indicates the current status of a lock request. A lock is always in one of three states:
Granted — The lock request succeeded and attained the requested mode.
Converting — A client attempted to change the lock mode and the new mode is incompatible with an
existing lock.
Blocked — The request for a new lock could not be granted because conflicting locks exist.
A lock's state is determined by its requested mode and the modes of the other locks on the same
resource.
26. What is DLM lock model?
DLM is an abbreviation for Distributed Lock Manager.
A lock manager acts like a traffic cop that controls access to resources in the cluster, such as access to a GFS file
system.
GFS2 uses locks from the lock manager to synchronize access to file system metadata (on shared storage).
CLVM uses locks from the lock manager to synchronize updates to LVM volumes and volume groups (also on
shared storage).
In addition, rgmanager uses DLM to synchronize service states.
Without a lock manager, there would be no control over access to your shared storage, and the nodes in the
cluster would corrupt each other's data.
The top program provides a dynamic real-time view of a running system. It can display system summary
information as well as a list of tasks currently being managed by the Linux kernel. The types of system
summary information shown and the types, order and size of information displayed for tasks are all user
configurable and that configuration can be made persistent across restarts.
1. Without any arguments :
# top
top - 17:51:07 up 1 day, 2:56, 27 users, load average: 5.33, 29.71, 28.33
Tasks: 1470 total, 1 running, 1469 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 264114424k total, 253006956k used, 11107468k free, 66964k buffers
Swap: 33554424k total, 3260k used, 33551164k free, 245826024k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1960 deepak 15 0 30452 3220 1540 R 2.3 0.0 0:00.78 top
2457 root 11 -5 0 0 0 S 2.3 0.0 11:36.93 kacpid
2493 pmartprd 16 0 1397m 289m 9.8m S 0.3 0.1 18:36.07 pmrepagent
4639 pmartprd 15 0 787m 54m 4080 S 0.3 0.0 5:19.55 pmserver
14402 root RT 0 151m 5256 2872 S 0.3 0.0 1:41.40 multipathd
17886 root 10 -5 0 0 0 S 0.3 0.0 0:07.41 kondemand/11
Generally we use top without any arguments, but the magic is mostly done with the interactive top command
keys, which most of us skip. Before taking you to that part, let me explain the various system-related
fields shown by the top command.
NOTE: You can enable or disable the load average line (the first line below) by pressing "l" once top is running.
top - 17:51:07 up 1 day, 2:56, 27 users, load average: 5.33, 29.71, 28.33
Explanation: This line tells you the uptime of the system along with the load average values (averaged over the last 1, 5 and 15 minutes).
NOTE: You can enable/disable the Tasks and Cpu(s) lines (shown below) by pressing "t".
Tasks: 1470 total, 1 running, 1469 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 99.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Explanation: These lines give a brief summary of all the tasks currently running/sleeping/stopped in the
system, along with the CPU usage. The CPU usage fields have the following meanings:
Value   Meaning
us      user cpu time (or) % CPU time spent in user space
sy      system cpu time (or) % CPU time spent in kernel space
ni      user nice cpu time (or) % CPU time spent on low priority processes
id      idle cpu time (or) % CPU time spent idle
wa      io wait cpu time (or) % CPU time spent in wait (on disk)
hi      hardware irq (or) % CPU time spent servicing/handling hardware interrupts
si      software irq (or) % CPU time spent servicing/handling software interrupts
st      steal time (or) % CPU time in involuntary wait by the virtual cpu while the hypervisor is servicing
        another processor, i.e., % CPU time stolen from a virtual machine
2. Arrange Tasks with High to Low CPU Usage.
Press "P" or "shift+p" once top is running to arrange all the tasks with high to low CPU usage as
shown below.
PID   USER     PR  NI  VIRT   RES   SHR  S  %CPU  %MEM    TIME+   COMMAND
9663  stmprd   22   0  902m   301m  9888 S 2578.3  0.1   2:27.04  java
32117 etlprd   18  -1  32416  5908  1716 R    6.2  0.0   0:04.84  cleanup_dirfile
10053 root     18  -1  27100  1936  1460 S    4.9  0.0   0:00.15  ps
5456  pmartprd 16   0  1182m  130m  8560 S    3.9  0.1  38:39.72  pmserver
17492 deepak   16   0  30592  3388  1544 R    3.6  0.0   0:17.11  top
2843  pmartprd 15   0  730m   48m   4052 S    3.3  0.0   4:40.33  pmserver
2457  root     11  -5  0      0     0    S    2.9  0.0  11:42.39  kacpid
3731  tdmsprd  15   0  370m   49m   32m  S    2.3  0.0   0:00.64  pmdtm.orig
3. Arrange Tasks with High to Low Memory Usage.
Press "M" or "shift+m"once top is running to arrange all the tasks with High to Low Memory Usage as
shown below.
top - 18:04:26 up 1 day, 3:09, 27 users, load average: 37.12, 34.56, 33.44
Tasks: 1676 total, 1 running, 1675 sleeping, 0 stopped, 0 zombie
Cpu(s): 2.3%us, 76.7%sy, 0.0%ni, 19.7%id, 1.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 264114424k total, 262605184k used, 1509240k free, 77924k buffers
Swap: 33554424k total, 3256k used, 33551168k free, 252198368k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+
COMMAND
1852 pmartprd 18 0 2005m 319m 4776 S 6.9 4.1 28:34.32
java
2493 pmartprd 16 0 1397m 289m 9.8m S 0.0 4.0 18:37.79
pmrepagent
20557 etlprd 15 0 911m 201m 3024 S 0.0 3.0 17:09.02
pmdtm.orig
18778 root RT 0 286m 188m 156m S 0.0 2.1 13:24.98
aisexec
5456 pmartprd 15 0 1182m 130m 8560 S 6.2 1.1 38:40.58
pmserver
16004 etlprd 14 -1 179m 83m 2636 S 0.0 0.1 9:41.36 db2bp
11272 stmprd 25 0 906m 67m 9736 S 99.7 0.0 0:48.11 java
4. Change the nice value (priority) of any task
(To understand what a nice value is, see the separate article "What is nice and how to change the priority
of any process in Linux?".)
Press "r" when top is running on the terminal. You should get a prompt as shown below in blue color.
top - 18:08:38 up 115 days, 8:44, 4 users, load average: 0.03, 0.03, 0.00
Tasks: 325 total, 2 running, 323 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 6.4%sy, 0.0%ni, 93.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 49432728k total, 2063848k used, 47368880k free, 310072k buffers
Swap: 2097144k total, 0k used, 2097144k free, 1297572k cached
PID to renice: 1308 [Hit Enter]
PID   USER   PR  NI  VIRT   RES   SHR  S  %CPU  %MEM    TIME+   COMMAND
5359  root   39  19  0      0     0    R 100.1   0.0  9431:35   kipmi0
1308  deepak 16   0  29492  2292  1512 S   0.7   0.0   0:00.33  top
6116  root   15   0  369m   30m   11m  S   0.7   0.1  77:24.97  cimserver
Give the PID whose nice value has to be changed and hit "Enter". Then give the new nice value for that PID and hit "Enter" again.
top - 18:08:38 up 115 days, 8:44, 4 users, load average: 0.03, 0.03, 0.00
Tasks: 325 total, 2 running, 323 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 6.4%sy, 0.0%ni, 93.3%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 49432728k total, 2063848k used, 47368880k free, 310072k buffers
Swap: 2097144k total, 0k used, 2097144k free, 1297572k cached
Renice PID 1308 to value: -1 [Hit Enter]
PID   USER   PR  NI  VIRT   RES   SHR  S  %CPU  %MEM    TIME+   COMMAND
5359  root   39  19  0      0     0    R 100.1   0.0  9431:35   kipmi0
1308  deepak 16   0  29492  2292  1512 S   0.7   0.0   0:00.33  top
6116  root   15   0  369m   30m   11m  S   0.7   0.1  77:24.97  cimserver
Verify the changes :
top - 18:09:06 up 115 days, 8:45, 4 users, load average: 0.13, 0.06, 0.01
Tasks: 325 total, 1 running, 324 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.0%us, 0.1%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 49432728k total, 2063276k used, 47369452k free, 310072k buffers
Swap: 2097144k total, 0k used, 2097144k free, 1297588k cached
PID   USER   PR  NI  VIRT   RES   SHR  S  %CPU  %MEM    TIME+   COMMAND
1308  deepak 15  -1  29492  2292  1512 S   0.7   0.0   0:00.42  top
5359  root   34  19  0      0     0    S   0.7   0.0  9431:42   kipmi0
1     root   15   0  10352  692   580  S   0.0   0.0   0:02.16  init
2     root   RT  -5  0      0     0    S   0.0   0.0   0:02.37  migration/0
3     root   34  19  0      0     0    S   0.0   0.0   0:00.00  ksoftirqd/
5. Kill any task
Press "k" on the terminal when top is running. You should get a prompt as shown below in blue color
top - 18:09:31 up 115 days, 8:45, 4 users, load average: 0.08, 0.05, 0.01
Tasks: 325 total, 1 running, 324 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.1%us, 0.1%sy, 0.0%ni, 99.8%id, 0.1%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 49432728k total, 2062036k used, 47370692k free, 310072k buffers
Swap: 2097144k total, 0k used, 2097144k free, 1297596k cached
PID to kill: 1308
PID   USER   PR  NI  VIRT   RES   SHR  S  %CPU  %MEM    TIME+   COMMAND
5359  root   34  19  0      0     0    S   1.3   0.0  9431:42   kipmi0
6460  root   24   0  179m   30m   3976 S   1.0   0.1  79:04.77  java
1308  deepak 15  -1  29492  2292  1512 S   0.7   0.0   0:00.49  top
1434  root   15   0  29492  2288  1516 R   0.7   0.0   0:00.13  top
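The same renice and kill operations can also be done outside top (using the example PID 1308 from above) :
# renice -1 -p 1308 (change the nice value of PID 1308 to -1; a negative value requires root)
# kill -15 1308 (send SIGTERM to PID 1308)
# kill -9 1308 (send SIGKILL if the process does not exit on SIGTERM)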