2-Week2 Osa Theory
WEEK-2
Virtualization is technology that allows you to create multiple simulated environments or dedicated
resources from a single, physical hardware system.
Hypervisor is a software which connects directly to the hardware and allows you to split one system
into separate, distinct, and secure environments known as virtual machines (VMs).
A VM is an operating system installed on top of hypervisor software.
These VMs rely on the hypervisor’s ability to separate the machine’s resources from the hardware
and distribute them appropriately.
The physical hardware, equipped with a hypervisor, is called the host, while the many VMs that use
its resources are guests.
These guests treat computing resources—like CPU, memory, and storage—as a pool of resources
that can easily be relocated.
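A quick way to see whether a host can play this role is to check for hardware virtualization extensions on a Linux machine (Intel VT-x shows up as the vmx CPU flag, AMD-V as svm); a minimal sketch:

    # Count CPU cores that advertise hardware virtualization extensions
    # (a non-zero count means the host can run a hardware-assisted hypervisor)
    egrep -c '(vmx|svm)' /proc/cpuinfo

    # lscpu summarizes the same information in its "Virtualization" field
    lscpu | grep -i virtualization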
Benefits of Virtualization
Partitioning: Run multiple operating systems on one physical machine.
Divide system resources between virtual machines.
Isolation: Virtual machines are completely isolated from the host machine and other virtual
machines. If a virtual machine crashes, all others are unaffected. Data does not leak across virtual
machines and applications can only communicate over configured network connections.
Encapsulation: The complete virtual machine environment is saved as a single file, making it
easy to back up, move, and copy. Standardized virtualized hardware is presented to the
application, guaranteeing compatibility.
Reduced heat and improved energy savings. Companies that use a lot of hardware servers risk
overheating their physical resources. The best way to prevent this from happening is to decrease the
number of servers used for data management, and the best way to do this is through virtualization.
Better for the environment. Companies and data centers that utilize abundant amounts of hardware
leave a large carbon footprint. Virtualization can help reduce these effects by significantly
decreasing the necessary amounts of cooling and power, thus helping clean the air and the
atmosphere. As a result, companies and data centers that virtualize will improve their reputation
while also enhancing the quality of their relationship with customers and the planet.
Easier migration to the cloud. Virtualization brings companies closer to experiencing a completely
cloud-based environment. Virtual machines may even be deployed from the data center in order to
build a cloud-based infrastructure. The ability to embrace a cloud-based mindset with virtualization
makes migrating to the cloud even easier.
Lack of vendor dependency. Virtual machines are agnostic to the underlying hardware
configuration. As a result, virtualizing hardware and software means that a company does not
need to depend on a single vendor for these physical resources.
How does virtualization work?
Software called hypervisors separate the physical resources from the virtual environments—the
things that need those resources.
Hypervisors can sit on top of an operating system (like on a laptop) or be installed directly onto
hardware (like a server), which is how most enterprises virtualize.
Hypervisors take your physical resources and divide them up so that virtual environments can use
them.
Resources are partitioned as needed from the physical environment to the many virtual
environments.
Users interact with and run computations within the virtual environment (typically called a guest
machine or virtual machine).
The virtual machine functions as a single data file. And like any digital file, it can be moved from
one computer to another.
When the virtual environment is running and a user or program issues an instruction that requires
additional resources from the physical environment, the hypervisor relays the request to the physical
system and caches the changes—which all happens at close to native speed.
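As a concrete sketch of this flow on Linux, the KVM/QEMU hypervisor can carve CPU and memory out of the host's pool for a guest; the disk image and installer names below are placeholders:

    # Create a 10 GB virtual disk for the guest (placeholder file name)
    qemu-img create -f qcow2 guest-disk.qcow2 10G

    # Boot a guest with 2 vCPUs and 2 GB of RAM taken from the host,
    # using KVM so instructions run at close to native speed
    qemu-system-x86_64 -enable-kvm -smp 2 -m 2048 \
        -hda guest-disk.qcow2 -cdrom installer.iso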
Types of Virtualization
Desktop Virtualization
Desktop virtualization allows a central administrator (or automated administration tool) to deploy
simulated desktop environments to hundreds of physical machines at once.
Unlike traditional desktop environments that are physically installed, configured, and updated on
each machine, desktop virtualization allows admins to perform mass configurations, updates, and
security checks on all virtual desktops.
Server virtualization
Servers are computers designed to process a high volume of specific tasks really well so other
computers—like laptops and desktops—can do a variety of other tasks.
Virtualizing a server lets it do more of those specific functions and involves partitioning it so
that its components can be used to serve multiple functions.
Operating system virtualization
Fig: OS Virtualization
Operating system virtualization happens at the kernel—the central task manager of the
operating system.
It’s a useful way to run Linux and Windows environments side-by-side.
Enterprises can also push virtual operating systems to computers.
Increases security, since all virtual instances can be monitored and isolated.
Limits time spent on IT services like software updates.
Network functions virtualization
Network functions virtualization (NFV) separates a network's key functions (like directory services,
file sharing, and IP configuration) so they can be distributed among environments.
Once software functions are independent of the physical machines they once lived on, specific
functions can be packaged together into a new network and assigned to an environment.
Virtualizing networks reduces the number of physical components—like switches, routers, servers,
cables, and hubs—that are needed to create multiple, independent networks. It's particularly
popular in the telecommunications industry.
Storage Virtualization
Storage virtualization is widely used in data centers: large pools of physical storage are
abstracted so that administrators can create, delete, and allocate storage to different hardware.
This allocation is done over a network connection; the leading technology here is the SAN
(Storage Area Network).
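On a single Linux host the same pooling idea can be sketched with LVM (the Logical Volume Manager), which aggregates physical disks and hands out virtual volumes; the device names and sizes below are placeholders:

    # Register physical disks as members of the storage pool (placeholder devices)
    pvcreate /dev/sdb /dev/sdc

    # Group them into one volume group named "pool"
    vgcreate pool /dev/sdb /dev/sdc

    # Carve a 50 GB logical volume out of the pool and format it for use
    lvcreate -L 50G -n data pool
    mkfs.ext4 /dev/pool/data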
Virtual Machines
A Virtual Machine is the representation of a physical machine by software. It has its own set of
virtual hardware (e.g., RAM, CPU, NIC, hard disks, etc.) upon which an operating system and
applications are loaded.
The operating system sees a consistent, normalized set of hardware regardless of the actual physical
hardware components.
Software called a hypervisor separates the machine’s resources from the hardware and provisions
them appropriately so they can be used by the VM.
Virtual machines are not emulators or simulators. They are real machines that can do the same
things physical computers can do and more.
Because of the flexibility of virtual machines, physical machines become less a way to provide
services (application, databases etc.) and more a way to house the virtual machines that provide
those resources.
Servers are easier to manage when they run as virtual machines.
Virtual machines also reduce the cost of hardware, maintenance, and environment support.
Containers
o Linux containers and virtual machines (VMs) are packaged computing environments that combine
various IT components and isolate them from the rest of the system. Their main differences are in terms
of scale and portability.
o Containers are typically measured by the megabyte. They don’t package anything bigger than an app
and all the files necessary to run, and are often used to package single functions that perform specific
tasks (known as a microservice). The lightweight nature of containers—and their shared operating
system (OS)—makes them very easy to move across multiple environments.
o VMs are typically measured by the gigabyte.
o They usually contain their own OS, allowing them to perform multiple resource-intensive functions at
once.
o The increased resources available to VMs allow them to abstract, split, duplicate, and emulate entire
servers, OSs, desktops, databases, and networks.
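A hedged illustration of the scale difference, using Docker as the container engine (the alpine image is just an example):

    # Pull and run a minimal Linux container: a few MB, sharing the host kernel
    docker run -it --rm alpine sh

    # Container images are MB-scale; VM disk images are typically GB-scale
    docker images alpine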
Linux boot process
The following are the 6 high level stages of a typical Linux boot process.
1. BIOS
BIOS stands for Basic Input/Output System
Performs some system integrity checks
Searches, loads, and executes the boot loader program.
It looks for the boot loader on floppy, CD-ROM, or hard drive. You can press a key (typically
F12 or F2, but it depends on your system) during BIOS startup to change the boot sequence.
Once the boot loader program is detected and loaded into memory, BIOS gives control to it.
So, in simple terms BIOS loads and executes the MBR boot loader.
2. MBR
MBR stands for Master Boot Record.
It is located in the 1st sector of the bootable disk. Typically /dev/hda, or /dev/sda
The MBR is 512 bytes in size and has three components:
1) primary boot loader code in the first 446 bytes
2) partition table information in the next 64 bytes
3) MBR validation check (boot signature) in the last 2 bytes
It contains information about GRUB (or LILO in old systems).
So, in simple terms MBR loads and executes the GRUB boot loader.
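To make this concrete, the MBR can be read directly (read-only) with standard tools; /dev/sda is assumed to be the boot disk:

    # Dump the first 512-byte sector of the boot disk (run as root)
    dd if=/dev/sda of=mbr.bin bs=512 count=1

    # The last two bytes should be the 0x55 0xAA boot signature
    xxd mbr.bin | tail -2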
3. GRUB
GRUB stands for Grand Unified Bootloader.
If you have multiple kernel images installed on your system, you can choose which one to be
executed.
GRUB displays a splash screen and waits for a few seconds; if you don't enter anything, it loads
the default kernel image specified in the GRUB configuration file.
GRUB has the knowledge of the filesystem (the older Linux loader LILO didn’t understand
filesystem).
The GRUB configuration file is /boot/grub/grub.conf (/etc/grub.conf is a link to it). The
following is a sample grub.conf from CentOS.
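The sketch below shows the typical shape of a legacy-GRUB grub.conf; the kernel version and volume names are placeholders, not taken from a real system:

    default=0
    timeout=5
    splashimage=(hd0,0)/grub/splash.xpm.gz
    hiddenmenu
    title CentOS (2.6.18-194.el5)
            root (hd0,0)
            kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00
            initrd /initrd-2.6.18-194.el5.img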
4. Kernel
Mounts the root file system as specified in the “root=” in grub.conf
Kernel executes the /sbin/init program
Since init was the first program to be executed by Linux Kernel, it has the process id (PID) of 1. (Do
a ‘ps -ef | grep init’ and check the pid.)
initrd stands for Initial RAM Disk.
initrd is used by the kernel as a temporary root file system until the kernel is booted and the
real root file system is mounted. It also contains the necessary drivers compiled in, which help
it access the hard drive partitions and other hardware.
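These facts can be checked on a running system (initrd/initramfs file names vary with the kernel version):

    # The first process started by the kernel holds PID 1
    # (init on older systems, systemd on modern distributions)
    ps -p 1 -o pid,comm

    # The kernel image and its initial RAM disk live under /boot
    ls /boot/vmlinuz-* /boot/initr*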
5. Init
Looks at the /etc/inittab file to decide the Linux run level.
Following are the available run levels (a way to query the current one is shown after this list):
0 – halt
1 – Single user mode
2 – Multiuser, without NFS
3 – Full multiuser mode
4 – Unused
5 – X11 (graphical)
6 – Reboot
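A minimal check, assuming a SysV-style system that uses /etc/inittab:

    # Show the previous and current run level
    runlevel
    who -r

    # The default run level is set by a line like "id:3:initdefault:"
    grep initdefault /etc/inittab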
Given the open nature of Bash, over time it has been adopted as the default shell on most Linux
systems.
First look at the command line
There are two types of users – the root or super user and normal users.
A root or super user can access all the files, while the normal user has limited access to files. In
computing, the superuser is a special user account used for system administration. Depending on the
operating system (OS), the actual name of this account might be root, administrator, admin or
supervisor.
Normal users are the users created by the root or another user with sudo privileges. Usually, a
normal user has a real login shell and a home directory. Each user has a numeric user ID called UID.
A superuser can add, delete, and modify user accounts. The full account information is stored in
the /etc/passwd file, and the hashed passwords are stored in /etc/shadow.
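A short illustrative session (the username alice is a placeholder):

    # Show the current user's UID, GID, and groups
    id

    # As a privileged user, create a normal user with a home directory and login shell
    sudo useradd -m -s /bin/bash alice

    # The account record lives in /etc/passwd; the password hash in /etc/shadow
    grep alice /etc/passwd
    sudo grep alice /etc/shadow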
Interpreters: A command line interpreter is any program that allows the entering of commands and then
executes those commands to the operating system. It's literally an interpreter of commands.
CLI over GUI
CLI : Command line Interface, allows the users to interact with the system using commands.
GUI : Graphical User Interface, allows the users to interact with the system using graphical elements such
as windows, icons, menus
Sl no | CLI | GUI
1 | CLI is difficult to use. | GUI is easy to use.
2 | It consumes low memory. | It consumes more memory.
3 | High precision can be obtained. | Low precision is obtained.
4 | CLI is faster than GUI. | GUI is slower than CLI.
5 | Its appearance cannot be modified or changed. | Its appearance can be modified or changed.
6 | A CLI operating system needs only a keyboard. | A GUI operating system needs both a mouse and a keyboard.
7 | Input is entered only at the command prompt. | Input can be entered anywhere on the screen.
8 | Information is presented to the user as plain text and files. | Information can be presented in any form: plain text, videos, images, etc.
9 | No menus are provided. | Menus are provided.
10 | There are no graphics. | Graphics are used.
11 | Pointing devices are not used. | Pointing devices are used for selecting and choosing items.
12 | Spelling mistakes and typing errors are not avoided. | Spelling mistakes and typing errors are avoided.
man -f ls (the -f option prints the manual section number and a one-line description for each matching page; it is equivalent to whatis ls)
Assignment Questions