Cloud Lab Record
NAME: ..……………….....
YEAR:……………………
DEPT: …......……………..
GOVERNMENT COLLEGE OF ENGINEERING,
TIRUNELVELI - 627 007
2022 - 2023
Register Number:
CERTIFICATE
Place: Tirunelveli
Date:
Ex:No.1 Study of Virtualization
VIRTUALIZATION
VIRTUAL MACHINE
A virtual machine, commonly shortened to VM, is no different from any other physical
computer such as a laptop, smartphone or server. It has a CPU, memory, and disks to store your
files, and it can connect to the internet if needed. While the parts that make up your computer
(called hardware) are physical and tangible, VMs are often thought of as virtual computers or
software-defined computers within physical servers, existing only as code.
TYPES OF VIRTUALIZATION
• Application virtualization
• Network virtualization
• Desktop virtualization
• Storage virtualization
• Server virtualization
• Data virtualization
APPLICATION VIRTUALIZATION
Application virtualization, also called application service virtualization, is a term under
the larger umbrella of virtualization. It refers to running an application on a thin client; a terminal
or a network workstation with few resident programs and accessing most programs residing on a
connected server.
NETWORK VIRTUALIZATION
Network virtualization combines hardware and software network resources and network
functionality into a single, software-based administrative entity called a virtual network, so that
networks can be created, provisioned and managed independently of the underlying physical
network hardware.
DESKTOP VIRTUALIZATION
Desktop virtualization is the concept of isolating a logical operating system (OS) instance
from the client that is used to access it. There are several different conceptual models of desktop
virtualization, which can broadly be divided into two categories based on whether or not the
operating system instance is executed locally or remotely.
STORAGE VIRTUALIZATION
Storage virtualization is the process of grouping the physical storage from multiple
network storage devices so that it looks like a single storage device. The process involves
abstracting and hiding the internal functions of a storage device from the host application, host
servers or a general network in order to facilitate application- and network-independent
management of storage.
SERVER VIRTUALIZATION
Server virtualization is the process of dividing a physical server into multiple unique and
isolated virtual servers by means of a software application. Each virtual server can run its own
operating system independently.
DATA VIRTUALIZATION
Data virtualization software acts as a bridge across multiple, diverse data sources,
bringing critical decision-making data together in one virtual place to fuel analytics. It provides
a modern data layer that enables users to access, combine, transform, and deliver datasets with
breakthrough speed and cost-effectiveness. Data virtualization technology gives users fast access
to data housed throughout the enterprise, including in traditional databases, big data sources, and
cloud and IoT systems, at a fraction of physical warehousing and extract/transform/load (ETL)
time and cost.
VIRTUALIZATION SOFTWARE
VMware
VMware, Inc. is a software company, well known in the field of system virtualization and
cloud computing. VMware's software allows users to create multiple virtual environments, or
virtual computer systems, on a single computer or server. Essentially, one computer or server could
be used to host, or manage, many virtual computer systems, sometimes as many as one hundred or
more. The software virtualizes hardware components such as the video card, network adapters, and
hard drive.
Xen
Xen is an open-source type-1 (bare-metal) hypervisor that makes it possible to run many
instances of an operating system, or indeed different operating systems, in parallel on a single
machine. It is used as the basis for a number of commercial and open-source applications, such
as server virtualization and infrastructure-as-a-service (IaaS) platforms.
VIRTUAL BOX
VirtualBox is open-source software for virtualizing the x86 computing architecture. It acts
as a hypervisor, creating a VM (virtual machine) in which the user can run another OS (operating
system). The operating system on which VirtualBox runs is called the "host" OS. The operating
system running in the VM is called the "guest" OS. VirtualBox supports Windows, Linux, and
macOS as its host OS. When configuring a virtual machine, the user can specify how many CPU
cores, and how much RAM and disk space, should be devoted to the VM.
KVM
KVM is a full virtualization solution for Linux on x86 hardware containing virtualization
extensions (Intel VT or AMD-V). It consists of a loadable kernel module, kvm.ko, that provides
the core virtualization infrastructure and a processor specific module, kvm-intel.ko or kvm-
amd.ko. Using KVM, one can run multiple virtual machines running unmodified Linux or
Windows images. Each virtual machine has private virtualized hardware: a network card, disk,
graphics adapter, etc. KVM is open source software.
RESULT
Thus virtualization, its types, and common virtualization software were studied.
Ex:No.2 Installing VMware Workstation
AIM
To install VirtualBox/VMware Workstation with different flavours of Linux
or Windows OS on top of Windows 7 or 8.
PROCEDURE
6. In the End User Licence Agreement dialog box, check the "I accept the terms in the Licence
Agreement" box and press Next to continue.
7. Select the folder in which you would like to install the application. Also select the Enhanced
Keyboard Driver check box.
8. Select "Check for Updates" and "Help improve VMware Workstation Pro".
9. Select where you want the shortcut icons to be placed on your system to launch the
application. Select both options, Desktop and Start Menu, and click Next.
10. Click Install to start the installation process.
11. In the installation complete dialog box, click Finish.
12. VMware Workstation is now installed.
RESULT:
Thus VMware Workstation was installed with a guest operating system successfully.
Ex:No.3 Installing C compiler in the virtual machine
AIM
To Install a C compiler in the virtual machine using the virtual box and execute
simple programs.
PROCEDURE
Step 1: Open Terminal (Applications-Accessories-Terminal).
Step 2: Open gedit by typing “gedit &” on terminal (You can also use any other Text Editor
application).
Step 3: Type the following on gedit (or any other text editor):
#include <stdio.h>
int main()
{
    printf("Hello World\n");
    return 0;
}
Step 4: Save this file as "helloworld.c".
Step 5: Type “ls” on Terminal to see all files under current folder.
Step 6: Confirm that "helloworld.c" is in the current directory. If not, type cd
DIRECTORY_PATH to go to the directory that has "helloworld.c".
Step 7: Type "gcc helloworld.c" to compile, and type "ls" to confirm that a new executable file
"a.out" is created.
Step 8 : Type “./a.out” on Terminal to run the program.
Step 9: If you see “Hello World” on the next line, you just successfully ran your first
C program.
RESULT
Thus a C compiler was installed in the virtual machine and a simple program was
executed successfully.
Ex.No:4a Creating Web Applications Using Google App Engine with command prompt
AIM
To install Google App Engine and create hello world app and other simple web
applications using Python.
PROCEDURE
5. Download the cloud SDK for windows (https://www.edureka.co/community/82162/how-to-
install-the-google-cloud-sdk-in-windows-system)
9. Log in to gcloud and get authenticated (the login page will be opened in the browser).
10. Download Python 2.7 from python.org and install it.
Project Creation
1. Create a folder with name of your choice (my_project)
2. Create the Python file for the application (index.py):
print '---------------- CLOUD COMPUTING ------------------'
3. Create an app.yaml file, taking the content from
https://cloud.google.com/appengine/docs/standard/python/config/appref, for configuring the
application with Google Cloud:
runtime: python27
api_version: 1
threadsafe: false
handlers:
- url: /
  script: index.py
(note: two spaces of indentation before script: index.py)
4. Open the Cloud SDK shell as administrator and run the app.yaml file with the following
command, giving the path of the application within double quotation marks:
google-cloud-sdk\bin\dev_appserver.py "myproject/app.yaml"
5. After execution, go to the browser, open http://localhost:8080 and view the result.
RESULT
Thus the Web Application using python program was created and executed
using Google App Engine.
Ex.No: 4b Creating Web Applications Using Google App Engine with launcher
AIM
To install Google App Engine and create hello world app and other simple web
applications using Python with launcher.
PROCEDURE
8. On the Google App Engine Launcher, click on the application that has been created.
9. Click "Run" from the menu.
10. The application will be running on the given port, e.g. port 14080.
11. Click on "Browse" and the user will be redirected to the web browser.
OUTPUT
RESULT
Thus the Google App Engine launcher was installed and hello world app was created using
Python successfully.
Ex:No.5 SIMULATION OF SJF SCHEDULING ALGORITHM IN CLOUDSIM
AIM
To simulate a cloud scenario using CloudSim and run a scheduling algorithm that is
not present in CloudSim.
PROCEDURE
11. Submit the cloudlet list to the broker:
broker.submitCloudletList(cloudletList);
12. Start the simulation.
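The heart of the SJF policy implemented below is picking the shortest remaining cloudlet first. As a minimal, self-contained sketch of that selection loop (plain Java, no CloudSim dependency; the class name SjfSketch and the example lengths are illustrative, not part of the program below):

```java
import java.util.ArrayList;
import java.util.List;

public class SjfSketch {
    // Repeatedly pick the smallest remaining length, mirroring the
    // selection loop in the modified submitCloudlets() method.
    static List<Integer> sjfOrder(List<Integer> lengths) {
        List<Integer> pending = new ArrayList<>(lengths);
        List<Integer> order = new ArrayList<>();
        while (!pending.isEmpty()) {
            int smallest = pending.get(0);
            for (int len : pending) {
                if (len < smallest) smallest = len;
            }
            order.add(smallest);
            pending.remove(Integer.valueOf(smallest));
        }
        return order;
    }

    public static void main(String[] args) {
        // Hypothetical cloudlet lengths (MI); SJF dispatches shortest first
        System.out.println(sjfOrder(List.of(2400, 1100, 1900, 1300)));
        // prints [1100, 1300, 1900, 2400]
    }
}
```

In the full program this ordering is applied to Cloudlet objects by comparing getCloudletLength(), and the sorted list is then dispatched to VMs round-robin.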
PROGRAM
Simulation.java:
package taskchpack;
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import java.util.Random;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerSpaceShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
// Note: org.cloudbus.cloudsim.DatacenterBroker is NOT imported here, so that
// the SJF-modified taskchpack.DatacenterBroker in this package is used.
public class Simulation {
    private static List<Cloudlet> cloudletList;
    private static List<Vm> vmlist;
    private static List<Vm> createVM(int userId, int vms) {
        LinkedList<Vm> list = new LinkedList<Vm>();
        long size = 10000; // image size (MB)
        int ram = 512; // vm memory (MB)
        int mips = 1000;
        long bw = 1000;
        int pesNumber = 1; // number of cpus
        String vmm = "Xen"; // VMM name
        Vm[] vm = new Vm[vms];
        for (int i = 0; i < vms; i++) {
            // create a VM with a space-shared scheduling policy for cloudlets
            vm[i] = new Vm(i, userId, mips, pesNumber, ram, bw, size, vmm,
                    new CloudletSchedulerSpaceShared());
            list.add(vm[i]);
        }
        return list;
    }
    private static List<Cloudlet> createCloudlet(int userId, int cloudlets) {
        LinkedList<Cloudlet> list = new LinkedList<Cloudlet>();
        long length = 1000;
        long fileSize = 300;
        long outputSize = 300;
        int pesNumber = 1;
        UtilizationModel utilizationModel = new UtilizationModelFull();
        Cloudlet[] cloudlet = new Cloudlet[cloudlets];
        for (int i = 0; i < cloudlets; i++) {
            Random r = new Random();
            // random lengths so that the SJF broker has something to sort
            cloudlet[i] = new Cloudlet(i, length + r.nextInt(2000), pesNumber, fileSize,
                    outputSize, utilizationModel, utilizationModel, utilizationModel);
            cloudlet[i].setUserId(userId);
            list.add(cloudlet[i]);
        }
        return list;
    }
    ////////////////////////// STATIC METHODS ///////////////////////
    public static void main(String[] args) {
        Log.printLine("Starting CloudSimExample6...");
        try {
            int num_user = 3; // number of grid users
            Calendar calendar = Calendar.getInstance();
            boolean trace_flag = false; // trace events
            CloudSim.init(num_user, calendar, trace_flag);
            Datacenter datacenter0 = createDatacenter("Datacenter_0");
            Datacenter datacenter1 = createDatacenter("Datacenter_1");
            DatacenterBroker broker = createBroker();
            int brokerId = broker.getId();
            vmlist = createVM(brokerId, 10); // creating 10 vms
            cloudletList = createCloudlet(brokerId, 40); // creating 40 cloudlets
            broker.submitVmList(vmlist);
            broker.submitCloudletList(cloudletList);
            CloudSim.startSimulation();
            // Final step: print results when simulation is over
            List<Cloudlet> newList = broker.getCloudletReceivedList();
            CloudSim.stopSimulation();
            printCloudletList(newList);
            // Print the debt of each user to each datacenter
            //datacenter0.printDebts();
            //datacenter1.printDebts();
            Log.printLine("CloudSimExample6 finished!");
        } catch (Exception e) {
            e.printStackTrace();
            Log.printLine("The simulation has been terminated due to an unexpected error");
        }
    }
    private static Datacenter createDatacenter(String name) {
        List<Host> hostList = new ArrayList<Host>();
        List<Pe> peList1 = new ArrayList<Pe>();
        int mips = 1000;
        peList1.add(new Pe(0, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(1, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(2, new PeProvisionerSimple(mips)));
        peList1.add(new Pe(3, new PeProvisionerSimple(mips)));
        List<Pe> peList2 = new ArrayList<Pe>();
        peList2.add(new Pe(0, new PeProvisionerSimple(mips)));
        peList2.add(new Pe(1, new PeProvisionerSimple(mips)));
        int hostId = 0;
        int ram = 2048; // host memory (MB)
        long storage = 1000000; // host storage
        int bw = 10000;
        hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList1,
                new VmSchedulerTimeShared(peList1)));
        hostId++;
        hostList.add(new Host(hostId, new RamProvisionerSimple(ram),
                new BwProvisionerSimple(bw), storage, peList2,
                new VmSchedulerTimeShared(peList2)));
        String arch = "x86";
        String os = "Linux";
        String vmm = "Xen";
        double time_zone = 10.0;
        double cost = 3.0;
        double costPerMem = 0.05;
        double costPerStorage = 0.1;
        double costPerBw = 0.1;
        LinkedList<Storage> storageList = new LinkedList<Storage>();
        DatacenterCharacteristics characteristics = new DatacenterCharacteristics(arch,
                os, vmm, hostList, time_zone, cost, costPerMem, costPerStorage, costPerBw);
        Datacenter datacenter = null;
        try {
            datacenter = new Datacenter(name, characteristics,
                    new VmAllocationPolicySimple(hostList), storageList, 0);
        } catch (Exception e) {
            e.printStackTrace();
        }
        return datacenter;
    }
    private static DatacenterBroker createBroker() {
        DatacenterBroker broker = null;
        try {
            broker = new DatacenterBroker("Broker");
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
        return broker;
    }
    private static void printCloudletList(List<Cloudlet> list) {
        int size = list.size();
        Cloudlet cloudlet;
        String indent = "    ";
        Log.printLine();
        Log.printLine("========== OUTPUT ==========");
        Log.printLine("Cloudlet ID" + indent + "STATUS" + indent + "Data center ID"
                + indent + "VM ID" + indent + indent + "Time" + indent + "Start Time"
                + indent + "Finish Time" + indent + "User ID");
        DecimalFormat dft = new DecimalFormat("###.##");
        for (int i = 0; i < size; i++) {
            cloudlet = list.get(i);
            Log.print(indent + cloudlet.getCloudletId() + indent + indent);
            if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
                Log.print("SUCCESS");
                Log.printLine(indent + indent + cloudlet.getResourceId()
                        + indent + indent + indent + cloudlet.getVmId()
                        + indent + indent + indent + dft.format(cloudlet.getActualCPUTime())
                        + indent + indent + dft.format(cloudlet.getExecStartTime())
                        + indent + indent + indent + dft.format(cloudlet.getFinishTime())
                        + indent + cloudlet.getUserId());
            }
        }
    }
}
DatacenterBroker.java:
package taskchpack;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.core.CloudSimTags;
import org.cloudbus.cloudsim.core.SimEntity;
import org.cloudbus.cloudsim.core.SimEvent;
import org.cloudbus.cloudsim.lists.CloudletList;
import org.cloudbus.cloudsim.lists.VmList;
public class DatacenterBroker extends SimEntity {
protected List<? extends Vm> vmList;
protected List<? extends Vm> vmsCreatedList;
protected List<? extends Cloudlet> cloudletList;
protected List<? extends Cloudlet> cloudletSubmittedList;
protected List<? extends Cloudlet> cloudletReceivedList;
protected int cloudletsSubmitted;
protected int vmsRequested;
protected int vmsAcks;
protected int vmsDestroyed;
protected List<Integer> datacenterIdsList;
protected List<Integer> datacenterRequestedIdsList;
protected Map<Integer, Integer> vmsToDatacentersMap;
protected Map<Integer, DatacenterCharacteristics> datacenterCharacteristicsList;
public DatacenterBroker(String name) throws Exception {
super(name);
setVmList(new ArrayList<Vm>());
setVmsCreatedList(new ArrayList<Vm>());
setCloudletList(new ArrayList<Cloudlet>());
setCloudletSubmittedList(new ArrayList<Cloudlet>());
setCloudletReceivedList(new ArrayList<Cloudlet>());
cloudletsSubmitted = 0;
setVmsRequested(0);
setVmsAcks(0);
setVmsDestroyed(0);
setDatacenterIdsList(new LinkedList<Integer>());
setDatacenterRequestedIdsList(new ArrayList<Integer>());
setVmsToDatacentersMap(new HashMap<Integer, Integer>());
setDatacenterCharacteristicsList(new HashMap<Integer, DatacenterCharacteristics>());
}
public void submitVmList(List<? extends Vm> list) {
getVmList().addAll(list);
}
public void submitCloudletList(List<? extends Cloudlet> list) {
getCloudletList().addAll(list);
}
public void bindCloudletToVm(int cloudletId, int vmId) {
    CloudletList.getById(getCloudletList(), cloudletId).setVmId(vmId);
}
@Override
public void processEvent(SimEvent ev) {
    switch (ev.getTag()) {
    case CloudSimTags.RESOURCE_CHARACTERISTICS_REQUEST:
        processResourceCharacteristicsRequest(ev);
        break;
    case CloudSimTags.RESOURCE_CHARACTERISTICS:
        processResourceCharacteristics(ev);
        break;
    case CloudSimTags.VM_CREATE_ACK:
        processVmCreate(ev);
        break;
    case CloudSimTags.CLOUDLET_RETURN:
        processCloudletReturn(ev);
        break;
    // if the simulation finishes
    case CloudSimTags.END_OF_SIMULATION:
        shutdownEntity();
        break;
    // other unknown tags are processed by this method
    default:
        processOtherEvent(ev);
        break;
    }
}
protected void processResourceCharacteristics(SimEvent ev) {
    DatacenterCharacteristics characteristics = (DatacenterCharacteristics) ev.getData();
    getDatacenterCharacteristicsList().put(characteristics.getId(), characteristics);
    if (getDatacenterCharacteristicsList().size() == getDatacenterIdsList().size()) {
        setDatacenterRequestedIdsList(new ArrayList<Integer>());
        createVmsInDatacenter(getDatacenterIdsList().get(0));
    }
}
protected void processResourceCharacteristicsRequest(SimEvent ev) {
    setDatacenterIdsList(CloudSim.getCloudResourceList());
    setDatacenterCharacteristicsList(new HashMap<Integer, DatacenterCharacteristics>());
    Log.printLine(CloudSim.clock() + ": " + getName() + ": Cloud Resource List received with "
            + getDatacenterIdsList().size() + " resource(s)");
    for (Integer datacenterId : getDatacenterIdsList()) {
        sendNow(datacenterId, CloudSimTags.RESOURCE_CHARACTERISTICS, getId());
    }
}
protected void processVmCreate(SimEvent ev) {
    int[] data = (int[]) ev.getData();
    int datacenterId = data[0];
    int vmId = data[1];
    int result = data[2];
    if (result == CloudSimTags.TRUE) {
        getVmsToDatacentersMap().put(vmId, datacenterId);
        getVmsCreatedList().add(VmList.getById(getVmList(), vmId));
        Log.printLine(CloudSim.clock() + ": " + getName() + ": VM #" + vmId
                + " has been created in Datacenter #" + datacenterId + ", Host #"
                + VmList.getById(getVmsCreatedList(), vmId).getHost().getId());
    } else {
        Log.printLine(CloudSim.clock() + ": " + getName() + ": Creation of VM #" + vmId
                + " failed in Datacenter #" + datacenterId);
    }
    incrementVmsAcks();
    if (getVmsCreatedList().size() == getVmList().size() - getVmsDestroyed()) {
        submitCloudlets();
    } else {
        if (getVmsRequested() == getVmsAcks()) {
            // find id of the next datacenter that has not been tried
            for (int nextDatacenterId : getDatacenterIdsList()) {
                if (!getDatacenterRequestedIdsList().contains(nextDatacenterId)) {
                    createVmsInDatacenter(nextDatacenterId);
                    return;
                }
            }
            if (getVmsCreatedList().size() > 0) { // if some vms were created
                submitCloudlets();
            } else { // no vms created, abort
                Log.printLine(CloudSim.clock() + ": " + getName()
                        + ": none of the required VMs could be created. Aborting");
                finishExecution();
            }
        }
    }
}
protected void processCloudletReturn(SimEvent ev) {
    Cloudlet cloudlet = (Cloudlet) ev.getData();
    getCloudletReceivedList().add(cloudlet);
    Log.printLine(CloudSim.clock() + ": " + getName() + ": Cloudlet "
            + cloudlet.getCloudletId() + " received");
    cloudletsSubmitted--;
    if (getCloudletList().size() == 0 && cloudletsSubmitted == 0) {
        Log.printLine(CloudSim.clock() + ": " + getName()
                + ": All Cloudlets executed. Finishing...");
        clearDatacenters();
        finishExecution();
    } else { // some cloudlets haven't finished yet
        if (getCloudletList().size() > 0 && cloudletsSubmitted == 0) {
            // all the cloudlets sent have finished; some bound
            // cloudlet is waiting for its VM to be created
            clearDatacenters();
            createVmsInDatacenter(0);
        }
    }
}
protected void processOtherEvent(SimEvent ev) {
    if (ev == null) {
        Log.printLine(getName() + ".processOtherEvent(): Error - an event is null.");
        return;
    }
    Log.printLine(getName() + ".processOtherEvent(): Error - event unknown by this DatacenterBroker.");
}
protected void createVmsInDatacenter(int datacenterId) {
    // send as many vms as possible to this datacenter before trying the next one
    int requestedVms = 0;
    String datacenterName = CloudSim.getEntityName(datacenterId);
    for (Vm vm : getVmList()) {
        if (!getVmsToDatacentersMap().containsKey(vm.getId())) {
            Log.printLine(CloudSim.clock() + ": " + getName() + ": Trying to Create VM #"
                    + vm.getId() + " in " + datacenterName);
            sendNow(datacenterId, CloudSimTags.VM_CREATE_ACK, vm);
            requestedVms++;
        }
    }
    getDatacenterRequestedIdsList().add(datacenterId);
    setVmsRequested(requestedVms);
    setVmsAcks(0);
}
protected void submitCloudlets() {
    int vmIndex = 0;
    // SJF: sort the cloudlets in increasing order of length before submitting
    List<Cloudlet> sortList = new ArrayList<Cloudlet>();
    ArrayList<Cloudlet> tempList = new ArrayList<Cloudlet>();
    for (Cloudlet cloudlet : getCloudletList()) {
        tempList.add(cloudlet);
    }
    int totalCloudlets = tempList.size();
    for (int i = 0; i < totalCloudlets; i++) {
        Cloudlet smallestCloudlet = tempList.get(0);
        for (Cloudlet checkCloudlet : tempList) {
            if (smallestCloudlet.getCloudletLength() > checkCloudlet.getCloudletLength()) {
                smallestCloudlet = checkCloudlet;
            }
        }
        sortList.add(smallestCloudlet);
        tempList.remove(smallestCloudlet);
    }
    int count = 1;
    for (Cloudlet printCloudlet : sortList) {
        Log.printLine(count + ".Cloudlet Id:" + printCloudlet.getCloudletId()
                + ",Cloudlet Length:" + printCloudlet.getCloudletLength());
        count++;
    }
    for (Cloudlet cloudlet : sortList) {
        Vm vm;
        if (cloudlet.getVmId() == -1) { // cloudlet not bound to a vm: assign round-robin
            vm = getVmsCreatedList().get(vmIndex);
        } else { // submit to the specific vm
            vm = VmList.getById(getVmsCreatedList(), cloudlet.getVmId());
            if (vm == null) { // vm was not created
                Log.printLine(CloudSim.clock() + ": " + getName()
                        + ": Postponing execution of cloudlet " + cloudlet.getCloudletId()
                        + ": bound VM not available");
                continue;
            }
        }
        Log.printLine(CloudSim.clock() + ": " + getName() + ": Sending cloudlet "
                + cloudlet.getCloudletId() + " to VM #" + vm.getId());
        cloudlet.setVmId(vm.getId());
        sendNow(getVmsToDatacentersMap().get(vm.getId()), CloudSimTags.CLOUDLET_SUBMIT, cloudlet);
        cloudletsSubmitted++;
        vmIndex = (vmIndex + 1) % getVmsCreatedList().size();
        getCloudletSubmittedList().add(cloudlet);
    }
    for (Cloudlet cloudlet : getCloudletSubmittedList()) {
        getCloudletList().remove(cloudlet);
    }
}
protected void clearDatacenters() {
    for (Vm vm : getVmsCreatedList()) {
        Log.printLine(CloudSim.clock() + ": " + getName() + ": Destroying VM #" + vm.getId());
        sendNow(getVmsToDatacentersMap().get(vm.getId()), CloudSimTags.VM_DESTROY, vm);
    }
    getVmsCreatedList().clear();
}
protected void finishExecution() {
    sendNow(getId(), CloudSimTags.END_OF_SIMULATION);
}
@Override
public void shutdownEntity() {
    Log.printLine(getName() + " is shutting down...");
}
@Override
public void startEntity() {
    Log.printLine(getName() + " is starting...");
    schedule(getId(), 0, CloudSimTags.RESOURCE_CHARACTERISTICS_REQUEST);
}
@SuppressWarnings("unchecked")
public <T extends Vm> List<T> getVmList() {
    return (List<T>) vmList;
}
protected <T extends Vm> void setVmList(List<T> vmList) {
    this.vmList = vmList;
}
@SuppressWarnings("unchecked")
public <T extends Cloudlet> List<T> getCloudletList() {
    return (List<T>) cloudletList;
}
protected <T extends Cloudlet> void setCloudletList(List<T> cloudletList) {
    this.cloudletList = cloudletList;
}
@SuppressWarnings("unchecked")
public <T extends Cloudlet> List<T> getCloudletSubmittedList() {
    return (List<T>) cloudletSubmittedList;
}
protected <T extends Cloudlet> void setCloudletSubmittedList(List<T> cloudletSubmittedList) {
    this.cloudletSubmittedList = cloudletSubmittedList;
}
@SuppressWarnings("unchecked")
public <T extends Cloudlet> List<T> getCloudletReceivedList() {
    return (List<T>) cloudletReceivedList;
}
protected <T extends Cloudlet> void setCloudletReceivedList(List<T> cloudletReceivedList) {
    this.cloudletReceivedList = cloudletReceivedList;
}
@SuppressWarnings("unchecked")
public <T extends Vm> List<T> getVmsCreatedList() {
    return (List<T>) vmsCreatedList;
}
protected <T extends Vm> void setVmsCreatedList(List<T> vmsCreatedList) {
    this.vmsCreatedList = vmsCreatedList;
}
protected int getVmsRequested() {
    return vmsRequested;
}
protected void setVmsRequested(int vmsRequested) {
    this.vmsRequested = vmsRequested;
}
protected int getVmsAcks() {
    return vmsAcks;
}
protected void setVmsAcks(int vmsAcks) {
    this.vmsAcks = vmsAcks;
}
protected void incrementVmsAcks() {
    vmsAcks++;
}
protected int getVmsDestroyed() {
    return vmsDestroyed;
}
protected void setVmsDestroyed(int vmsDestroyed) {
    this.vmsDestroyed = vmsDestroyed;
}
protected List<Integer> getDatacenterIdsList() {
    return datacenterIdsList;
}
protected void setDatacenterIdsList(List<Integer> datacenterIdsList) {
    this.datacenterIdsList = datacenterIdsList;
}
protected Map<Integer, Integer> getVmsToDatacentersMap() {
    return vmsToDatacentersMap;
}
protected void setVmsToDatacentersMap(Map<Integer, Integer> vmsToDatacentersMap) {
    this.vmsToDatacentersMap = vmsToDatacentersMap;
}
protected Map<Integer, DatacenterCharacteristics> getDatacenterCharacteristicsList() {
    return datacenterCharacteristicsList;
}
protected void setDatacenterCharacteristicsList(
        Map<Integer, DatacenterCharacteristics> datacenterCharacteristicsList) {
    this.datacenterCharacteristicsList = datacenterCharacteristicsList;
}
protected List<Integer> getDatacenterRequestedIdsList() {
    return datacenterRequestedIdsList;
}
protected void setDatacenterRequestedIdsList(List<Integer> datacenterRequestedIdsList) {
    this.datacenterRequestedIdsList = datacenterRequestedIdsList;
}
}
OUTPUT
Starting CloudSimExample1...
Initialising...
Starting CloudSim version 3.0
Datacenter_0 is starting...
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>null
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 1 resource(s)
0.0: Broker: Trying to Create VM #0 in Datacenter_0
0.1: Broker: VM #0 has been created in Datacenter #2, Host #0
0.1: Broker: Sending cloudlet 0 to VM #0
400.1: Broker: Cloudlet 0 received
400.1: Broker: All Cloudlets executed. Finishing...
400.1: Broker: Destroying VM #0
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
Datacenter_0 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.
========== OUTPUT ==========
Cloudlet ID STATUS Data center ID VM ID Time Start Time Finish Time
0 SUCCESS 2 0 400 0.1 400.1
*****Datacenter: Datacenter_0*****
User id        Debt
3              35.6
CloudSimExample1 finished!
RESULT
Thus a cloud scenario was simulated using CloudSim and a scheduling algorithm that is not
present in CloudSim was executed successfully.
Ex.No: 6 FILE SHARING BETWEEN PHYSICAL MACHINE AND
VIRTUAL MACHINE
AIM:
To share files between physical machine and virtual machine.
PROCEDURE:
4. Open VMware Workstation, go to the VM tab and click Install VMware Tools.
If Install VMware Tools is grayed out, then follow the steps below.
10. If Always Enable was disabled, then click on the VMware Tools folder on the
desktop and extract the files. After extracting, open a terminal and give the
following commands:
$ cd Desktop/
$ ls
$ cd vmware-tools-distrib/
$ ls
$ sudo ./vmware-install.pl
Then enter the password, answer yes to the prompts and press Enter.
11. Power off the virtual machine and power it on again. To check whether VMware Tools is
installed, open the terminal and enter the command $ vmware-toolbox-cmd --version
12. Follow the commands to share files between the physical and virtual layers:
$ vmware-hgfsclient
$ cd /mnt
$ cd hgfs/
$ ls (using this command, check whether your file is there)
13. Now, in the virtual machine, open the Files folder -> Other Locations -> Computer ->
mnt folder -> hgfs folder -> file -> text.txt file.
14. Update any changes in the text file and save it in the virtual machine.
15. Go to the host machine, open the file and check for the update.
16. Make some changes in the file in the host machine and check for the update in the virtual machine.
RESULT:
Thus files were shared between the physical machine and the virtual machine successfully.
Ex.No: 7 LAUNCHING A VIRTUAL MACHINE USING TRYSTACK
AIM
To find a procedure to launch a virtual machine using TryStack (an online OpenStack
sandbox).
PROCEDURE
4. Select Pool to external and then click Allocate IP.
5. Click Associate.
6. Now you will get a public IP, e.g. 8.21.28.120, for your instance.
Step 5: Configure Access & Security
1. Go to Compute > Access & Security and then open Security Groups tab.
2. In default row, click Manage Rules.
3. Click Add Rule, choose ALL ICMP rule to enable ping into your instance,
and then click Add.
4. Click Add Rule, choose HTTP rule to open HTTP port (port 80), and then
click Add.
5. Click Add Rule, choose SSH rule to open SSH port (port 22), and then click
Add.
6. You can open other ports by creating new rules.
Step 6: SSH to Your Instance
Now, you can SSH to your instance at the floating IP address that you got in Step 4. If you are
using an Ubuntu image, the SSH user will be ubuntu.
OUTPUT
RESULT
Thus a procedure to launch virtual machine using TryStack was written successfully.
Ex:No.8 INSTALLING HADOOP SINGLE NODE CLUSTER
WORDCOUNT
AIM
To find procedure to set up the one node Hadoop cluster and run simple applications like
wordcount.
PROCEDURE
Step 1: Installing Java is the main prerequisite for Hadoop. Install Java (JDK 8) and set the
path for the JAVA_HOME environment variable.
Step 2: Install Apache Hadoop 3.1.0 and download the Windows binaries for Hadoop 3.1.0.
Step 3: Replace bin in Hadoop 3.1.0 with the Windows binaries' bin (rename the original bin
in Hadoop to bin1 and then copy the new bin in its place).
Step 4: Set the Hadoop bin path to path in the environment variable.
Step 5: Set the JAVA_HOME path in hadoop-env.cmd.
(Hadoop-> etc-> Hadoop -> Hadoop-env.cmd)
Step 6: Open Eclipse and create a Java project.
Step 7: Type the wordcount program in Eclipse.
Step 8: Add the client library jars and the map-reduce jars (except the examples jar) using the
build path option.
Step 9: Export the project as a jar file.
Step 10: Execute the jar file in the cmd using the command
(hadoop jar <filename.jar> <inputdir> <outputdir>)
Step 11: View the output using (hdfs dfs -cat <outputdir>/*).
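The map step (tokenize each line into words) and the reduce step (sum the counts per word) of the program below can be sketched locally, without Hadoop, as a single tokenize-then-count loop (plain Java; the class name WordCountSketch and the sample sentence are illustrative only):

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSketch {
    // Map: split the text into tokens; Reduce: sum a count of 1 per token.
    static Map<String, Integer> count(String text) {
        Map<String, Integer> counts = new TreeMap<>(); // sorted by word, like the Hadoop output
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(count("big data big cloud data big"));
        // prints {big=3, cloud=1, data=2}
    }
}
```

Hadoop performs the same logic at scale: the mapper emits (word, 1) pairs, the framework groups pairs by key across the cluster, and the reducer sums each group.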
PROGRAM
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
public class WordCount {
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.addOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Input File (input.txt)
Output
RESULT
Thus the one-node Hadoop cluster was set up and the words in the input file were counted
successfully.