GOOD SHEPHERD
COLLEGE OF ENGINEERING AND TECHNOLOGY
LAB MANUAL
Regulation 2021
PREPARED BY
LIST OF EXPERIMENTS
TABLE OF CONTENTS
S.NO.  DATE  EXPERIMENT TITLE  MARKS/10  SIGN.
1. Install VirtualBox / VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.
AIM:
To install VirtualBox/VMware Workstation with different flavours of Linux or Windows OS on top of Windows 7 or 8.
Procedure:
1. Download and install VirtualBox, and download the Ubuntu desktop ISO image.
2. Open VirtualBox and click New to create a new virtual machine.
3. Type the Name for the virtual machine, like Ubuntu 16. VirtualBox will try to predict the
Type and Version based on the name you enter. Otherwise, select:
Type: Linux
Version: Ubuntu (64-bit)
and click Next.
4. Next we need to specify how much memory to allocate to the virtual machine. According to the
Ubuntu system requirements we need 2GB, but more is recommended if your host can handle it.
Basically, the higher you can set the memory without severely impacting your host machine, the
better the performance of the guest machine. If you're not sure, stick with 2GB.
5. On the Hardware screen select Create a virtual hard disk now and click Create
6. Accept the default option VDI for Hard disk file type (or change it if you wish…) and click
Next
7. Next we are prompted for Storage on physical hard disk. The options are Dynamically
allocated and Fixed size. We’ll use the default of Dynamically allocated. Click Next
8. Choose the hard disk size and storage location. The Ubuntu system requirements recommend
25GB. Remember, we choose Dynamically allocated as our storage option in the last step, so we
won’t consume all this disk space immediately. Rather, VirtualBox will allocate it as required,
up to the maximum 25GB we specified. Click Create
9. The wizard will finish and we are returned to the main VirtualBox window. Click Settings
10. In the left pane select Storage, then in the right select the CD icon with the word Empty
beside it.
11. Under Attributes click the CD icon and select Choose Virtual Optical Disk File, then
browse to the downloaded file ubuntu-16.04.1-desktop-amd64.iso.
12. Click OK to close the Settings dialog window. The virtual machine should now be ready
to start.
Install Ubuntu
In VirtualBox your VM should be showing as Powered Off, and the optical drive configured to point to
the Ubuntu ISO file we downloaded previously.
1. In VirtualBox, select the virtual machine Ubuntu 16 and click Start. VirtualBox will launch a
new window with the vm and boot from the iso.
5. You will be prompted with a warning saying the changes will be written to disk. Click
Continue
9. The Ubuntu installation may take several minutes to run, so have another coffee.
10. When the installation is finished you will be prompted to restart. Save and close
anything else you may have open and click Restart Now
11. When the VM reboots you may see a message asking you to remove the installation medium.
Navigate back into the Storage settings where we previously selected the iso file. If the Ubuntu
iso file is still there, remove it. Otherwise close the Settings window and in the vm press Enter to
proceed.
12. If all went well the VM should boot to the Ubuntu login screen. Enter your password to
continue.
Ubuntu should run normally in the VirtualBox environment. If everything is far too small, you can adjust
the ‘zoom’ by selecting View > Scale Factor > 200%.
Have fun!
Result:
Thus the VirtualBox/Oracle virtual machine has been installed successfully.
Aim:
To install a C compiler in the virtual machine created using VirtualBox and execute a simple program.
Procedure:
Step 1:
Install CentOS or Ubuntu in the VMware or Oracle VirtualBox virtual machine as in the previous experiment.
Step 2:
Login into the VM of installed OS.
Step 3:
If it is ubuntu then, for gcc installation
$ sudo add-apt-repository ppa:ubuntu-toolchain-r/test
$ sudo apt-get update
$ sudo apt-get install gcc-6 gcc-6-base
Step 4:
Write a sample program like
welcome.cpp
#include <iostream>
using namespace std;
int main()
{
    cout << "Hello world";
    return 0;
}
Step 5:
First we need to compile and link our program. Assuming the source code is saved in a file
welcome.cpp, we can do that using the GNU C++ compiler g++, for example:
g++ -Wall -o welcome welcome.cpp. The resulting program can then be executed with ./welcome.
Result:
Thus the GCC compiler was installed and a sample program was compiled and executed successfully.
AIM:
To install Google App Engine and create a hello world app and other simple web applications
using Python/Java.
PROCEDURE:
Steps to creation:
1. Click the Google Cloud Platform toolbar button.
2. Select Create New Project > Google App Engine Standard Java Project.
3. To create a Maven-based App Engine project, check Create as Maven Project and
enter a Maven Group ID and Artifact ID of your choosing to set the coordinates for
this project. The Group ID is often the same as the package name, but does not have to
be. The Artifact ID is often the same as or similar to the project name, but does not
have to be.
4. Click Next.
5. Select any libraries you need in the project.
6. Click Finish.
The wizard generates a native Eclipse project, with a simple servlet, that you can run and deploy from
the IDE.
Note: You may see a message that says, Port 8080 already in use. If so, you can run
your application on a different host or port.
To debug your project locally, complete the running the project locally steps, except select
Debug As > App Engine instead of Run As > App Engine in the context menu.
The server stops at the breakpoints you set and shows the Eclipse debugger view.
Note: You can also select Debug As > Debug on Server to debug your application on a
different host or port.
2. In the Run Configurations dialog, select an existing App Engine Local Server
launch configuration, or click the New launch configuration button to create one.
3. Select the Cloud Platform tab of your run configuration.
4. Select an account.
5. Select a project to assign a project ID to be used in the local run. It doesn't matter
which project you select because you won't actually connect to it.
6. As an alternative, if you aren't logged in or don't have a Cloud Project, you can
instead set the GOOGLE_CLOUD_PROJECT environment variable to a legal
string, such
as MyProjectId, in the Environment tab of the run configuration.
Result:
Thus Google App Engine was installed and a sample hello world application was executed successfully.
DATE: LAUNCHING WEB APPLICATIONS
AIM:
To use the GAE launcher to launch web applications.
Procedure:
Step 1:
Install Eclipse and create a GAE web application as in the previous experiment.
Step2 :
Deploying App Engine Standard Applications from Eclipse
The following steps cover creating a new App Engine app in the Google Cloud Console, authenticating
with Google, and deploying your project to App Engine.
If you see Manage Google Accounts instead of the Sign in to Google option, that means you
are already signed in, so you can skip these account sign in steps.
4. Your system browser opens outside of Eclipse and asks for the permissions it needs to
manage your App Engine Application.
5. Click Allow and close the window. Eclipse is now signed into your account.
6. Ensure that the appengine-web.xml file is in the WEB-INF folder of
your web application.
7. Ensure that the project has the App Engine Project facet. If you created it
using the wizard, it should already have this facet. Otherwise:
8. Right click the project in the Package Explorer to bring up the context menu.
9. Select Configure > Convert to App Engine Project.
Deploy the Project to App Engine
1. Right click the project in the Package Explorer to open the context menu.
2. Select Deploy to App Engine Standard.
3. A dialog pops up.
4. Select the account you want to deploy with, or add a new account.
5. The list of projects the account has access to loads. Select the one you want to deploy to.
6. Click OK.
A background job launches that deploys the project to App Engine. The output of the job is
visible in the Eclipse Console view.
By default, App Engine stops the previous version of your application and immediately
promotes your new code to receive all traffic. If you'd rather manually promote it later
using gcloud or the Google Cloud Console, uncheck Promote the deployed version to receive
all traffic. If you don't want to stop the previous version, uncheck Stop previous version.
Result:
Thus the web application was launched on Google App Engine successfully.
Aim:
To simulate a cloud scenario using CloudSim and run a scheduling algorithm.
Procedure:
1. Before you start, it is essential that CloudSim is already installed/set up on your local
machine. In case you are yet to install it, you may follow the process of CloudSim setup
using the Eclipse IDE.
3. There are eleven steps that are followed in each example, with some variation in them,
specified as follows:
Step 1: Set the number of users for the current simulation. This user count is directly
proportional to the number of brokers in the simulation.
Step 2: Initialize the simulation by passing the user count, a calendar instance and a trace
flag to CloudSim.init().
Step 3: Create a datacenter, which requires defining the datacenter characteristics along with
the host list. This is the most important entity; without it there is no way to simulate
hosting the virtual machines.
hostList.add(
    new Host(hostId, new RamProvisionerSimple(ram), new BwProvisionerSimple(bw),
        storage, peList, new VmSchedulerTimeShared(peList))
);
String arch = "x86";
String os = "Linux";
String vmm = "Xen";
double time_zone = 10.0;
double cost = 3.0;
double costPerMem = 0.05;
double costPerStorage = 0.001;
int vmid = 0;
int mips = 1000;
long size = 10000; // image size (MB)
int ram = 512;     // VM memory (MB)
long bw = 1000;
int pesNumber = 1; // number of CPUs
Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm,
        new CloudletSchedulerTimeShared());
vmlist.add(vm);
int id = 0;
long length = 400000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
Cloudlet cloudlet = new Cloudlet(id, length, pesNumber, fileSize, outputSize,
        utilizationModel, utilizationModel, utilizationModel);
cloudlet.setUserId(brokerId);
cloudlet.setVmId(vmid);
Step 8: Submit Cloudlets to Datacenter broker.
broker.submitCloudletList(cloudletList);
cloudlet = list.get(i);
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
    Log.print("SUCCESS");
    Log.printLine(indent + indent + cloudlet.getResourceId()
            + indent + indent + indent + cloudlet.getVmId());
}
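The time-shared policies used in this example (VmSchedulerTimeShared, CloudletSchedulerTimeShared) split a resource's capacity equally among whatever is currently running. The idea can be sketched in a few lines of Python (an illustration only, not the CloudSim API):

```python
# Toy model of time-shared (processor-sharing) scheduling: n cloudlets on a
# VM each receive mips/n of its capacity until one of them finishes.
def time_shared_finish_times(lengths, mips):
    """Finish times of cloudlets (lengths in MI) run concurrently on one VM."""
    n = len(lengths)
    finish, elapsed, prev = [], 0.0, 0.0
    for i, work in enumerate(sorted(lengths)):
        active = n - i                       # cloudlets still running
        elapsed += (work - prev) * active / mips
        finish.append(elapsed)
        prev = work
    return finish

print(time_shared_finish_times([400000], 1000))  # [400.0]
```

With the example's values (length = 400000, mips = 1000) a single cloudlet finishes after 400 seconds; two identical cloudlets sharing the same VM would each finish at 800 seconds.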
Once you run the example, the output for CloudSimExample1.java displays each cloudlet's
status (SUCCESS) along with its data center ID, VM ID and execution times.
Result:
Thus the simulation of a cloud scenario using CloudSim with a scheduling algorithm was
implemented successfully.
AIM:
To Find a procedure to transfer the files from one virtual machine to another virtual
machine.
PROCEDURE:
Step 1: Start the OpenNebula service as the root user and view the Sunstone GUI at localhost:9869.
Step 2: Create a oneimage, a onetemplate and a one VM as earlier.
Creating oneimage:
oneadmin@linux:~/datastores$ oneimage create --name "Ubuntu" --path "/home/linux/Downloads/source/tubuntu1404-5.0.1.qcow2c" --driver qcow2 --datastore default
Creating One Template:
oneadmin@linux:~/datastores$ onetemplate create --name "ubuntu1" --cpu 1 --vcpu 1 --memory 1024 --arch x86_64 --disk "Ubuntu" --nic "private" --vnc --ssh
Instantiating OneVm (onetemplate):
oneadmin@linux:~/datastores$ onetemplate instantiate "ubuntu1"
To migrate the running VM, use the onevm migrate command (for example, onevm migrate 0 host02,
assuming the VM has ID 0). This will move the VM from host01 to host02. The onevm list command
then shows something like the following:
oneadmin@linux:~/datastores$ onevm list
Result:
Thus the procedure to transfer files from one virtual machine to another, and to move a VM
from one node to the other, was executed successfully.
AIM:
To set up a one-node Hadoop cluster and run simple applications.
Steps:
Install Hadoop
Step 1: Download the Java 8 package and save the file in your home directory.
Step 5: Add the Hadoop and Java paths in the bash file (.bashrc).
Open the .bashrc file. Now, add the Hadoop and Java paths as shown below.
Command: vi .bashrc
For applying all these changes to the current Terminal, execute the source command (source .bashrc).
To make sure that Java and Hadoop have been properly installed on your system
and can be accessed through the Terminal, execute the java -version and hadoop
version commands.
Command: cd hadoop-2.7.3/etc/hadoop/
Command: ls
Step 7: Open core-site.xml and edit the property mentioned below inside
configuration tag:
core-site.xml informs Hadoop daemon where NameNode runs in the cluster. It contains
configuration settings of Hadoop core such as I/O settings that are common to HDFS &
MapReduce.
Command: vi core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
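Once saved, the edited property can be sanity-checked programmatically. A small Python sketch (the inline snippet below stands in for your core-site.xml file):

```python
# Parse a core-site.xml-style snippet and read back fs.default.name.
import xml.etree.ElementTree as ET

snippet = """<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>"""

root = ET.fromstring(snippet)
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props["fs.default.name"])  # hdfs://localhost:9000
```

The same check works for the other configuration files by pointing ET.parse() at the file path instead of parsing an inline string.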
Step 8: Edit hdfs-site.xml and edit the property mentioned below inside
configuration tag:
Command: vi hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
Step 9: Edit the mapred-site.xml file and edit the property mentioned below
inside configuration tag:
In some cases, the mapred-site.xml file is not available, so we have to create it from the
mapred-site.xml.template file (cp mapred-site.xml.template mapred-site.xml).
Command: vi mapred-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
Step 10: Edit yarn-site.xml and edit the property mentioned below inside
configuration tag:
Command: vi yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
Step 11: Edit hadoop-env.sh and add the Java Path as mentioned below:
hadoop-env.sh contains the environment variables that are used in the script to run
Hadoop like Java home path, etc.
Command: vi hadoop-env.sh
Step 12: Go back to the Hadoop home directory and format the NameNode.
Command: cd
Command: cd hadoop-2.7.3
Command: bin/hadoop namenode -format
This formats the HDFS via the NameNode. This command is only executed the first
time. Formatting the file system means initializing the directory specified by the
dfs.name.dir variable.
Never format an up-and-running Hadoop file system; you will lose all the data stored in HDFS.
Step 13: Once the NameNode is formatted, go to hadoop-2.7.3/sbin directory and start all the
daemons.
Command: cd hadoop-2.7.3/sbin
Either you can start all daemons with a single command or do it individually.
Command: ./start-all.sh
Start NameNode:
The NameNode is the centerpiece of an HDFS file system. It keeps the directory tree of
all files stored in HDFS and tracks all the files stored across the cluster.
Command: ./hadoop-daemon.sh start namenode
Start DataNode:
The DataNode stores the actual data blocks in HDFS and serves read and write requests from clients.
Command: ./hadoop-daemon.sh start datanode
Start ResourceManager:
ResourceManager is the master that arbitrates all the available cluster resources and
thus helps in managing the distributed applications running on the YARN system.
Its work is to manage each NodeManager and each application's ApplicationMaster.
Command: ./yarn-daemon.sh start resourcemanager
Start NodeManager:
The NodeManager in each machine is the framework agent which is responsible for
managing containers, monitoring their resource usage and reporting the same to the
ResourceManager.
Command: ./yarn-daemon.sh start nodemanager
Start JobHistoryServer:
JobHistoryServer is responsible for servicing all job-history-related requests from the client.
Command: ./mr-jobhistory-daemon.sh start historyserver
Step 14: To check that all the Hadoop services are up and running, run the below command.
Command: jps
Result:
Thus the one-node Hadoop cluster was installed and simple applications were executed
successfully.
AIM:
To install Docker and create and manage containers using the Docker command-line interface.
Once the installation is complete, you can verify it by checking the Docker version and making sure the
Docker daemon is running.
For those using the Ubuntu operating system, you can verify the Docker installation by running the following
command:
docker --version
With Docker successfully installed, you can now access the Docker command-line interface (CLI) to start
creating and managing containers.
The CLI provides a set of commands for interacting with Docker, allowing you to build, run, and manage
containers with ease.
Some of the key concepts in Docker revolve around creating a Dockerfile, which is a text document that
contains all the commands a user could call on the command line to assemble an image.
The Dockerfile contains all the information Docker needs to build the image. Let’s take a look at how to
define a simple Dockerfile and some best practices for writing it.
FROM alpine
Dockerfiles should follow best practices to ensure consistency, maintainability, and reusability.
One of the best practices is to use the official base images from Docker Hub, as they are well-maintained and
regularly updated. It’s also important to use specific versions of the base images to avoid unexpected changes.
FROM node:14
COPY . /app
WORKDIR /app
Best practices for writing Dockerfiles also include using a .dockerignore file to specify files and directories to
exclude from the context when building the image.
This helps to reduce the build context and improve build performance.
Some additional best practices for writing Dockerfiles include avoiding running commands as root, using
multi-stage builds for smaller images, and using environment variables for configuration.
To build and run your Docker container, you will need to follow a few simple steps.
First, you will need to build the Docker image from your Dockerfile.
Once the image is built, you can run your container using the Docker run command. In this section, we will
walk through each step in detail.
To build the Docker image from your Dockerfile, you will need to navigate to the directory where your
Dockerfile is located and run the following command (here my-app is an example image name):
docker build -t my-app .
This command will build the Docker image using the instructions specified in your Dockerfile.
Once the build process is complete, you will have a new Docker image ready for use.
To run your Docker container, you will need to use the Docker run command followed by the name of the
image you want to run.
For example: docker run my-app (where my-app is the name of the image you built).
Your Docker container is now up and running, ready to serve your application to the world.
Unlike traditional virtual machines, where you need to manually install and configure software, Docker
containers are designed to be easily managed and manipulated.
Let’s take a look at some key ways to manage your Docker containers.
With Docker, you can easily monitor the performance of your containers using built-in commands.
By running docker stats , you can view real-time CPU, memory, and network usage for all running
containers.
This can help you identify any resource bottlenecks and optimize your container performance.
The Docker CLI provides simple commands for stopping, starting, and removing containers.
For example, docker stop [container_name] stops a running container, and
docker rm [container_name] removes it.
Additionally, you can use the docker ps command to list all running containers, and docker ps -a to see all
containers, including those that are stopped.
This gives you full visibility and control over your containers.
RESULT:
Thus Docker was installed, and containers were created and managed successfully.

Grid Computing enables virtual organizations to share geographically distributed resources as they
pursue common goals, assuming the absence of central location, central control, omniscience, and an
existing trust relationship.
(or)
Grid technology demands new distributed computing models, software/middleware support,
network protocols, and hardware infrastructures.
National grid projects are followed by industrial grid platform development by IBM, Microsoft,
Sun, HP, Dell, Cisco, EMC, Platform Computing, and others.
New grid service providers (GSPs) and new grid applications have emerged rapidly, similar to the
growth of Internet and web services in the past two decades.
Grid systems are classified in essentially two categories: computational or data grids and P2P grids.
6.What are the business areas needs in Grid computing?
Life Sciences
Financial services
Higher Education
Engineering Services
Government
Collaborative games
13. Define NetBeans.
NetBeans is an open-source integrated development environment (IDE) for developing with Java, PHP,
C++, and other programming languages. NetBeans is also referred to as a platform of modular components
used for developing Java desktop applications.
14. Define Apache Tomcat.
Apache Tomcat (or Jakarta Tomcat or simply Tomcat) is an open source servlet container developed by
the Apache Software Foundation (ASF). Tomcat implements the Java Servlet and the JavaServer Pages
(JSP) specifications from Sun Microsystems, and provides a "pure Java" HTTP web server environment for
Java code to run.
15. What is private cloud?
The private cloud is built within the domain of an intranet owned by a single organization.
Therefore, they are client owned and managed. Their access is limited to the owning clients and their
partners. Their deployment was not meant to sell capacity over the Internet through publicly accessible
interfaces. Private clouds give local users a flexible and agile private infrastructure to run service
workloads within their administrative domains.
Stage 0: Pre-migration
  Active VM on Host A
  Alternate physical host may be preselected for migration
  Block devices mirrored and free resources maintained
Stage 1: Reservation
  Initialize a container on the target host
Stage 2: Iterative pre-copy
  Copy dirty pages in successive rounds
Stage 3: Stop and copy
  Suspend VM on Host A
  Generate ARP to redirect traffic to Host B
  Synchronize all remaining VM state to Host B
Stage 4: Commitment
  VM state on Host A is released
Stage 5: Activation
  VM starts on Host B
  Connects to local devices
  Resumes normal operation
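The behaviour of the iterative pre-copy stage can be sketched with a toy Python model (hypothetical parameters, not hypervisor code): each round re-copies the pages dirtied during the previous round, and the VM is suspended for the stop-and-copy stage once the dirty set is small enough.

```python
# Toy model of iterative pre-copy in live VM migration.
def precopy_rounds(total_pages, dirty_fraction, threshold, max_rounds=30):
    """Return (pre-copy rounds, pages left for the stop-and-copy stage)."""
    to_copy = total_pages              # round 1 copies every page
    rounds = 0
    while to_copy > threshold and rounds < max_rounds:
        rounds += 1
        # while copying, this fraction of the copied pages is dirtied again
        to_copy = int(to_copy * dirty_fraction)
    return rounds, to_copy

print(precopy_rounds(1000, 0.1, 10))  # (2, 10)
```

A higher dirty fraction means more rounds and a larger residual set, which is why write-intensive workloads see longer stop-and-copy downtime.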