19dcs153 Ac Prac File
Class: 7CSE2
Subject Name: Advance Computing
Semester: VII
Subject Code: CS451
Academic year: 2022-23
PRACTICAL 1
AIM:
Implementing cloud-based infrastructures and services normally requires setting up the complete system. CloudSim lets researchers and industry-based developers focus on the specific system design issues they want to investigate, without being concerned with low-level details, because it provides a simulation environment for implementing cloud-based infrastructure solutions. Overview of CloudSim functionalities:
o Support for modeling and simulation of large-scale Cloud computing data centers
o Support for modeling and simulation of virtualized server hosts, with customizable policies for
provisioning host resources to virtual machines
o Support for modeling and simulation of data center network topologies and message-passing
applications
o Support for dynamic insertion of simulation elements, stop and resume of simulation
o Support for user-defined policies for allocation of hosts to virtual machines and policies for
allocation of host resources to virtual machines
Perform Cloud Computing Set up using Cloudsim Tool:
o Introduction to Cloudsim tool.
o Perform Installation steps of Cloudsim on NetBeans.
THEORY:
CloudSim is a simulation toolkit that supports the modeling and simulation of the core functionality of a cloud, such as job/task queues, processing of events, creation of cloud entities (datacenters, datacenter brokers, etc.), communication between different entities, and implementation of broker policies. This toolkit allows you to:
o Test application services in a repeatable and controllable environment.
o Tune the system bottlenecks before deploying apps in an actual cloud.
o Experiment with different workload mixes and resource performance scenarios on simulated infrastructure for developing and testing adaptive application provisioning techniques.
Core features of CloudSim are:
o Support for modeling and simulation of large-scale computing environments such as federated cloud data centers, virtualized server hosts with customizable policies for provisioning host resources to virtual machines, and energy-aware computational resources.
o It is a self-contained platform for modeling cloud’s service brokers, provisioning, and allocation
policies.
o It supports the simulation of network connections among simulated system elements.
o Support for simulation of a federated cloud environment that inter-networks resources from both private and public domains.
o Availability of a virtualization engine that aids in the creation and management of multiple
independent and co-hosted virtual services on a data center node.
o Flexibility to switch between space-shared and time-shared allocation of processing cores to virtualized services.
Process:
- You need the NetBeans Java installer on your system.
- If you do not have it, you can download it from the following link:
https://netbeans.org/images_www/v6/download/community/8.2
- You can download whichever installer suits your requirements from the above-mentioned web page.
- Double-click on the installer (or right-click and run it) to start the installation.
- Click on the Next option.
- Select the "Java with Ant" category, then select the first option, Java Application, and press Next.
- Now give the project a name of your choice.
- Go to Libraries in the project tree, right-click on it, and in the menu that appears click "Add JAR/Folder".
- Now browse to the CloudSim folder which you extracted from the zip file and select "cloudsim-3.0.3.jar".
CONCLUSION:
In this practical, we learnt about NetBeans and CloudSim and installed both tools on our system.
PRACTICAL 2
AIM:
Cloud Computing aims for Internet-based application services to deliver reliable, secure, fault-tolerant, scalable infrastructure. It is a tremendously challenging task to model and schedule different applications and services on real cloud infrastructure, which requires handling different workload and energy performance parameters. Consider this real-world analogy in CloudSim and perform the following programs:
o Write a program in cloudsim using NetBeans IDE to create a datacenter with one host and run four
cloudlets on it.
o Write a program in cloudsim using NetBeans IDE to create a datacenter with three hosts and run three cloudlets on it.
Code(1):
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
// Lists holding the cloudlets and VMs used in this simulation
private static List<Cloudlet> cloudletList;
private static List<Vm> vmlist;
// main() excerpt: initialise the CloudSim library, then build the scenario
try {
int num_user = 1;
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false;
// (CloudSim initialisation, datacenter creation, broker creation and the VM
// parameter declarations are elided in this excerpt)
Vm vm = new Vm(vmid, brokerId, mips, pesNumber, ram, bw, size, vmm, new CloudletSchedulerTimeShared());
vmlist.add(vm);
broker.submitVmList(vmlist);
cloudletList = new ArrayList<Cloudlet>();
int id = 0;
long length = 400000;
long fileSize = 300;
long outputSize = 300;
UtilizationModel utilizationModel = new UtilizationModelFull();
// (creation of the four Cloudlet objects and binding them to the VM is elided in this excerpt)
broker.submitCloudletList(cloudletList);
CloudSim.startSimulation();
CloudSim.stopSimulation();
List<Cloudlet> newList = broker.getCloudletReceivedList();
printCloudletList(newList);
Log.printLine("CloudSim example finished!");
} catch (Exception ex) {
ex.printStackTrace();
Log.printLine("Unwanted error occured");
}
}
// createDatacenter() excerpt: return the configured Datacenter object
return datacenter;
}
// printCloudletList() excerpt: print the status of each received cloudlet
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
Output(1):
Code(2):
import java.text.DecimalFormat;
import java.util.ArrayList;
import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;
import org.cloudbus.cloudsim.Cloudlet;
import org.cloudbus.cloudsim.CloudletSchedulerTimeShared;
import org.cloudbus.cloudsim.Datacenter;
import org.cloudbus.cloudsim.DatacenterBroker;
import org.cloudbus.cloudsim.DatacenterCharacteristics;
import org.cloudbus.cloudsim.Host;
import org.cloudbus.cloudsim.Log;
import org.cloudbus.cloudsim.Pe;
import org.cloudbus.cloudsim.Storage;
import org.cloudbus.cloudsim.UtilizationModel;
import org.cloudbus.cloudsim.UtilizationModelFull;
import org.cloudbus.cloudsim.Vm;
import org.cloudbus.cloudsim.VmAllocationPolicySimple;
import org.cloudbus.cloudsim.VmSchedulerTimeShared;
import org.cloudbus.cloudsim.core.CloudSim;
import org.cloudbus.cloudsim.provisioners.BwProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.PeProvisionerSimple;
import org.cloudbus.cloudsim.provisioners.RamProvisionerSimple;
// main() excerpt: initialise the CloudSim library, then build the scenario
try {
int num_user = 1;
Calendar calendar = Calendar.getInstance();
boolean trace_flag = false;
// (datacenter creation, broker creation and the creation of VMs vm0, vm1 and vm2 are elided in this excerpt)
// vmlist.add(vm);
vmlist.add(vm0);
vmlist.add(vm1);
vmlist.add(vm2);
broker.submitVmList(vmlist);
// (creation of cloudletList and of the three Cloudlet objects is elided in this excerpt)
cloudletList.add(cloudlet1);
broker.submitCloudletList(cloudletList);
CloudSim.startSimulation();
CloudSim.stopSimulation();
List<Cloudlet> newList = broker.getCloudletReceivedList();
printCloudletList(newList);
Log.printLine("Cloudsim finished");
} catch (Exception ex) {
ex.printStackTrace();
Log.printLine("Unwanted error occured");
}
}
// createDatacenter() excerpt: per-host configuration used for each of the three hosts
int hostId = 0;
int ram = 2048; // host memory (MB)
long storage = 1000000; // host storage
int bw = 10000; // host bandwidth
// printCloudletList() excerpt: print the status of each received cloudlet
if (cloudlet.getCloudletStatus() == Cloudlet.SUCCESS) {
Log.print("SUCCESS");
CONCLUSION:
In this practical, we learnt about the CloudSim architecture and implemented several different scenarios using different numbers of datacenters, hosts and cloudlets.
PRACTICAL 3
AIM:
Perform following using Cloud Analyst:
o Install Cloud Analyst and integrate it with NetBeans. Monitor the performance of the existing algorithms given in Cloud Analyst.
Modify or propose a new load balancing algorithm compatible with Cloud Analyst.
THEORY:
CloudAnalyst:
o Cloud Analyst is a tool developed at the University of Melbourne whose goal is to support
evaluation of social networks tools according to geographic distribution of users and data centers.
o In this tool, communities of users and data centers supporting the social networks are characterized and, based on their location, parameters such as user experience while using the social network application and load on the data center are obtained/logged.
Process:
- Download CloudAnalyst from
- http://www.cloudbus.org/cloudsim/CloudAnalyst.zip
- Extract the files from the Zip file, which will give the following folder structure.
- If you want to run it from the command line, type the following command in cmd:
- java -cp jars\simjava2.jar;jars\gridsim.jar;jars\iText-2.1.5.jar;classes;. cloudsim.ext.gui.GuiMain
- Here we are creating 5 copies of one of the hosts, so it will give us a total of 6 hosts.
- We can also customize the user base, which models a group of users and generates traffic representing those users.
- You can also save this configuration in case you want to use it later. It is stored as a .sim file: XML data is generated and saved as the .sim file.
- Then we can run the simulation, which gives us an overall report of the simulation.
CONCLUSION:
In this practical, we have learned about CloudAnalyst and simulated a simple case with a single datacenter with 6 hosts and a single user base.
PRACTICAL 4
AIM:
Perform following using Google Cloud Platform:
o Introduction to Google Cloud.
o Perform Google Cloud Hands-on Labs.
Create and setup a Virtual Machine, GCP Essentials and Compute Engine: Qwik Start – Windows on
Google Cloud Platform.
o Compute Engine: Qwik Start
THEORY:
Introduction To Google Cloud Platform
o Google has been one of the leading software and technology developers in the world.
o Every year Google comes up with different innovations and advancements in the technological field, which helps people all over the world.
o In recent years, Google Cloud Platform is one such innovation that has seen an increase in usage because more and more people are adopting the cloud. Since there has been great demand for computing, a number of Google Cloud services have been launched for global customers.
What is Google Cloud Platform?
o Google cloud platform is a medium with the help of which people can easily access the cloud
systems and other computing services which are developed by Google.
o The platform includes a wide range of services that can be used in different sectors of cloud
computing, such as storage and application development. Anyone can access the Google cloud
platform and use it according to their needs.
Various Elements of Google Cloud Platform
o Google Cloud Platform is made up of a set of elements which are helpful in multiple ways.
o In the section coming up, we are going to talk about the various elements which are present in Google Cloud.
o Google Compute Engine: This computing engine has been introduced with the IaaS service by
Google which effectively provides the VMs similar to Amazon EC2.
o Google Cloud App Engine: the App Engine provides a PaaS service for hosting applications directly. This is a very powerful and important platform which helps to develop mobile and web applications.
o Google Cloud Container Engine: this particular element is helpful because it allows the user to run Docker containers on the Google Cloud Platform, orchestrated by Kubernetes.
o Google Cloud Storage: the ability to store data and important resources on the cloud platform is very important. Google Cloud Platform has been popular for its storage facilities and allows users to back up or store data on cloud servers which can be accessed from anywhere at any time.
o Google BigQuery Service: the Google BigQuery Service is an efficient data analysis service which enables users to analyze Big Data for their business. It also provides large-scale storage which can hold terabytes of data.
o Google Cloud Dataflow: the cloud data flow allows the users to manage consistent parallel data-
processing pipelines. It helps to manage the lifecycle of Google Compute servers of the pipelines
that are being processed.
o Google Cloud Job Discovery: the Google Cloud Platform is also a great source for job search,
career options etc. The advanced search engine and machine learning capabilities make it possible to
find out different ways of finding jobs and business opportunities.
o Google Cloud Test Lab: this service provided by Google allows the users to test their apps with the
help of physical and virtual devices present in the cloud. The various instrumentation tests and
robotic tests allow the users to get more insights about their applications.
o Google Cloud Endpoints: this particular feature helps the users to develop and maintain secured
application program interface running on the Google Cloud Platform.
o Google Cloud Machine Learning Engine: as the name suggests, this element present in Google
Cloud helps the users to develop models and structures which enables the users to concentrate on
Machine learning abilities and framework.
- This browser tab contains the lab instructions. When you start a lab, the lab environment (in this case, the Google Cloud Console user interface) opens in a new browser tab. You will need to switch between the two browser tabs to read the instructions and then perform the tasks.
- Depending on your physical computer setup, you could also move the two tabs to separate monitors.
Accessing the Cloud Console
- Start the lab
o Now that you understand the key features and components of a lab, click Start Lab. It may take a
moment for the Google Cloud environment and credentials to spin up. When the timer starts
counting down and the Start Lab button changes to a red End Lab button, everything is in place and
you're ready to sign in to the Cloud Console.
- Connection Details pane
o Now that your lab instance is up and running, look at the left pane. It contains an Open Google
Console button, credentials (username and password), and a Project ID field.
2. Copy the Username from the Connection Details pane, paste it in the Email or phone field, and click
Next.
3. Copy the Password from the Connection Details pane, paste it in the Password field, and click Next.
4. Click I understand to indicate your acknowledgement of Google's terms of service and privacy
policy.
5. On the Protect your account page, click Confirm.
6. On the Welcome student! page, check Terms of Service to agree to Google Cloud's terms of service,
and click Agree and continue.
o Your project has a name, ID, and number. These identifiers are frequently used when interacting
with Google Cloud services. You are working with one project to get experience with a specific
service or feature of Google Cloud.
- Task 2. View all projects
- You actually have access to more than one Google Cloud project. In fact, in some labs you may be
given more than one project in order to complete the assigned tasks.
1. In the Google Cloud Console title bar, next to your project name, click the drop-down menu.
2. In the Select a project dialog, click All. The resulting list of projects includes a "Qwiklabs
Resources" project.
- Navigation menu and services
o The Google Cloud Console title bar also contains a button labeled with a three-line icon:
o The Dialogflow API allows you to build rich conversational applications (e.g., for Google Assistant)
without having to understand the underlying machine learning and natural language schema.
3. Click Enable.
4. Click the back button in your browser to verify that the API is now enabled.
5. Click Try this API. A new browser tab displays documentation for the Dialogflow API. Explore this
information, and close the tab when you're finished.
6. To return to the main page of the Cloud Console, on the Navigation menu, click Cloud overview.
- Ending your lab
- Now that you're finished with the lab, click End Lab and then click Submit to confirm it.
5. Under Operating system select Windows Server and under Version select Windows Server 2012 R2
Datacenter, and then click Select. Leave all other settings as their defaults.
6. Click Create to create the instance.
o gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and
supports tab-completion.
3. (Optional) You can list the active account name with this command:
o gcloud auth list
Output:
o However the server instance may not yet be ready to accept RDP connections, as it takes a while for
all the OS components to initialize.
o To see whether the server instance is ready for an RDP connection, run the following command at
your Cloud Shell terminal command line:
o gcloud compute instances get-serial-port-output instance-1
o If prompted, type n and press Enter.
o Repeat the command until you see the following in the command output, which tells you that the OS
components have initialized and the Windows Server is ready to accept your RDP connection
(attempt in the next step).
o Instance setup finished. instance-1 is ready to use.
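o The lab then sets a Windows password for the instance before connecting over RDP. One way to do this from Cloud Shell is sketched below; the username is illustrative and the zone falls back to your configured default.
gcloud compute reset-windows-password instance-1 --user admin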
o Add your VM instance's External IP as your Domain. Click Connect to confirm you want to connect.
o Once you are securely logged in to your instance, you may find yourself copying and pasting
commands from the lab manual.
To paste, press CTRL+V (if you are a Mac user, CMD+V will not work).
If you are in a PowerShell window, be sure that you have clicked into the window, or else the paste shortcut won't work.
If you are pasting into PuTTY, right-click.
CONCLUSION:
In this practical, we have learned about the basics of Google Cloud Platform, as well as how to create a Windows instance in the cloud.
PRACTICAL 5
AIM:
Introduction to cloud Shell and gcloud on Google Cloud. Perform Following task:
o Practice using gcloud commands.
o Connect to compute services hosted on Google Cloud.
Process:
Task 1. Configure your environment
- In this section, you'll learn about aspects of the development environment that you can adjust.
1. Set the region to
2. To view the project region setting, run the following command:
3. Set the zone to :
4. To view the project zone setting, run the following command:
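Steps 1-4 take the actual region and zone values from the lab's Connection Details, so they are shown as placeholders here. A minimal sketch of the corresponding commands, assuming the standard gcloud config syntax:
# set and view the default region (REGION is a lab-assigned placeholder)
gcloud config set compute/region REGION
gcloud config get-value compute/region
# set and view the default zone (ZONE is a lab-assigned placeholder)
gcloud config set compute/zone ZONE
gcloud config get-value compute/zone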
3. In Cloud Shell, run the following gcloud command to view details about the project:
o Command details
gcloud compute allows you to manage your Compute Engine resources in a format that's
simpler than the Compute Engine API.
instances create creates a new instance.
gcelab2 is the name of the VM.
The --machine-type flag specifies the machine type as e2-medium.
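Putting those details together, the create command is roughly the following (a sketch; it assumes the default zone was configured in Task 1):
gcloud compute instances create gcelab2 --machine-type e2-medium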
5. List the Firewall rules for the default network where the allow rule matches an ICMP rule:
2. To continue, type Y.
2. Try to access the nginx service running on the gcelab2 virtual machine.
3. Add a tag to the virtual machine:
o gcloud compute instances add-tags gcelab2 --tags http-server,https-server
CONCLUSION:
In this practical, we have learned and practiced how to use Google Cloud Shell and basic gcloud commands.
PRACTICAL 6
AIM:
Perform Cluster orchestration with Google Kubernetes Engine.
Process:
Activate Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
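The lab then creates a GKE cluster, deploys a sample containerized application, and finally cleans it up. A condensed sketch of those steps is shown below; the cluster name and sample image follow the public Kubernetes Engine Qwik Start lab and are illustrative, and a default compute zone is assumed to be configured.
# create a GKE cluster (name illustrative)
gcloud container clusters create lab-cluster --machine-type=e2-medium
# fetch credentials so kubectl can talk to the cluster
gcloud container clusters get-credentials lab-cluster
# deploy and expose a sample application
kubectl create deployment hello-server --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello-server --type=LoadBalancer --port 8080
kubectl get service hello-server
# clean up
kubectl delete service hello-server
gcloud container clusters delete lab-cluster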
CONCLUSION:
In this practical, we have deployed a containerized application to Kubernetes Engine and then deleted it.
PRACTICAL 7
AIM:
Set Up Network and HTTP Load Balancers on Google Cloud Platform.
Process:
Activate Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
- Task 1: Set the default region and zone for all resources
1. In cloud shell write following command to set default zone:
a. gcloud config set compute/zone
1. Create a virtual machine www1 in your default zone; the command mirrors the www2 and www3 commands below:
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
apt-get update
apt-get install apache2 -y
service apache2 restart
echo "<h3>Web Server: www1</h3>" | tee /var/www/html/index.html'
2. Create a virtual machine www2 in your default zone:
gcloud compute instances create www2 \
--zone= \
--tags=network-lb-tag \
--machine-type=e2-medium \
--image-family=debian-11 \
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
apt-get update
apt-get install apache2 -y
service apache2 restart
echo "<h3>Web Server: www2</h3>" | tee /var/www/html/index.html'
3. Create a virtual machine www3 in your default zone:
gcloud compute instances create www3 \
--zone= \
--tags=network-lb-tag \
--machine-type=e2-medium \
--image-family=debian-11 \
--image-project=debian-cloud \
--metadata=startup-script='#!/bin/bash
apt-get update
apt-get install apache2 -y
service apache2 restart
echo "<h3>Web Server: www3</h3>" | tee /var/www/html/index.html'
5. Run the following to list your instances. You'll see their IP addresses in the EXTERNAL_IP
column:
a. gcloud compute instances list
6. Verify that each instance is running with curl, replacing [IP_ADDRESS] with the IP address for
each of your VMs:
a. curl http://[IP_ADDRESS]
3. Add a target pool in the same region as your instances. Run the following to create the target pool
and use the health check, which is required for the service to function:
gcloud compute target-pools create www-pool \
--region --http-health-check basic-check
4. Use curl command to access the external IP address, replacing IP_ADDRESS with an external IP
address from the previous command:
while true; do curl -m1 $IPADDRESS; done
3. Create a firewall rule that allows Google Cloud's health-check ranges to reach the tagged instances:
gcloud compute firewall-rules create fw-allow-health-check \
--network=default \
--action=allow \
--direction=ingress \
--source-ranges=130.211.0.0/22,35.191.0.0/16 \
--target-tags=allow-health-check \
--rules=tcp:80
4. Now that the instances are up and running, set up a global static external IP address that your
customers use to reach your load balancer
gcloud compute addresses create lb-ipv4-1 \
--ip-version=IPV4 \
--global
gcloud compute addresses describe lb-ipv4-1 \
--format="get(address)" \
--global
8. Add your managed instance group as a backend to the backend service (the URL map that routes incoming requests to this backend service is created in a later step):
gcloud compute backend-services add-backend web-backend-service \
--instance-group=lb-backend-group \
--instance-group-zone= \
--global
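For reference, the other commands the lab uses to assemble the HTTP load balancer (the health check and backend service before the step above, then the URL map, target proxy and forwarding rule after it) look roughly like this; the resource names follow the public lab and are illustrative:
gcloud compute health-checks create http http-basic-check --port 80
gcloud compute backend-services create web-backend-service \
--protocol=HTTP --port-name=http --health-checks=http-basic-check --global
gcloud compute url-maps create web-map-http --default-service web-backend-service
gcloud compute target-http-proxies create http-lb-proxy --url-map web-map-http
gcloud compute forwarding-rules create http-content-rule \
--address=lb-ipv4-1 --global --target-http-proxy=http-lb-proxy --ports=80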
o This may take three to five minutes. If you do not connect, wait a minute, and then reload the
browser.
o Your browser should render a page with content showing the name of the instance that served the
page, along with its zone (for example, Page served from: lb-backend-group-xxxx).
CONCLUSION:
In this practical, we have built a network load balancer and an HTTP(S) load balancer, and practiced using an instance template and a managed instance group.
PRACTICAL 8
AIM:
Create and Manage Cloud Resources: Challenge Lab on Google cloud Platform.
Process:
Activate Cloud Shell:
1. Click Activate Cloud Shell at the top of the Google Cloud console.
2. Click Continue.
o In the Cloud Console, on the top left of the screen, select Navigation menu > Compute Engine >
VM Instances:
o Enter details as following to create a VM Instance:
Name for the VM instance : nucleus-jumphost
Region : leave Default Region
Zone : leave Default Zone
Machine Type : f1-micro (Series - N1)
Boot Disk : use the default image type (Debian Linux)
Create
Create a URL map (with --default-service web-server-backend), and target the HTTP proxy to route requests to your URL map.
Create a forwarding rule.
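A sketch of these final steps, assuming the backend service is named web-server-backend as above; the URL map, proxy and rule names here are hypothetical:
# URL map pointing at the backend service
gcloud compute url-maps create web-server-map --default-service web-server-backend
# target HTTP proxy that routes requests to the URL map
gcloud compute target-http-proxies create http-lb-proxy --url-map web-server-map
# global forwarding rule on port 80
gcloud compute forwarding-rules create http-content-rule --global \
--target-http-proxy=http-lb-proxy --ports=80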
CONCLUSION:
In this practical, we have learnt how to create an instance and a Kubernetes cluster and how to set up an HTTP load balancer.
PRACTICAL 9
AIM:
Create and Setup Amazon Elastic Compute Cloud (EC2) on Amazon cloud Platform.
Create and setup monitoring service for AWS cloud resources and the applications you run on AWS
(Amazon CloudWatch).
Create an AWS Identity and Access Management (IAM) group and user, attach a policy and add a user
to a group.
Process:
- Task 1: Create and Setup Amazon Elastic Compute Cloud (EC2) on Amazon cloud Platform.
1. First, login into your AWS account and click on “services” present on the left of the AWS
management console, i.e. the primary screen. And from the drop-down menu of options, tap on
“EC2”. Here is the image attached to refer to.
2. In a while, the EC2 console will be loaded onto your screen. Once it is done, from the list of options
on the left in the navigation pane, click on “Instances”. Please refer to the image attached ahead for a
better understanding.
3. A new fresh screen will be loaded in a while. In the right corner, there will be an orange box named
“Launch Instance”. Click on that box and wait. Here is the image to refer to.
4. Now, the process of launching an EC2 instance will start. The next screen will display you a number
of options to choose your AMI(Amazon Machine Image) from it. And horizontally, on the menu bar
you will see, there is a 7-step procedure written to be followed for successfully launching an
instance. I have chosen “Amazon Linux 2 AMI” as my AMI. And then go ahead, click “Next”. Refer
to the image for any confusion.
5. Now, comes sub-step-2 out of 7-steps of creating the instance, i.e. “Choose Instance Type”. I have
chosen “t2 micro” as my instance type because I am a free tier user and this instance type is eligible
to use for us. Then click “Next”. Refer to the image attached ahead for better understanding.
6. Further comes sub-step 3 out of the 7-step process of creating the instance, i.e. “Configure Instance”.
Here we will confirm all the configurations we need for our EC2. By default, the configurations are
filled, we just confirm them or alter them as per our needs and click “Next” to proceed. Here’s the
image for better understanding and resolving confusion.
7. Next comes sub step 4 out of the 7-step process of creating the instance, i.e. “Add Storage”. Here we
will look at the pre-defined storage configurations and modify them if they are not aligned as per our
requirements. Then click “Next”. Here’s the image of the storage window attached ahead to
understand better.
8. Next comes sub-step 5 out of the 7-step process of creating the instance, i.e. “Add Tags”. Here we
will just click “Next” and proceed ahead. Here’s the image to refer to.
9. Now we will complete the 6th sub step out of the 7-step process of creating the instance, which is
“Configure Security Group”. In Security Group, we have to give a group name and a group
description, followed by the number and type of ports to open and the source type. In order to
resolve confusion please refer to the image attached ahead.
10. Now we will complete the last step of the process of creating the instance, which is “Review”. In
review, we will finally launch the instance and then a new dialog box will appear to ask for the “Key
Pair”. Key pairs are used for authentication of the user when connecting to your EC2. We will be
given two options: choose an existing key pair, or create a new one and download it before launching. It is not necessary to create a new key pair every time; you can use a previous one as well. Here is the image of the window attached.
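The same launch can also be scripted with the AWS CLI. The sketch below assumes the CLI is configured; the AMI ID, key pair and security group are placeholders, since they are account- and region-specific:
# launch one t2.micro instance from an Amazon Linux 2 AMI (IDs are placeholders)
aws ec2 run-instances \
--image-id ami-xxxxxxxx \
--instance-type t2.micro \
--key-name my-key-pair \
--security-group-ids sg-xxxxxxxx \
--count 1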
- Task 2: Create and setup monitoring service for AWS cloud resources and the applications you
run on AWS (Amazon CloudWatch).
o Notifying the website management team when the instance on which the website is hosted is under heavy load:
Whenever the CPU utilization of the instance (on which the website is hosted) goes above 80%, a CloudWatch alarm is triggered. This alarm then activates the SNS topic, which sends the alert email to the attached subscribers.
1. Let us assume that you have already launched an instance with the name tag ‘instance’.
3. You will be directed to this dashboard. Now specify the name and display name.
8. Select Email as protocol and specify the email address of subscribers in Endpoint. Click on create
the subscription. Now Go to the mailbox of the specified email id and click on Subscription
confirmed.
9. Go to the cloudwatch dashboard on the AWS management console. Click on Metrics in the left pane.
12. This dashboard shows the components of Amazon Cloudwatch such as Namespace, Metric Name,
Statistics, etc
13. Select the "Greater" threshold condition and specify the threshold value (i.e. 80). Click on Next.
14. Click on Select an existing SNS topic, also mention the name of the SNS topic you created now.
15. Specify the name of alarm and description which is completely optional. Click on Next and then
click on Create alarm.
16. You can see the graph which notifies whenever CPU utilization goes above 80%.
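The same alarm can also be created with the AWS CLI. A sketch, assuming the CLI is configured; the topic name, e-mail address and instance ID are placeholders:
# SNS topic and e-mail subscription for the alert
aws sns create-topic --name high-cpu-alert
aws sns subscribe --topic-arn <topic-arn> --protocol email --notification-endpoint admin@example.com
# alarm that fires when average CPU utilization of the instance exceeds 80%
aws cloudwatch put-metric-alarm \
--alarm-name cpu-above-80 \
--namespace AWS/EC2 --metric-name CPUUtilization \
--dimensions Name=InstanceId,Value=<instance-id> \
--statistic Average --period 300 --evaluation-periods 1 \
--threshold 80 --comparison-operator GreaterThanThreshold \
--alarm-actions <topic-arn>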
- Task 3: Create an AWS Identity and Access Management (IAM) group and user, attach a policy
and add a user to a group.
o Select user →Click to Add user, provide a username, and select any one or both access
type(programmatic access and AWS Management Console access), select auto-generated
password or custom password(give your own password).
o Click on Next: Tags provide key and value to your user which will be helpful in searching when
you have so many IAM users.
o Click on Reviews check all the configurations and make changes if needed.
o Click on Create user and your IAM user is successfully created; as you have chosen programmatic access, an Access key ID and a secret access key are generated.
o Give group name →next step. Give permissions /attach policies to the group.
o Click on the next step (check group configuration and make changes if needed).
o By default, an IAM group does not have any IAM users; we have to add a user to it (and remove the user if required).
o To add IAM user in IAM group. Inside your IAM group that you have created →go to Users
→click on Add Users to Group → click to Add User. User successfully added.
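Equivalently, the group, user, policy attachment and group membership can be created from the AWS CLI. A sketch with hypothetical names and an example managed policy:
aws iam create-group --group-name demo-group
aws iam create-user --user-name demo-user
aws iam attach-group-policy --group-name demo-group \
--policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
aws iam add-user-to-group --group-name demo-group --user-name demo-user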
CONCLUSION:
In this practical, we have learnt how to create and set up Amazon Elastic Compute Cloud (EC2), how to create and set up a monitoring service for AWS cloud resources and the applications we run on AWS (Amazon CloudWatch), and how to create an AWS Identity and Access Management (IAM) group and user, attach a policy, and add a user to a group.
PRACTICAL 10
AIM:
Create and setup Amazon Simple Storage Service (Amazon S3) Block Public Access on Amazon Cloud
Platform.
THEORY:
- Amazon Web Services, Inc. (AWS) is a subsidiary of Amazon providing on-demand cloud computing
platforms and APIs to individuals, companies, and governments, on a metered pay-as-you-go basis.
These cloud computing web services provide a variety of basic abstract technical infrastructure and
distributed computing building blocks and tools.
- One of these services is Amazon Elastic Compute Cloud (EC2), which allows users to have at their
disposal a virtual cluster of computers, available all the time, through the Internet. AWS's virtual
computers emulate most of the attributes of a real computer, including hardware central processing units
(CPUs) and graphics processing units (GPUs) for processing; local/RAM memory; hard-disk/SSD
storage; a choice of operating systems; networking; and pre-loaded application software such as web
servers, databases, and customer relationship management (CRM).
- The AWS technology is implemented at server farms throughout the world, and maintained by the
Amazon subsidiary.
- Fees are based on a combination of usage (known as a "Pay-as-you-go" model), hardware, operating system, software, and networking features chosen by the subscriber, as well as required availability, redundancy, security, and service options. Subscribers can pay for a single virtual AWS computer, a dedicated physical computer, or clusters of either.
- As part of the subscription agreement, Amazon provides security for subscribers' systems. AWS
operates from many global geographical regions including 6 in North America.
- Amazon markets AWS to subscribers as a way of obtaining large scale computing capacity more
quickly and cheaply than building an actual physical server farm.
- All services are billed based on usage, but each service measures usage in varying ways. As of 2017,
AWS owns 33% of all cloud (IaaS, PaaS) while the next two competitors Microsoft Azure and Google
Cloud have 18%, and 9% respectively, according to Synergy Group.
PRACTICAL:
- Login to your AWS account and go to products and select Amazon Simple Storage Service (S3). Before
you begin hosting your awesome static website out of S3, you need a bucket first. For this blog post, it is
critical that your bucket has the same name as your domain name.
- If your website domain is www.my-awesome-site.com, then your bucket name must be www.my-
awesome-site.com.
- The reasoning for this has to do with how requests are routed to S3. The request comes into the bucket,
and then S3 uses the Host header in the request to route to the appropriate bucket.
- Alright, you have your bucket. It has the same name as your domain name, yes? Time to configure the
bucket for static website hosting.
- Navigate to S3 in the AWS Console.
o Click into your bucket.
o Click the “Properties” section.
o Click the “Static website hosting” option.
o Select “Use this bucket to host a website”.
- Your bucket is configured for static website hosting, and you now have an S3 website url like this
http://www.my-awesome-site.com.s3-website-us-east-1.amazonaws.com/.
By default, any new buckets created in an AWS account deny you the ability to add a public access
bucket policy. This is in response to the recent leaky buckets where private information has been
exposed to bad actors. However, for our use case, we need a public access bucket policy. To allow this
you must complete the following steps before adding your bucket policy.
o Click into your bucket.
o Select the “Permissions” tab at the top.
o Under “Public Access Settings” we want to click “Edit”.
o Change “Block new public bucket policies”, “Block public and cross-account access if bucket
has public policies”, and “Block new public ACLs and uploading public objects” to be false and
Save.
- Now you must update the Bucket Policy of your bucket to have public read access to anyone in the
world. The steps to update the policy of your bucket in the AWS Console are as follows:
o Navigate to S3 in the AWS Console.
o Click into your bucket.
o Click the “Permissions” section.
o Select “Bucket Policy”.
o Add the following Bucket Policy and then Save
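- As a reference, a typical public-read bucket policy of the kind this step refers to (using the example bucket name from earlier) can be applied with the AWS CLI roughly as follows:
aws s3api put-bucket-policy --bucket www.my-awesome-site.com --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::www.my-awesome-site.com/*"
  }]
}'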
- Remember S3 is a flat object store, which means each object in the bucket represents a key without any
hierarchy. While the AWS S3 Console makes you believe there is a directory structure, there isn’t.
- Everything stored in S3 is keys with prefixes.
CONCLUSION:
In this practical, we have learnt about AWS and hosted our static website on AWS using S3 service.
PRACTICAL 11
AIM:
Create and deploy project using AWS Amplify Hosting Service of AWS.
THEORY:
- Amazon Web Services are some of the most useful products we have access to. One such service that is
becoming increasingly popular as days go by is AWS Amplify. It was released in 2018 and it runs on
Amazon’s cloud infrastructure. It is in direct competition with Firebase, but there are features that set
them apart.
- Why is it needed?
o User experience on any application is the most important aspect that needs to be taken care of.
AWS Amplify helps unify user experience across platforms such as web and mobile. This makes
it easier for a user to choose which one would they be more comfortable with. It is useful in case
of front end development as it helps in building and deployment. Many who use it claim that it
actually makes full-stack development a lot easier with its scalability.
- Main features:
o Can be used for authenticating users which are powered by Amazon Cognito.
o With help from Amazon AppSync and Amazon S3, it can securely store and sync data
seamlessly between applications.
o As it is serverless, making changes to any back-end related cases has become simpler. Hence,
less time is spent on maintaining and configuring back-end features.
o It also allows for offline synchronization.
o It promotes faster app development.
o It is useful for implementing Machine Learning and AI-related requirements as it is powered by
Amazon Machine learning services.
o It is useful for continuous deployment.
o Various AWS services are used for the various functionalities AWS Amplify offers. The main components are libraries, UI components, and the CLI toolchain. It also provides static web hosting using AWS Amplify Console.
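- For projects driven from the command line, the Amplify CLI covers a similar workflow to the console flow in Task 1 below. A brief sketch, assuming Node.js is installed:
npm install -g @aws-amplify/cli
amplify init
amplify add hosting
amplify publish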
- Task 1: Log in to the AWS Amplify Console and choose Get Started under Deploy.
o Connect a branch from your GitHub, Bitbucket, GitLab, or AWS Code Commit repository.
Connecting your repository allows Amplify to deploy updates on every code commit to a branch.
CONCLUSION:
In this practical, we have learnt how to Create and deploy project using AWS Amplify Hosting Service
of AWS.
PRACTICAL 12
AIM:
Simulating networks using iFogSim.
THEORY:
- iFogSim is a Java-based API that inherits the established API of CloudSim to manage its underlying discrete event-based simulation. It also utilizes the API of CloudSimSDN for relevant network-related workload handling.
- The iFogSim simulation toolkit is another simulator used for the implementation of fog computing-related research problems.
- This practical follows the simulation-based approach of iFogSim and leverages the various benefits it provides.
- Installing iFogSim
o The iFogSim library can be downloaded from the URL https://github.com/Cloudslab/iFogSim.
This library is written in Java, and therefore the Java Development Kit (JDK) will be required to
customise and work with the toolkit.
o After downloading the toolkit as a Zip archive, it is extracted and a folder iFogSim-master is created. The iFogSim library can be executed in any Java-based integrated development environment (IDE) like Eclipse, NetBeans, JCreator, JDeveloper, jGRASP, BlueJ, IntelliJ IDEA or JBuilder.
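o Equivalently, the library can be fetched with git instead of downloading the Zip archive (assuming git and a JDK are installed):
git clone https://github.com/Cloudslab/iFogSim.git
cd iFogSim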
o In order to integrate iFogSim with the Eclipse IDE, we need to create a new project in the IDE.
o Once the library is set up, the directory structure of iFogSim can be viewed in the Eclipse IDE in
Project Name -> src.
o There are numerous packages with Java code for different implementations of fog computing,
IoT and edge computing.
o To work with iFogSim in the graphical user interface (GUI) mode, there is a file
called FogGUI.java in org.fog.gui.example. This file can be directly executed in the IDE, and
there are different cloud and fog components that can be imported in the simulation working area
as shown in Figure 3.
o In the Fog Topology Creator, there is a Graph menu which provides the option to import a topology.
CONCLUSION:
In this practical, we have learnt how to simulate networks using iFogSim.
PRACTICAL 13
AIM:
A comparative study of Docker Engine on Windows Server vs the Linux platform: compare the feature sets and implementations of Docker on Windows and Linux. Build and run your first Docker Windows Server container: walk through installing Docker on Windows 10, building a Docker image and running a Windows container.
THEORY:
- What does it mean to Windows community?
o It means that, from Windows Server 2016 onwards, Windows natively supports Docker containers and offers two deployment options – Windows Server Containers and Hyper-V Containers, which offer an additional level of isolation for multi-tenant environments. The extensive partnership integrates across the Microsoft portfolio of developer tools, operating systems and cloud infrastructure, including:
Windows Server 2016
Hyper-V
Visual Studio
Microsoft Azure
- What does it mean to Linux enthusiasts?
- In case you are a Linux enthusiast like me, you must be curious to know how differently Docker Engine works on the Windows Server platform in comparison to the Linux platform. In this post, I am going to spend a considerable amount of time talking about the architectural differences, the CLI which works under both platforms, and further details about Dockerfiles, Docker Compose and the state of Docker Swarm under the Windows platform.
- Let us first talk about the architectural differences between Windows containers and Linux containers.
- Looking at Docker Engine on Linux architecture, sitting on the top are CLI tools like Docker compose,
Docker Client CLI, Docker Registry etc., which talk to the Docker REST API. Users communicate and
interact with the Docker Engine and, in turn, the engine communicates with containerd. containerd spins up
runC or other OCI compliant run time to run containers. At the bottom of the architecture, there are
underlying kernel features like namespaces which provides isolation and control groups etc. which
implements resource accounting and limiting, providing many useful metrics, but they also help ensure
that each container gets its fair share of memory, CPU, disk I/O; and, more importantly, that a single
container cannot bring the system down by exhausting one of those resources.
o Under Windows, it's a slightly different story. The architecture looks the same for most of the top-level components, like the same Remote API and the same working tools (Docker Compose, Swarm), but as we move down, the architecture looks different. In case you are new to the Windows kernel, the kernel within Windows is somewhat different from that of Linux because Microsoft takes a
Kernel within the Windows is somewhat different than that of Linux because Microsoft takes
somewhat different approach to the Kernel’s design. The term “Kernel mode” in Microsoft
language refers to not only the Kernel itself but the HAL(hal.dll) and various system services as
well. Various managers for Objects, Processes, Memory, Security, Cache, Plug and Play (PnP), Power, Configuration and I/O, collectively called the Windows Executive (ntoskrnl.exe), are available.
There is no kernel feature specifically called namespace and cgroup on Windows. Instead,
Microsoft team came up with new version of Windows Server 2016 introducing “Compute
Service Layer” at OS level which provides namespace, resource control and UFS like
capabilities. Also, as you see below, there is NO containerd and runC concept available under
Windows Platform. Compute Service Layer provides public interface to container and does the
responsibility of managing the containers like starting and stopping containers but it doesn’t
maintain the state as such. In short, it replaces containerd on windows and abstracts low level
capabilities which the kernel provides.
o You need Windows Server 2016 Evaluation build 14393 or later to try the newer Docker Engine on Win2k16. If you try to follow the usual Docker installation process on your old Windows 2016 TP5 system, you will get an error.
- Start-Service Docker
o Now you can search for plenty of Windows Dockerized applications using the command below:
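A sketch of such a search with the standard Docker CLI (the search term is an assumption):
docker search microsoft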
- Important Points:
1. Linux containers don't work on the Windows platform (see below).
FROM microsoft/windowsservercore
LABEL Description="MySql" Vendor="Oracle" Version="5.6.29"
RUN powershell -Command \
$ErrorActionPreference = 'Stop'; \
Invoke-WebRequest -Method Get -Uri https://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.29-winx64.zip -OutFile c:\mysql.zip ; \
Expand-Archive -Path c:\mysql.zip -DestinationPath c:\ ; \
Remove-Item c:\mysql.zip -Force
RUN SETX /M Path %path%;C:\mysql-5.6.29-winx64\bin
RUN powershell -Command \
$ErrorActionPreference = 'Stop'; \
mysqld.exe --install ; \
Start-Service mysql ; \
Stop-Service mysql ; \
Start-Service mysql
RUN type NUL > C:\mysql-5.6.29-winx64\bin\foo.mysql
RUN echo UPDATE user SET Password=PASSWORD('mysql123') WHERE User='root'; FLUSH PRIVILEGES; > C:\mysql-5.6.29-winx64\bin\foo.mysql
RUN mysql -u root mysql < C:\mysql-5.6.29-winx64\bin\foo.mysql
- This just brings up the MySQL image perfectly. I had my own version of MySQL Dockerized image
available which is still under progress. I still need to populate the Docker image details.
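- A minimal sketch of building and running the image defined above (the image and container names are illustrative):
docker build -t mysql-win .
docker run -d --name mysql-demo mysql-win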
CONCLUSION:
In this practical, we have learnt how Docker Engine works on the Windows and Linux operating systems, as well as installing Docker, building a Docker image and running a Windows container.