AWS MINI Project
LAB-1: IAM HANDS-ON
To secure our AWS account we need to set up "Multi-Factor Authentication (MFA)".
Follow the steps below to configure MFA.
This is the dashboard of Identity and Access Management (IAM); there is no MFA configured for this account yet.
I am setting up MFA using an authenticator app. First, download the Twilio Authy application on a mobile device, create an account in the app, and scan the QR code shown above. After scanning the QR code, the app generates one-time passwords (OTPs); enter two consecutive OTPs to finish the setup. Now the account is secured using the authenticator app.
We can see my security credentials, such as the MFA type, creation date, and identifier. Now sign out of the account and try to log in once again.
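For reference, the same setup can be scripted with the AWS CLI for an IAM user (a sketch; the device and user names are placeholders). The two authentication codes are two consecutive OTPs from the app, which is why the console also asks for two:
# create a virtual MFA device and save its QR code locally (names are hypothetical)
aws iam create-virtual-mfa-device --virtual-mfa-device-name Bhargavi-mfa \
    --outfile QRCode.png --bootstrap-method QRCodePNG
# bind the device to the user with two consecutive OTPs from the authenticator app
aws iam enable-mfa-device --user-name Bhargavi-user \
    --serial-number arn:aws:iam::123456789012:mfa/Bhargavi-mfa \
    --authentication-code1 123456 --authentication-code2 654321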
While creating the user I checked the user's permissions; by default I selected only AWS Management Console access, so the user does not get any service permissions.
With only AWS Management Console access assigned, we cannot use any other service; we get an error saying we do not have permission to access the service. For example, we cannot access or launch an EC2 instance without granting EC2 access permission.
I am adding permissions to the user by attaching a policy directly: a policy named "AmazonEC2FullAccess".
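The same attachment can be done from the AWS CLI (a sketch; the user name is the one from this lab, and the policy ARN is AWS's managed AmazonEC2FullAccess policy):
# create the IAM user (if not already created from the console)
aws iam create-user --user-name Bhargavi-user
# attach the managed EC2 full-access policy directly to the user
aws iam attach-user-policy --user-name Bhargavi-user \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess
# verify which policies are now attached
aws iam list-attached-user-policies --user-name Bhargavi-user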
Here I attached the "AdministratorAccess" policy to the user named Bhargavi-user. Now we will check the services.
After assigning AdministratorAccess to the user, I am able to access other services such as Auto Scaling groups, S3, EC2 Image Builder, and EC2.
The conclusion of this lab: we can create a user in the IAM service and access services by signing in with that user's credentials, but the IAM user gets its permissions (policies) from the root user. The root user has the authority to attach and remove policies.
LAB-2: BILLING AND COST MANAGEMENT:
We are billed whenever we use AWS services, so to limit the budget we use billing alarms. A billing alarm triggers whenever we cross the limit we set in Budgets. Here we use Billing and Cost Management to set up the billing alarm.
This is the home page of Billing and Cost Management; go to Budgets to set a billing alarm.
Here I am creating a budget from a template by selecting particular configurations. In this step I am using the Zero spend budget, which notifies us once spending exceeds $0.01, i.e., anything above the AWS Free Tier.
Creating another budget alarm using a template. Here we set a billing alarm for monthly usage using the Monthly cost budget.
A monthly cost budget of $2.50 with the recipient email. We get a notification whenever we cross half or more of the monthly cost budget.
This is the setup with the Monthly Cost Budget and My Zero-Spend Budget. It shows "OK" for the monthly cost budget because we did not cross the $2.50 amount; we did cross the zero-spend budget, so it shows "Exceeded".
Now, to check where I was billed, I select the Bills option. Inside Bills we can see each service along with the amount we spent.
I was billed $0.80 for EC2 because of using t2.medium and t2.large, and $0.07 for VPC, for a total bill of $1.02.
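The same per-service breakdown can be pulled with the Cost Explorer API (a sketch; the time period is illustrative):
aws ce get-cost-and-usage \
    --time-period Start=2024-06-01,End=2024-07-01 \
    --granularity MONTHLY \
    --metrics UnblendedCost \
    --group-by Type=DIMENSION,Key=SERVICE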
LAB-3: S3 BUCKET:
S3 (Simple Storage Service) is used to store large amounts of data using buckets and objects.
Bucket:
We create buckets to store data. A bucket name must be globally unique so that AWS can identify the bucket whenever we request its data. Inside buckets, data is stored as objects (files).
We store data in the form of files and folders. We can upload files/folders directly from the local system, or create folders inside the bucket.
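Before the step-by-step walkthrough, here is a CLI sketch of these basics (the bucket name is hypothetical and must be globally unique):
# create a bucket (the name below is illustrative)
aws s3 mb s3://bhargavi-demo-bucket-2024
# upload a local file into the bucket as an object
aws s3 cp joel.txt s3://bhargavi-demo-bucket-2024/
# list the objects stored in the bucket
aws s3 ls s3://bhargavi-demo-bucket-2024/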
Step 1: Creating an S3 bucket
We can't access the object because we don't have access permissions: public access is blocked.
Enabling the Access Control Lists (ACLs).
If versioning is disabled and data is deleted, whether intentionally or by mistake, it is lost. Versioning acts as a backup of the data stored in the S3 bucket: after enabling versioning, if any data is deleted we can restore it.
If we make changes to files locally, we need to upload the files/folders to S3 again; they are not updated automatically.
By enabling versioning we can protect data from unintended deletion.
We can see the new "Show versions" option, which means versioning is enabled.
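A sketch of enabling and inspecting versioning from the CLI (same hypothetical bucket name as above):
# turn versioning on for the bucket
aws s3api put-bucket-versioning --bucket bhargavi-demo-bucket-2024 \
    --versioning-configuration Status=Enabled
# list every stored version of every object, including delete markers
aws s3api list-object-versions --bucket bhargavi-demo-bucket-2024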
Created a text file with some content and uploaded it to the S3 bucket.
Updated and re-uploaded the text file; we can see the changes by comparing the image above with this one.
Step 3:
Here the "Show versions" button confirms that versioning is enabled, and I have some objects. Before selecting Show versions I deleted the joel.txt file, but it is not permanently deleted; we can verify this by viewing the object versions. The file is still present, but if we delete the joel.txt version itself (type txt) it will be permanently deleted.
By deleting the file's "Delete marker" we can restore the object.
LAB-4: EC2 INSTANCE
Elastic Compute Cloud (EC2) is a service that provides computational power (virtual servers on which to run applications).
It provides control over the virtual server: storage, security configuration, and networking settings.
To access an EC2 instance we need to secure it with inbound and outbound rules.
Inbound: traffic coming into your EC2 instance from external sources.
Outbound: traffic going from your EC2 instance to outside destinations.
Creating an EC2 instance with Ubuntu; select t2.micro as the instance type. Allow SSH (port 22) to connect with PuTTY, and keep the default storage of 8 GB.
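The equivalent launch from the CLI, as a sketch (the AMI ID, key pair name, and security group ID are placeholders you would substitute):
# launch one t2.micro Ubuntu instance (all IDs below are placeholders)
aws ec2 run-instances --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t2.micro --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx --count 1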
We can see that the instance created above is in the running state.
Connecting with PuTTY using the public DNS and selecting the private key file as the credentials. After accessing the machine through PuTTY with the public IP and the private key pair, I switched to the root user using the sudo su - command.
Unable to connect to the instance from PuTTY and the command prompt because the SSH rule's CIDR was set to 0.0.0.0/28. A /28 range matches only 16 IP addresses, so almost every source IP is rejected; to allow SSH from anywhere the rule needs 0.0.0.0/0.
LAB-6: VOLUMES AND SNAPSHOTS:
Volumes are used to store data. By default, when we launch an instance we get an 8 GB root volume, but it is deleted when we terminate the instance, so the data inside it is lost. We create a separate EBS volume to persist data even after the instance is terminated; it is not deleted until we delete it ourselves, so it can serve as a backup.
Volumes are Availability Zone specific: we can attach a volume only to an instance in the same Availability Zone. We can detach a volume from one instance and immediately attach it to another instance in the same Availability Zone.
After creating an EBS volume, we attach it to our instance so that data can be stored on it.
Here we can see two volumes: the 8 GB default volume created with the instance, and the 5 GB volume created manually to keep data even after instance termination. The default volume and the created volume are in the same Availability Zone (us-east-1a).
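A sketch of the create-and-attach step from the CLI (the volume and instance IDs are placeholders; the size and AZ match this lab):
# create a 5 GB volume in the same AZ as the instance
aws ec2 create-volume --availability-zone us-east-1a --size 5
# attach it to the instance as device /dev/xvdi (IDs are placeholders)
aws ec2 attach-volume --volume-id vol-xxxxxxxx \
    --instance-id i-xxxxxxxx --device /dev/xvdi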
Check the volumes on the instance using list block devices (lsblk) and disk free (df).
<df -h> shows all mounted file systems with human-readable sizes.
<lsblk> shows all mounted and unmounted volumes, with each device path and size.
We can find the 5 GB volume and its device path, xvdi, in the output.
<mkfs -t ext4 /dev/xvdi> creates a new file system on the specified disk partition or volume. Here we make a file system of type ext4 (fourth extended file system) on the device path /dev/xvdi.
Create a folder to mount the volume on, then use the command below to mount the volume on that folder so it can be used as a regular folder to store files and data.
<mount /dev/xvdi /volume> (device path, then the mount-point folder)
Go inside the created folder and create files; they will be stored on the EBS volume.
We can check the change in the volume layout using list block devices and disk free.
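Put together, the whole sequence on the instance looks like this (a sketch using the device and folder names from above):
lsblk                          # confirm the new 5 GB device /dev/xvdi is visible
sudo mkfs -t ext4 /dev/xvdi    # create an ext4 file system on the volume
sudo mkdir /volume             # folder to serve as the mount point
sudo mount /dev/xvdi /volume   # attach the volume to the folder
df -h                          # verify the mount and its size
cd /volume && touch file1      # files created here live on the EBS volume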
A snapshot is a copy/backup of an EBS volume. We can restore a snapshot to create a new EBS volume in the same or another region, which is useful for disaster recovery, data migration, or duplicating environments.
Create a snapshot from the previously created volume so that we can recreate the volume in the same or a different Availability Zone.
Creating a new volume from the snapshot with 8 GB of storage. Creating a volume from the snapshot lets us change the Availability Zone of the volume and attach it to an instance in a different Availability Zone.
We can see that the volume created from the snapshot is attached to our instance.
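A CLI sketch of the snapshot round-trip (the IDs are placeholders):
# snapshot the 5 GB volume created earlier
aws ec2 create-snapshot --volume-id vol-xxxxxxxx \
    --description "backup of EBS data volume"
# restore the snapshot as a new volume in a different AZ
aws ec2 create-volume --snapshot-id snap-xxxxxxxx \
    --availability-zone us-east-1b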
LAB-7: AMI
An Amazon Machine Image (AMI) is a pre-configured virtual image that contains all the components necessary to launch an EC2 instance. For example, if we want to launch 10 EC2 instances with the same configuration, we can create an AMI from the first instance and launch the other instances from it. The AMI holds the configuration (operating system, installed software, storage settings) of that instance.
Here we can see the created AMI; by selecting the previously created image we can launch any number of instances that require the same configuration, reducing the time needed to create them.
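A sketch of the same with the CLI (the instance ID, AMI name, and AMI ID are placeholders):
# create an AMI from the configured instance
aws ec2 create-image --instance-id i-xxxxxxxx --name "web-server-ami"
# launch new instances from that AMI
aws ec2 run-instances --image-id ami-xxxxxxxx \
    --instance-type t2.micro --count 10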
LAB-8: LOAD BALANCER:
A load balancer distributes incoming traffic equally across multiple servers. We use a load balancer to reduce the load on each server, so that all servers remain available and server failure is prevented.
Create two EC2 instances: install the Nginx server on one machine and Apache (apache2) on the other. Create the two instances allowing SSH and HTTP. Install Apache on one server and Nginx on the other, then start both services.
Next, copy the public IP addresses of these instances and paste them in the browser to access the applications.
Install Apache: apt install apache2 -y
Start Apache: systemctl start apache2
Install Nginx: apt install nginx -y
Start Nginx: systemctl start nginx
Access both servers over the browser and check whether their web pages are visible.
Here we can see the Apache home page by browsing to the instance's public IP address. Without allowing the HTTP port in the security group we cannot access these web pages.
Here we can see the Nginx home page by browsing to the public IP of the other instance.
To create a load balancer, we first need to register the targets: include the instances created above in a target group.
Here we can see the two instances registered as targets in the target group.
Go to the Load Balancers tab on the left side and create a load balancer named AppLoad.
We need to select Availability Zones matching the instances' Availability Zones; select at least two.
While creating the load balancer we specify the targets to which the load should be distributed: I am selecting the target group created above, "LoadTarget".
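A CLI sketch of the same setup (the VPC, subnet, instance IDs, and ARNs are placeholders):
# create the target group and register both web servers
aws elbv2 create-target-group --name LoadTarget \
    --protocol HTTP --port 80 --vpc-id vpc-xxxxxxxx
aws elbv2 register-targets --target-group-arn <target-group-arn> \
    --targets Id=i-aaaaaaaa Id=i-bbbbbbbb
# create the load balancer across at least two subnets (one per AZ)
aws elbv2 create-load-balancer --name AppLoad \
    --subnets subnet-aaaaaaaa subnet-bbbbbbbb
# forward incoming HTTP traffic to the target group
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> \
    --protocol HTTP --port 80 \
    --default-actions Type=forward,TargetGroupArn=<target-group-arn>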
We can see the created load balancer with its URL (DNS name) and target group. To check whether the load is being distributed between the servers, open the URL and generate load by refreshing the page.
Access the load balancer link in the browser and hit it a couple of times. Check whether both web pages, Nginx and Apache, are visible.
Here we can see the two web pages through the load balancer URL by refreshing the page. Next, launching two static websites on the two servers.
Fetch any static website template onto the server using
wget <url of your static website>
We get a zip or tar archive; extract it using
tar -xvf <archive name>
Now go inside the extracted folder to find the site files, and move them to /var/www/html so the web server can serve the website.
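As a sketch, with a hypothetical template URL and archive name:
wget https://example.com/site-template.tar.gz   # hypothetical URL of a static site
tar -xvf site-template.tar.gz                   # extract the archive
sudo mv site-template/* /var/www/html/          # serve the files from the web root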
When we paste the load balancer URL in the browser, we can see the two static websites alternate as we refresh the page.
LAB-9: AUTO SCALING GROUP:
An Auto Scaling group is a feature in cloud computing that automatically adjusts the number of running servers (instances) based on demand.
• If traffic increases, it automatically adds more servers to handle the load.
• If traffic decreases, it removes unnecessary servers to save costs.
Creating the Auto Scaling group with a desired capacity of 1, a minimum capacity of 1, and a maximum capacity of 3.
Created the Auto Scaling group by selecting the launch template created above, with 1 running instance (desired capacity) and at most 3 instances allowed at any time (maximum capacity), in the us-east-1a Availability Zone.
Now try changing the capacity settings and watch a new instance get created.
One EC2 instance is launched automatically by the Auto Scaling group. If we terminate this instance, we can see another instance being launched by the Auto Scaling group. This means at least one server is always available, whether there is heavy traffic or none.
Here we changed the Auto Scaling group to a minimum capacity of 2 instances and a maximum capacity of 4, and we can see the two running instances.
Two instances are running after changing the Auto Scaling group's desired capacity to 2 instances.
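The capacity change can also be made from the CLI, as a sketch (the group name is illustrative):
# raise the group's bounds and target size; the group then launches
# or terminates instances until the desired capacity is met
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --min-size 2 --max-size 4 --desired-capacity 2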
LAB-10: RDS
RDS (Relational Database Service) is a cloud service that makes it easy to set up, operate, and
manage a database.
• It automatically takes care of backups, security, and updates.
• You don’t have to worry about the server setup or maintenance.
• It supports popular databases like MySQL, PostgreSQL, Oracle, and SQL Server.
1. Provision an RDS instance
Launch an EC2 instance, allowing the MySQL port 3306 in the security group.
Creating a database with the db.t3.micro instance class and allowing the EC2 instance to connect to the RDS database.
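A sketch of provisioning from the CLI (the identifier, username, and password are placeholders):
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine mysql \
    --master-username admin \
    --master-user-password 'choose-a-password' \
    --allocated-storage 20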
Once the database is in the available state, we connect to it from our instance using the database endpoint. The endpoint is like the address of your database; using it, your app can send queries to the database over the network.
Here I connected to the database from the EC2 instance using the following command:
mysql -h <endpoint of database> -u <username> -p
-h specifies the host address
-u specifies the username
-p prompts for the password
Next, enter the password you set while creating the database. If the password is correct, you are connected and see the mysql prompt.
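For example (the endpoint and username are hypothetical; the queries just verify the connection):
mysql -h mydb.abcdefgh1234.us-east-1.rds.amazonaws.com -u admin -p
# once at the mysql> prompt:
SHOW DATABASES;             -- list the databases on the RDS instance
CREATE DATABASE labdb;      -- create a database to work in
USE labdb;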
MINI PROJECT 2
LAB-1: CREATING EC2 INSTANCE
1. Log in to the AWS Console
2. Create a server with Amazon Linux 2/AMI/RHEL/Ubuntu
3. Connect to the server only with PuTTY/Git Bash
Created an EC2 instance with Amazon Linux and enabled SSH in the security group to connect with PuTTY. Connected to the instance with PuTTY using the public IP and private key pair, then switched to the root user with the sudo ("super user do") command.
This shows the instance created with the name Amazon-linux; it was terminated after the work was completed.
Created a folder named Project on the local machine. After creating it, I changed into the folder, installed Git, and initialized a repository inside it to work with Git. Created a file and added it to the staging area using <git add filename>. Files and folders in the staging area are tracked for any changes made.
Before committing the files from the working directory, check the status: there are no committed files on the master branch yet, only the tracked files waiting to be committed.
The files added to the staging area are committed to the repository using the git commit command with a message and the file name:
<git commit -m "message" filename>
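Put together, the local workflow so far looks like this (a sketch; the file name is illustrative):
mkdir Project && cd Project        # create and enter the project folder
git init                           # initialize an empty Git repository
touch file1                        # create an empty file in the working directory
git add file1                      # stage the file so changes are tracked
git status                         # show what is staged and ready to commit
git commit -m "first commit" file1 # record the staged file in the repository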
The repository created above is cloned to the local machine using <git clone project-url>; this downloads the repository and brings all files from the remote repository to the local machine.
❖ Now go to the repository and add some files using the touch command; it creates empty files in the working directory.
❖ Add these files to the staging area using <git add filename>.
❖ Every change can be tracked from the staging area.
❖ Commit the changes from the staging area to the repository using <git commit -m "provide any message" filenames>.
❖ After committing, push the changes to the remote repository using <git push origin main>.
❖ It will ask for a username and password; enter the username and a personal access token.
❖ If the remote repository is empty the changes are pushed directly; if it is not empty, we need to pull from the remote repository first.
Now go to your remote repository: we can see the new changes, i.e., the files pushed from the local repository.
LAB-5: Pushing a locally created repo to GitHub:
Create a remote repository in GitHub with the same name as the local repository, and do not initialize it.
Now, from inside the locally created repo, execute the following commands:
➢ <git branch -M main> renames the master branch to main in the locally created repo.
➢ <git remote add origin url> adds the remote repository to the locally created repository.
➢ <git push -u origin main> pushes the changes from the local machine to the remote repository.
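A sketch of the full sequence (the repository URL is illustrative):
git branch -M main                # rename master to main locally
git remote add origin https://github.com/<username>/Project.git   # hypothetical URL
git push -u origin main           # push and set the upstream branch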
Click on the main branch, type the name of the branch you want to create, and you will see "Create new branch from main".
To see the locally created branches and the remotely created branches, run this command:
<git branch -a>
The names marked in red with the remotes/origin/ prefix are the branches created in the remote repository; the name shown in green and marked with * is the current local branch.
Now check out the new branch using <git checkout branchname>. Make sure you are on the new branch by running git status or git branch.
Now make some changes on the newly created branch using touch file3 file4, and push them to the remote repository. We can see the files from the local repository pushed to the remote repository using
<git push -u origin master>
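The branch workflow end to end, as a sketch (the feature branch here is named master, as in this lab):
git checkout master               # switch to the feature branch
touch file3 file4                 # create new files on this branch
git add file3 file4               # stage them
git commit -m "add files on feature branch"
git push -u origin master         # publish the branch to the remote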
LAB-8: Merging our feature branch with the main branch:
Go to your GitHub repository; you can see that changes were made on the newly created branch.
Go to" pull request" tab in the repository and create a pull request from master( new branch
)to main
To take the changes from master to main we need to specify the destination as main and
source as master
Now click on the pull request; it will ask for a title. You can leave it as it is and click Create pull request.
Again go to the Pull requests tab and click on it; there we can find the pull request just created.
Now click on the created pull request, review it, and accept or merge the request.
Now the files from the new branch are merged into the main branch. We can see these changes by going to the repository page, clicking the Code button, changing the branch to main, and viewing all the files.
After pulling, we can also see the new changes on the main branch of the local repository.
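Locally, the merge can be confirmed like this (a sketch):
git checkout main          # switch back to the main branch
git pull origin main       # fetch the merged changes from GitHub
git log --oneline          # the feature-branch commits now appear on main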