AWS MINI Project

The document outlines a series of AWS mini-project labs focusing on various services such as Multi-Factor Authentication (MFA), IAM user creation, billing management, S3 bucket operations, EC2 instances, security groups, EBS volumes, AMIs, load balancers, auto-scaling groups, and RDS. Each lab provides step-by-step instructions for setting up and managing AWS resources, emphasizing security, cost management, and operational efficiency. The final lab highlights the benefits of using RDS for database management in the cloud.

AWS MINI PROJECT – I

LAB-1: IAM HANDS-ON
To secure our AWS account we need to set up Multi-Factor Authentication (MFA).
Follow the steps below to configure MFA:

• Sign in to the AWS Console as the root user


• Go to Security credentials (reachable from any service page, for example IAM or EC2)
• Under Multi-Factor authentication (MFA), use the option to add an MFA device
• We can choose from three types of MFA devices:
1. Passkey or Security key
2. Authenticator App
3. Hardware TOTP token
• Select any one of the three options and set up MFA for the AWS Console

This is the Identity and Access Management dashboard, and no MFA device is set up for this account yet.
I am setting up MFA using an authenticator app. First we need to download the Twilio Authy application on a mobile device, create an account in the app, and then scan the QR code shown above.

After scanning the QR code, the app starts generating one-time codes; enter two consecutive OTPs in the console. Now your account is secured with the authenticator app.
We can see my security credentials, such as the MFA type, creation date and identifier. Now sign out of the account and try to log in once again.

When I try to log in, I am prompted for multi-factor authentication; I open the Twilio Authy app and enter the security code.
Creating an IAM user named Bhargavi-user with access only to the AWS Management Console.

While creating the user I checked its permissions; by default I granted only AWS Management Console access, so the user does not get any service permissions.
With only AWS Management Console access assigned, we cannot use any other service; we get an error saying we do not have permission to access that service.

We cannot access or launch an EC2 instance without granting an EC2 access permission.
I am adding permissions to the user by attaching a policy directly, in this case the managed policy "AmazonEC2FullAccess".

Now I am able to access the EC2 service and create an EC2 instance.


With only EC2 full access assigned I am unable to access other services, so we need to grant permissions for those services if we want to use them.

Here I attached the "AdministratorAccess" policy to the user named Bhargavi-user. Now we will check the services again.
After assigning administrator access, the user can access other services such as Auto Scaling groups, S3, EC2 Image Builder and EC2.
The conclusion of this lab is that we can create a user in the IAM service and access services by signing in with that user's credentials, but the IAM user only gets the permissions (policies) granted to it by the root user. The root user has the authority to attach and remove policies.
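The same user setup can also be scripted. A minimal sketch using the AWS CLI, assuming the CLI is configured with administrator credentials (the password below is a placeholder):
# create the console user from this lab
aws iam create-user --user-name Bhargavi-user
# give the user a console password (placeholder; force a reset on first login)
aws iam create-login-profile --user-name Bhargavi-user --password 'TempPass#123' --password-reset-required
# attach the managed EC2 policy
aws iam attach-user-policy --user-name Bhargavi-user --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess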
LAB-2: BILLING AND COST MANAGEMENT:
We get billed whenever we use AWS services, so to keep spending within a budget we use billing alarms. A billing alarm triggers whenever we cross the limit we set in our budgets. Here we use Billing and Cost Management to set up the billing alarm.

This is the home page of Billing and Cost Management; go to Budgets to set up a billing alarm.

Here I am creating a budget from a template by selecting the relevant configuration. In this step I am using the Zero spend budget, which notifies us once spending exceeds $0.01, i.e. anything beyond the AWS Free Tier.
Creating another budget alarm from a template. Here we are setting a billing alarm for monthly usage using the Monthly cost budget.

A monthly cost budget of $2.50 with the recipient email. Here we get a notification whenever we cross half or more of the monthly cost budget.
This is the setup with the Monthly cost budget and My Zero-Spend Budget. It shows "OK" for the monthly cost budget because we did not cross the $2.50 limit; we did cross the zero spend budget, so it shows "Exceeded".

Now, to check where I was billed, I select the Bills option.
Inside Bills we can see each service along with the amount we spent.

I was billed $0.80 for EC2 because of using t2.medium and t2.large instances, $0.07 for VPC, and a total of $1.02.
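A similar monthly cost budget can be created from the AWS CLI as well. A rough sketch, assuming a placeholder account ID and notification email:
aws budgets create-budget \
  --account-id 123456789012 \
  --budget '{"BudgetName":"Monthly-Cost-Budget","BudgetLimit":{"Amount":"2.5","Unit":"USD"},"TimeUnit":"MONTHLY","BudgetType":"COST"}' \
  --notifications-with-subscribers '[{"Notification":{"NotificationType":"ACTUAL","ComparisonOperator":"GREATER_THAN","Threshold":50,"ThresholdType":"PERCENTAGE"},"Subscribers":[{"SubscriptionType":"EMAIL","Address":"you@example.com"}]}]'
This mirrors the console setup above: a $2.50 monthly cost budget that emails the recipient once 50% of the budget has actually been spent.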
LAB-3: S3 BUCKET:
S3 (Simple Storage Service) is a service used to store large amounts of data in buckets as objects.
Bucket:
We create buckets to store data, and the name of a bucket must be globally unique, because AWS identifies the bucket by its name whenever we request data.
In buckets we store data as objects (files).
We store data in the form of files and folders. We can upload files/folders directly from the local system or create folders inside the bucket.
Step1: Creating S3 bucket

While creating the S3 bucket, keep ACLs disabled and block public access.


Created a bucket named bhargavi-b-bhargavi in S3 using the Create bucket button. Go inside the bucket and check whether there are any objects or not.
Uploading files into the S3 bucket using the Upload option.

Successfully uploaded the files into the S3 bucket.

Uploading a folder into the S3 bucket.
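As a rough CLI equivalent of these steps (the bucket name is from this lab; the file and folder names are placeholders):
# create the bucket
aws s3 mb s3://bhargavi-b-bhargavi
# upload a single file
aws s3 cp ./photo.jpg s3://bhargavi-b-bhargavi/
# upload a whole folder
aws s3 cp ./myfolder s3://bhargavi-b-bhargavi/myfolder --recursive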


To access the objects and folders in an S3 bucket we have the object URL; paste the URL in the browser.

We get an error while trying to access the object.

We can't access the object because we don't have access permissions: public access to the bucket is blocked.
Enabling access control lists (ACLs).

Giving access to the public by removing the block public access setting.


We can access the objects by making them public. First we need to allow public access and then enable access control lists (ACLs), then select the object you want to make public.
Here we can also make an object public by attaching a bucket policy written as a JSON statement.
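A minimal sketch of such a policy, applied with the CLI (the bucket name is from this lab; policy.json is a hypothetical local file containing the statement shown in the comment):
# policy.json - allow public read of every object in the bucket
# {
#   "Version": "2012-10-17",
#   "Statement": [{
#     "Sid": "PublicReadGetObject",
#     "Effect": "Allow",
#     "Principal": "*",
#     "Action": "s3:GetObject",
#     "Resource": "arn:aws:s3:::bhargavi-b-bhargavi/*"
#   }]
# }
aws s3api put-bucket-policy --bucket bhargavi-b-bhargavi --policy file://policy.json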

Click on "Make public" for the object you want to access.


After making the object public using the ACL, I am able to access it.

If data is deleted intentionally or by mistake, it is lost when versioning is disabled. Versioning acts as a backup for the objects stored in the S3 bucket; after enabling versioning, if any data is deleted we can restore it.
If we make any change in the files, we need to upload the files/folders to S3 again; they are not updated automatically.
By enabling versioning we can protect the data from unintended deletion.

We can see a new option, Show versions, which means versioning is enabled.
Created a text file with some content and uploaded it to the S3 bucket.

Updated and re-uploaded the text file; we can see the changes by comparing the image above with this one.
Step3:

We can see an option of Delete instead of a delete marker.

Here the Show versions button indicates that versioning is enabled, and I have some objects. Before selecting the Show versions option I deleted the joel.txt file, but it is not permanently deleted; we can confirm this by looking at the object versions, where the file is still present. However, if we delete the joel.txt version itself (type txt) it will be permanently deleted.
If we delete a file so that it only gets a delete marker, we are able to restore the object.
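The same versioning flow can be sketched with the CLI (bucket and file names from this lab; the version ID is a placeholder you would copy from the list output):
# turn versioning on for the bucket
aws s3api put-bucket-versioning --bucket bhargavi-b-bhargavi --versioning-configuration Status=Enabled
# list all versions and delete markers for joel.txt
aws s3api list-object-versions --bucket bhargavi-b-bhargavi --prefix joel.txt
# restore the file by removing its delete marker
aws s3api delete-object --bucket bhargavi-b-bhargavi --key joel.txt --version-id <delete-marker-version-id>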
LAB-4: EC2 INSTANCE
Elastic Compute Cloud (EC2) is a service that provides compute capacity (the processing power to run applications).
It gives control over the virtual server, storage, security configuration and networking settings.
To access an EC2 instance we need to secure it with inbound and outbound rules.
Inbound: traffic coming into your EC2 instance from external sources.
Outbound: traffic going from your EC2 instance to outside destinations.

Creating an EC2 instance with Ubuntu; select t2.micro as the instance type.

Allowing SSH (port 22) so we can connect with PuTTY, and keeping the default storage of 8 GB.
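A minimal CLI sketch of the same launch (the AMI ID, key pair name and security group name are placeholders):
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t2.micro \
  --key-name my-keypair \
  --security-groups my-ssh-sg \
  --count 1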
We can see that the above created instance is in the running state.

Connecting with PuTTY using the public DNS and selecting the private key pair credentials.
Accessing the machine through PuTTY using the public IP and the private key pair, then switching to the root user with the sudo su - command.

LAB-5: SECURITY GROUP:


A security group is a virtual firewall in cloud environments (like AWS) that controls inbound
and outbound traffic to resources like EC2 instances.
To access any website or database we have separate port numbers; once we allow the port number we can access the service.

Creating a security group with the name of “mvnewsg”.


Created a security group with the name "mvnewsg" and allowed port numbers 22 (Secure Shell) and 80 (HTTP).
When I changed the source CIDR to 0.0.0.0/28 I got a network error, because the rule no longer covers my client's IP address: /28 tells how many addresses are in the block, and here we get only 16 IP addresses (0.0.0.0-0.0.0.15).

Unable to connect to the instance from PuTTY or the command prompt because of the CIDR 0.0.0.0/28.
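A sketch of creating the same group from the CLI (the group name is from this lab; the description is arbitrary):
# create the security group
aws ec2 create-security-group --group-name mvnewsg --description "lab security group"
# allow SSH from anywhere
aws ec2 authorize-security-group-ingress --group-name mvnewsg --protocol tcp --port 22 --cidr 0.0.0.0/0
# allow HTTP from anywhere
aws ec2 authorize-security-group-ingress --group-name mvnewsg --protocol tcp --port 80 --cidr 0.0.0.0/0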
LAB-6: VOLUMES AND SNAPSHOTS:
Volumes are used to store data. By default, when we launch an instance we get an 8 GB storage volume, but it is deleted when we terminate the instance, so the data inside that volume is lost. We create an additional EBS volume to store/persist data even after the termination of the instance; it is not deleted until we delete it explicitly, so it can be used as a backup.
Volumes are Availability-Zone specific, and we need to attach the volume to an instance in the same Availability Zone. We can detach the volume from one instance and immediately attach it to another instance in the same Availability Zone.
After creating an EBS volume, we need to attach it to our instance so that data can be stored on this volume.

Here we can see two volumes: the 8 GB default volume created with the instance, and a 5 GB volume created manually to keep data even after instance termination. We can see that the default volume and the created volume are in the same Availability Zone (us-east-1a).

Check the volumes on the instance using list block devices and disk free:
<df -h> shows all the mounted file systems with human-readable sizes.
<lsblk> lists all mounted and unmounted volumes, showing the device path and the size.
We can find the 5 GB volume; its device path is /dev/xvdi.
<mkfs -t ext4 /dev/xvdi> creates a new filesystem on the specified disk partition or volume.
This makes a filesystem of type ext4 (extended filesystem version 4) on the given device path (/dev/xvdi).
Create a folder to mount the volume onto. Use the command below to mount the volume onto the created folder so it can be used as a regular folder to store files and data:
<mount /dev/xvdi /volume>  (device path first, then the mount-point folder)
Go inside the created folder and create files; they will be stored on the EBS volume.

Changing the size of the EBS volume to 8 GB.


We can also increase a volume's size by modifying the EBS volume. Here I am modifying the 5 GB volume to 8 GB. When I modify the volume, the new size is automatically visible at the instance.

We can check the change in the volume's size using list block devices and disk free.
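One note, based on standard Linux behaviour rather than the screenshots: lsblk shows the new size immediately, but a mounted ext4 filesystem has to be grown before df -h reflects it:
# grow the ext4 filesystem to fill the resized volume
resize2fs /dev/xvdi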
A snapshot is a copy/backup of an EBS volume. We can restore a snapshot to create a new EBS volume in the same or another Availability Zone or region, which is useful for disaster recovery, data migration, or duplicating environments.

Create a snapshot from the previously created volume so that we can recreate the volume in the same or a different Availability Zone.
Creating a new volume from the snapshot with 8 GB of storage.

We can create a new volume from the snapshot; this lets us change the Availability Zone of the volume and attach it to an instance in a different Availability Zone.
We can see the volume which was created from the snapshot attached to our instance.
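The same snapshot/restore flow as a CLI sketch (the volume, snapshot and instance IDs are placeholders):
# snapshot the existing volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "lab volume backup"
# create a new volume from the snapshot in a different Availability Zone
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --availability-zone us-east-1b
# attach the new volume to an instance in that zone
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvdj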

LAB-7: AMI
An Amazon Machine Image (AMI) is a pre-configured virtual image that contains all the necessary components to launch an EC2 instance. For example, if we want to launch 10 EC2 instances with the same configuration, we can create an AMI from the first instance and launch the other instances from it.

Creating an AMI from an already launched EC2 instance.


First we need to launch an EC2 instance with the specific configuration.
Stop the EC2 instance and create an Amazon Machine Image from it. Stopping the instance is necessary for data integrity: while an EC2 instance is running, data may be in a volatile (changing) state; applications may be writing to disk, or processes may be holding data in memory. If you create an AMI while the instance is running, you risk capturing incomplete or inconsistent data.

Now the AMI contains all the configuration (operating system, storage, instance settings) of that instance.
Here we can see the created AMI; by selecting this previously created image we can launch any number of instances that require the same configuration, reducing the time needed to create them.
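As a CLI sketch (the instance ID, AMI ID and image name are placeholders):
# create an AMI from the stopped instance
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "lab-base-image" --description "AMI built from the lab instance"
# once the AMI is available, launch new instances from it
aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t2.micro --count 1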

LAB-8: LOAD BALANCER:
A load balancer distributes incoming traffic equally across multiple servers. We use a load balancer to reduce the load on any single server, so that all the servers stay available and server failure is prevented.
Create two EC2 instances, install the nginx server on one machine and Apache2 on the other.
Creating the two instances with SSH and HTTP allowed.

Install httpd (Apache2) on one server and nginx on the other, then start the two services.
Next, copy the public IP addresses of these instances and paste them in the browser to access the applications.
Install Apache2: apt install apache2 -y
Start Apache2: systemctl start apache2
Install and start nginx:
apt install nginx -y
systemctl start nginx
Access both the server over browser and check if their web page is visible.

Here we can see the Apache (httpd) home page by browsing to the instance's public IP address. Without allowing the HTTP port we cannot access these web pages.

Here we can see the nginx home page by browsing to the public IP of the other instance.
To create a load balancer, first we need to register the targets. Here we include the above created instances in a target group.

Here we can see the two instances as registered targets in a target group.
Go to the Load balancers tab on the left side and create a load balancer with the name AppLoad.

We need to select the Availability Zones that match the instances' Availability Zones; select at least two Availability Zones.
While creating the load balancer we need to specify the target group to which the load should be distributed. I am selecting the above created target group, "LoadTarget".

We can see the created load balancer with its URL (DNS name) and target group. To check whether the load is being distributed between the servers, we use this URL and increase the load by refreshing the website.
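A compressed CLI sketch of the same setup (the VPC ID, subnet IDs, instance IDs and ARNs are placeholders):
# target group and target registration
aws elbv2 create-target-group --name LoadTarget --protocol HTTP --port 80 --vpc-id vpc-0123456789abcdef0
aws elbv2 register-targets --target-group-arn <target-group-arn> --targets Id=i-0aaa1111 Id=i-0bbb2222
# load balancer plus a listener that forwards to the target group
aws elbv2 create-load-balancer --name AppLoad --subnets subnet-0aaa1111 subnet-0bbb2222
aws elbv2 create-listener --load-balancer-arn <load-balancer-arn> --protocol HTTP --port 80 --default-actions Type=forward,TargetGroupArn=<target-group-arn>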
Access the load balancer link in the browser and hit it a couple of times. Check whether both web pages, nginx and Apache2, are visible.

Here we can see the two web pages through the load balancer URL by refreshing the web page.
Launching two static websites on the two servers.
Fetch any static website onto the server using:
wget url (your static website)
We get a zip or tar file; extract it using:
tar -xvf foldername
Now go inside the extracted folder; we can see the files with the code. Move these files to /var/www/html so they can be served by the web server.
When we copy the load balancer URL and paste it in the browser, we can see these two static websites by refreshing the page.
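Putting those steps together as a sketch (the download URL and archive name are placeholders):
# download and unpack a static website template
wget https://example.com/static-site.tar.gz
tar -xvf static-site.tar.gz
# serve it from the web server's document root
mv static-site/* /var/www/html/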

LAB-9: ASG AND LAUNCH TEMPLATE:


A launch template holds all the configuration related to an instance, such as the instance type, security group and storage options, in one template, so that we can launch as many servers as we want from it.
Created a launch template by selecting the option on the left side and including the configuration related to the EC2 instance; the template name is tempubuntu.

An Auto Scaling group is a feature in cloud computing that automatically adjusts the number
of running servers (instances) based on demand.
• If traffic increases, it automatically adds more servers to handle the load.
• If traffic decreases, it removes unnecessary servers to save costs.
Creating the Auto Scaling group with a desired capacity of 1, a minimum capacity of 1 and a maximum capacity of 3.

Created the Auto Scaling group by selecting the above created launch template, with 1 running instance (desired capacity) and at most 3 instances allowed at any time (maximum capacity), in the us-east-1a Availability Zone.
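A CLI sketch of the same Auto Scaling group (the group name and subnet ID are placeholders; the launch template name is from this lab):
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name lab-asg \
  --launch-template LaunchTemplateName=tempubuntu \
  --min-size 1 --max-size 3 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0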
Now try to change the capacity settings and watch a new instance get created.
One EC2 instance is launched automatically by the Auto Scaling group. If we terminate this instance, we can see another instance being launched by the Auto Scaling group. That means one server is always available, whether there is heavy traffic or no traffic at all.

Here we changed the Auto Scaling group to a minimum capacity of 2 instances and a maximum capacity of 4, and we can see the two running instances.
We can see two instances running after changing the Auto Scaling group's desired capacity to 2 instances.

LAB-10: RDS
RDS (Relational Database Service) is a cloud service that makes it easy to set up, operate, and
manage a database.
• It automatically takes care of backups, security, and updates.
• You don’t have to worry about the server setup or maintenance.
• It supports popular databases like MySQL, PostgreSQL, Oracle, and SQL Server.
1. Provision an RDS instance

Launch an EC2 instance, allowing the MySQL port 3306 in the security group.

Creating a database with db.t3.micro and allowing the EC2 instance to connect to the RDS database.
Now that the database is in the available state, we can connect from our instance to access it by using the endpoint of the database; the endpoint is like the address of your database. By using this endpoint, your app can send queries to the database over the network.
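A minimal CLI sketch of provisioning a similar MySQL instance (the identifier, username and password are placeholders):
aws rds create-db-instance \
  --db-instance-identifier lab-mysql \
  --db-instance-class db.t3.micro \
  --engine mysql \
  --master-username admin \
  --master-user-password 'ChangeMe#123' \
  --allocated-storage 20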

Here I connected to the database from the EC2 instance using the following command:
mysql -h <endpoint of database> -u <username> -p
-h specifies the host address
-u specifies the username
-p prompts for the password
Next, enter the password that you created while creating the database. If you enter the correct password, you are connected successfully and land at the mysql prompt.
MINI PROJECT 2
LAB1: CREATING EC2 INSTANCE
1. Log in to the AWS Console
2. Create a server with Amazon Linux 2 / Amazon Linux AMI / RHEL / Ubuntu
3. Connect to the server using only PuTTY/Git Bash

Created an EC2 instance with Amazon Linux and enabled SSH in the security group to connect with PuTTY. Connected the above created EC2 instance with PuTTY using the public IP and private key pair, and changed to the root user using the "super user do" (sudo) command.
This shows the instance was created with the name Amazon-linux and terminated after the work was completed.

LAB2: CREATE REPO IN LOCAL MACHINE:


Git is a source code management system developed by Linus Torvalds. It is called a distributed version control system because it tracks (manages) every change in the code and saves versions of the code, so we can add updates with new features and roll back to a previous version if there are any errors.

Creating a folder named Project on the local machine. After creating it, I changed into that folder, installed Git and initialized a repository inside the folder so I could work with Git. Created a file and added it to the staging area using <git add filename>. Files and folders in the staging area are tracked if any changes are made to them.
Before committing the files from the working directory, check the status: we can't find any files on the master branch yet, we can only see the tracked files that need to be committed.
The files added to the staging area are committed to the repository using the git commit command with a message and the file name:
<git commit -m "message" filename>
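The whole local workflow, put together as a sketch (the folder and file names follow this lab):
mkdir Project && cd Project
git init                              # start a new local repository
touch file1                           # create a file in the working directory
git add file1                         # move it to the staging area
git status                            # confirm it is staged
git commit -m "first commit" file1    # save it to the repository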

LAB3: CREATING A REPO IN REMOTE LOCATION-GITHUB


GitHub is a project management tool; it stores and manages our projects, including issues and pull requests, which helps us track the project. There are two types of repositories in GitHub.
Public repository:
Whatever code, files and folders we store in this repository can be seen and accessed by everyone over the internet, and they can fork it and use the project code.
Private repository:
Whatever code, files and folders we save in this repository cannot be accessed by other people until we share it with them.
Created a private repo named git-mini-project with a readme.md file.

LAB4: WORKING WITH REMOTE REPO:


Take the remote repository to your local machine and make some changes by adding some files and folders.
❖ After making changes, push them to the remote repository
❖ From the remote side, copy the URL of the repository (shown under the Code button) and clone it on the local machine using <git clone url>
❖ After cloning you will be asked for a username and password
❖ If you enter the username and password you will get an error, because password authentication has been removed
❖ To solve the error we need to generate a token and use it in place of the password. As we created a private repository, a personal token is required to access it.
❖ Go to Settings (top right corner), then Developer settings, and generate a personal (classic) token
❖ Generate a token valid for a number of days, then copy and save the token to access GitHub for that period
❖ Now clone the repository on your local machine and enter the GitHub username and the generated token in place of the password

The repository we created above is cloned to our local machine using <git clone project-url>; with this we download the repository and get all the files from the remote repository onto the local machine.
❖ Now go to the repository and add some files using the touch command; it creates empty files in the working directory
❖ Now add these files to the staging area using <git add filename>
❖ We can track every change from the staging area
❖ Now commit the changes from the staging area to the repository using <git commit -m "provide any message" filenames>
❖ After committing, push the changes to the remote repository using <git push origin main>
❖ It will ask for a username and password; enter the username and the token
❖ If the remote repository has nothing newer the changes will be pushed; if it has newer commits we need to pull the remote repository first
Now go to your remote repository and we can see the new changes, i.e. the files.

Here we can see the remote repository with the changes, that is, the files from the local repository have been pushed here.
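A compressed sketch of the whole flow (the GitHub username is a placeholder; enter the personal access token whenever a password is requested):
git clone https://github.com/<username>/git-mini-project.git
cd git-mini-project
touch file1 file2                  # new files in the working directory
git add file1 file2                # stage them
git commit -m "add lab files"      # commit to the local repository
git push origin main               # push to GitHub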
LAB5: Pushing a locally created repo to GitHub:
Create a remote repository in GitHub with the same name as the local repository and do not initialize it.
Now, from inside the locally created repo, execute the following commands:
Now from inside the locally created repo execute the following commands
➢ <git branch -M main> renames the master branch to main in the locally created repo
➢ <git remote add origin url> adds the remote repository to the locally created repository
➢ <git push -u origin main> pushes the changes from the local machine to the remote repository

Created the local repository and pushed it to the newly created remote repository using the commands above.

LAB6: Creating a new branch from your main branch:
Now go to your remote repository and click on the branch selector. In the remote repository, main is the default branch.

Click on the main branch selector, type the name you want to create, and you will see the option to create the new branch from main.

Now a new branch with the given name, master, is created.


Make some changes in the new branch and commit them. The changes made in the new branch will not be applied to the main branch.

LAB7: Pull all the branches into your local machine:


Go to the local repository on the local machine where the remote repository is configured as origin.
Run the <git pull> command to bring the changes from the remote repository to the local repository.

To see the locally created branches and the remote branches, run this command:
<git branch -a>
The names shown in red with the remotes/origin/ prefix are the branches created in the remote repository, and the names in green (marked with *) are the locally created branches.
Now check out the new branch using <git checkout branchname>
Make sure you are on the new branch by checking with the git status or git branch command
Now make some changes in the newly created branch using touch file3 file4
And push them to the remote repository

We can see the files from the local repository pushed to the remote repository using
<git push -u origin master>
LAB8: Merge our feature branch with the main branch:
Go to your GitHub repository and you can see some changes have been made in the newly created branch

Go to the "Pull requests" tab in the repository and create a pull request from master (the new branch) to main
To take the changes from master into main we need to specify the destination as main and the source as master
Now click on the pull request; it will ask for a title for the pull request, you can leave it as it is and click on Create pull request.
Again go to the Pull requests tab and click on Pull requests; there we can find the pull request we just created.

Now click on the created pull request, review the request, and accept (merge) the request.
Now the files from the new branch are merged into the main branch. We can see these changes by going to the repository page, clicking the Code button, changing the branch to main, and seeing all the files.

LAB9: Go to local machine:


Go to the local machine that has the remote repository configured.
Check out the main branch with <git checkout branchname> and pull the changes into the local repository.
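As a sketch of that final step (branch name from this lab):
git checkout main        # switch to the main branch locally
git pull origin main     # bring the merged changes down from GitHub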

Now we can see the new changes in the main branch of the local repository.
