20121a3226 Internship Report
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING
Submitted by
Komma Manjunath Reddy
20121A3226
IV B.Tech II Semester
Under the esteemed supervision of
Dr. G. Sunitha
Professor
2023 - 2024.
SREE VIDYANIKETHAN ENGINEERING COLLEGE
(AUTONOMOUS)
Sree Sainath Nagar, A. Rangampet
Certificate
This is to certify that the internship report entitled “AI-ML Virtual Internship” is the bonafide work done by Komma Manjunath Reddy (Roll No: 20121A3226) in the Department of Computer Science and Engineering, Sree Vidyanikethan Engineering College (Autonomous), affiliated to Jawaharlal Nehru Technological University Anantapur, Anantapuramu, in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in Computer Science and Engineering during the academic year 2023-2024.
Head:
ABSTRACT
AWS Academy Cloud Foundations is intended for students who seek an overall understanding of cloud computing concepts, independent of specific technical roles. It provides a detailed overview of cloud concepts, AWS core services, security, architecture, pricing, and support. Machine learning is the use and development of computer systems that can learn and adapt without following explicit instructions, by using algorithms and statistical models to analyse and draw inferences from patterns in data.
This course teaches how to describe machine learning (ML): how to recognize that machine learning and deep learning are part of artificial intelligence, and how to use artificial intelligence and machine learning terminology. With this foundation, we can identify how machine learning can be used to solve a business problem, describe the machine learning process in detail, list the tools available to data scientists, and identify when to use machine learning instead of traditional software development methods. The course also covers implementing a machine learning pipeline, which includes learning how to formulate a problem from a business request, obtain and secure data for machine learning, use Amazon SageMaker to build a Jupyter notebook, outline the process for evaluating data, and explain why data must be pre-processed. Open-source tools are used to examine and pre-process data, and Amazon SageMaker is used to train and host a machine learning model.
The course also includes using cross-validation to test the performance of a machine learning model, using a hosted model for inference, and creating an Amazon SageMaker hyperparameter tuning job to optimize a model's effectiveness. Finally, we learn how to use managed Amazon ML services to solve specific machine learning problems in forecasting, computer vision, and natural language processing.
ACKNOWLEDGEMENT
We are extremely thankful to our beloved Chairman and founder Dr. M. Mohan Babu, who took a keen interest in providing us the opportunity to carry out the project work.
We are very much obliged to Dr. B. Narendra Kumar Rao, Professor & Head, Department of CSE, for providing us guidance and encouragement in the completion of this work.
TABLE OF CONTENTS
Abstract
Acknowledgement
Table of Contents
Abbreviations
Course 1: AWS Cloud Foundations
Course 2: Machine Learning Foundations
Summary of Experience
Reflection on Learning
Conclusion
References
CONTENTS
COURSE: AWS CLOUD FOUNDATIONS
Chapter 1: Cloud Concepts Overview
• Introduction to cloud computing
• Advantages of cloud computing
• Introduction to Amazon Web Services (AWS)
• AWS Cloud Adoption Framework
Chapter 2: Cloud Economics and Billing
• Fundamentals of pricing
• Total Cost of Ownership
• AWS Organizations
• AWS Billing and Cost Management
Chapter 3: Global Infrastructure Overview
• AWS Global Infrastructure
• AWS service overview
Chapter 4: Cloud Security
• AWS shared responsibility model
• Securing a new AWS account
• Securing data on AWS
Chapter 5: Networking and Content Delivery
• Networking basics
• Amazon VPC and VPC security
• Amazon Route 53
• Amazon CloudFront
Chapter 6: Compute
• Compute services overview
• Amazon EC2
• Amazon EC2 cost optimization
• Container services
• Introduction to AWS Lambda
• Introduction to AWS Elastic Beanstalk
Chapter 7: Storage
• Amazon Elastic Block Store (Amazon EBS)
• Amazon Simple Storage Service (Amazon S3)
• Amazon Elastic File System (Amazon EFS)
• Amazon Simple Storage Service Glacier
Chapter 8: Databases
• Amazon Relational Database Service (Amazon RDS)
• Amazon DynamoDB
• Amazon Redshift
• Amazon Aurora
Chapter 9: Cloud Architecture
• AWS Well-Architected Framework
• Reliability and high availability
• AWS Trusted Advisor
Chapter 10: Auto Scaling and Monitoring
• Elastic Load Balancing
• Amazon CloudWatch
• Amazon EC2 Auto Scaling
COURSE: MACHINE LEARNING FOUNDATIONS
Chapter 1: Introducing Machine Learning
Chapter 2: Implementing a Machine Learning Pipeline with Amazon SageMaker
Chapter 3: Introducing Forecasting
Chapter 4: Introducing Computer Vision
Chapter 6: Introducing Natural Language Processing
ABBREVIATIONS
Abbreviation Full Form
EC2 Elastic Compute Cloud
S3 Simple Storage Service
VPC Virtual Private Cloud
RDS Relational Database Service
IAM Identity and Access Management
ELB Elastic Load Balancer
SQS Simple Queue Service
SNS Simple Notification Service
SES Simple Email Service
Lambda AWS Lambda (Serverless Compute Service)
KMS Key Management Service
CloudFront Amazon CloudFront (Content Delivery Network)
Route 53 Amazon Route 53 (Domain Name System Service)
EBS Elastic Block Store
AMI Amazon Machine Image
CLI Command Line Interface
SDK Software Development Kit
SaaS Software as a Service
PaaS Platform as a Service
IaaS Infrastructure as a Service
CORS Cross-Origin Resource Sharing
CDN Content Delivery Network
API Application Programming Interface
CFN AWS CloudFormation
CICD or CI/CD Continuous Integration / Continuous Deployment
AZ Availability Zone
API Gateway Amazon API Gateway
EKS Amazon Elastic Kubernetes Service
SFTP SSH File Transfer Protocol
VPN Virtual Private Network
COURSE: AWS CLOUD FOUNDATIONS
CHAPTER 1: CLOUD CONCEPTS OVERVIEW
Platform as a service (PaaS): A PaaS cloud computing platform is created for the programmer to develop, test, run, and manage applications.
Software as a service (SaaS): SaaS is also known as "on-demand software". It is software in which the applications are hosted by a cloud service provider. Users can access these applications with the help of an internet connection and a web browser.
Amazon Web Services (AWS) is a secure cloud platform that offers a broad set of global cloud-based products. Because these products are delivered over the internet, you have on-demand access to the compute, storage, network, database, and other IT resources that you might need for your projects, and the tools to manage them.
The AWS Cloud Adoption Framework (AWS CAF) provides guidance and best practices to help organizations identify gaps in skills and processes. It also helps organizations build a comprehensive approach to cloud computing, both across the organization and throughout the IT lifecycle, to accelerate successful cloud adoption.
At the highest level, the AWS CAF organizes guidance into six areas of focus, called perspectives. Perspectives span people, processes, and technology. Each perspective consists of a set of capabilities, which covers distinct responsibilities that are owned or managed by functionally related stakeholders.
CHAPTER 2: CLOUD ECONOMICS AND BILLING
1. Fundamentals of Pricing:
There are three fundamental drivers of cost with AWS: compute, storage, and outbound data transfer. These characteristics vary somewhat, depending on the AWS product and pricing model you choose.
2. Total Cost of Ownership:
Total Cost of Ownership (TCO) is a financial estimate that helps identify the direct and indirect costs of a system. It is used:
• To compare the costs of running an entire infrastructure environment or a specific workload on-premises versus on AWS
• To budget and build the business case for moving to the cloud
3. AWS Organizations:
AWS Organizations is a free account management service that enables you to consolidate multiple AWS accounts into an organization that you create and centrally manage.
AWS Organizations includes consolidated billing and account management capabilities that help you better meet the budgetary, security, and compliance needs of your business.
4. AWS Billing and Cost Management:
AWS Billing and Cost Management is the service that you use to pay your AWS bill, monitor your usage, and budget your costs. Billing and Cost Management enables you to forecast and obtain a better idea of what your costs and usage might be in the future so that you can plan. You can set a custom time period and determine whether you would like to view your data at a monthly or daily level of granularity.
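To make the granularity idea concrete, here is a minimal hedged sketch that queries month-by-month costs through the AWS Cost Explorer API with boto3; the date range and metric choice are illustrative assumptions, not values from this report.

# Hypothetical sketch: querying monthly AWS costs with Cost Explorer.
import boto3

ce = boto3.client("ce")  # assumes AWS credentials are already configured

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2024-01-01", "End": "2024-04-01"},  # example dates
    Granularity="MONTHLY",            # could also be DAILY, as described above
    Metrics=["UnblendedCost"],
)

for period in response["ResultsByTime"]:
    amount = period["Total"]["UnblendedCost"]["Amount"]
    print(period["TimePeriod"]["Start"], amount)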
CHAPTER 3: GLOBAL INFRASTRUCTURE OVERVIEW
The AWS Global Infrastructure is designed and built to deliver a flexible, reliable, scalable, and secure cloud computing environment with high-quality global network performance.
AWS Global Infrastructure Map: https://aws.amazon.com/about-aws/global-infrastructure/#AWS_Global_Infrastructure_Map (choose a circle on the map to view summary information about the Region represented by the circle).
Regions and Availability Zones: https://aws.amazon.com/about-aws/global-infrastructure/regions_az/ (choose a tab to view a map of the selected geography and a list of Regions, Edge locations, Local Zones, and Regional Caches).
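The same Region and Availability Zone information is available programmatically; this hedged sketch lists Regions and the Availability Zones of one example Region, where the Region name is an assumption.

# Hypothetical sketch: listing AWS Regions and one Region's Availability Zones.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example Region

regions = ec2.describe_regions()["Regions"]
print([r["RegionName"] for r in regions])

zones = ec2.describe_availability_zones()["AvailabilityZones"]
print([z["ZoneName"] for z in zones])  # AZs of the client's Region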
2. AWS Service Overview:
CHAPTER 4: CLOUD SECURITY
1. AWS Shared Responsibility Model:
AWS is responsible for security of the cloud; the customer is responsible for security in the cloud.
2. AWS Identity and Access Management (IAM):
IAM lets you control:
• Which resources can be accessed and what the user can do to the resource
• How resources can be accessed
3. Securing a New AWS Account:
AWS account root user access versus IAM access
Best practice: Do not use the AWS account root user except when necessary.
• Access to the account root user requires logging in with the email address (and password) that you used to create the account.
• Example actions that can only be done with the account root user include changing the account settings and closing the account.
4. Securing Data on AWS:
• Only those who have the secret key can decode the data.
• AWS KMS can manage your secret keys. AWS supports encryption of data at rest.
• Data at rest = data stored physically (on disk or on tape).
• You can encrypt data stored in any service that is supported by AWS KMS, including: Amazon S3, Amazon EBS, Amazon Elastic File System (Amazon EFS), and Amazon RDS managed databases.
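To illustrate encryption at rest with AWS KMS, here is a minimal hedged sketch that uploads an object to Amazon S3 with server-side encryption; the bucket and key names are hypothetical.

# Hypothetical sketch: uploading an S3 object encrypted at rest with AWS KMS.
import boto3

s3 = boto3.client("s3")

s3.put_object(
    Bucket="example-report-bucket",   # hypothetical bucket name
    Key="notes/encrypted.txt",
    Body=b"data at rest, encrypted",
    ServerSideEncryption="aws:kms",   # S3 uses an AWS KMS key to encrypt
)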
CHAPTER 5: NETWORKING AND CONTENT DELIVERY
1. Networking Basics:
Computer Network:
An interconnection of multiple devices, also known as hosts, that are connected using multiple paths for the purpose of sending and receiving data or media. Computer networks can also include multiple devices or mediums that help in the communication between two different devices; these are known as network devices and include routers, switches, hubs, and bridges.
3. VPC Networking:
VPC peering: Connects your VPC to other VPCs.
VPC sharing: Allows multiple AWS accounts to create their application resources in shared, centrally managed Amazon VPCs.
AWS Site-to-Site VPN: Connects your VPC to remote networks.
AWS Direct Connect: Connects your VPC to a remote network by using a dedicated network connection.
AWS Transit Gateway: A hub-and-spoke connection alternative to VPC peering.
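The summary later in this report mentions creating a VPC and configuring subnets by hand; as a rough illustration, this hedged boto3 sketch creates a VPC with one subnet. The CIDR ranges are illustrative assumptions.

# Hypothetical sketch: creating a VPC and a subnet with boto3.
import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")   # example CIDR range
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")
print("Created", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])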
4. VPC Security:
Build security into your VPC architecture:
• Isolate subnets if possible.
• Choose the appropriate gateway device or VPN connection for your needs.
• Use firewalls.
Security groups and network ACLs are firewall options that you can use to secure your VPC.
5. Amazon Route 53:
• Is a highly available and scalable Domain Name System (DNS) web service.
• Is used to route end users to internet applications by translating names (like www.example.com) into numeric IP addresses (like 192.0.2.1) that computers use to connect to each other.
• Is fully compliant with IPv4 and IPv6.
6. Amazon CloudFront:
• Fast, global, and secure CDN service.
• Global network of edge locations and regional edge caches.
• Self-service model.
• Pay-as-you-go pricing.
CHAPTER 6: COMPUTE
1. Compute Services Overview:
2. Amazon EC2:
• You can launch instances of any size into an Availability Zone anywhere in the world.
• Launch instances from Amazon Machine Images (AMIs).
• Launch instances with a few clicks or a line of code, and they are ready in minutes.
• You can control traffic to and from instances.
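"A line of code" can be taken almost literally; this hedged sketch launches a single instance with boto3. The AMI ID and instance type are placeholders, not values from the report.

# Hypothetical sketch: launching one EC2 instance from an AMI.
import boto3

ec2 = boto3.client("ec2")

result = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t3.micro",          # example instance size
    MinCount=1,
    MaxCount=1,
)
print(result["Instances"][0]["InstanceId"])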
3. Amazon EC2 Cost Optimization:
4. Container Services:
• Repeatable, self-contained environments.
• Software runs the same in different environments: developer's laptop, test, production.
• Faster to launch and stop or terminate than virtual machines.
5. Introduction to AWS Lambda: AWS Lambda is a serverless computing service.
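As a minimal illustration of the serverless model, here is a sketch of a Python Lambda handler; the event shape and greeting logic are assumptions for demonstration.

# Hypothetical sketch: a minimal AWS Lambda handler in Python.
def lambda_handler(event, context):
    # Lambda invokes this function on demand; no server is provisioned or managed.
    name = event.get("name", "world")   # assumed event field
    return {"statusCode": 200, "body": f"Hello, {name}!"}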
CHAPTER 7: STORAGE
1. Amazon Elastic Block Store (Amazon EBS):
Amazon EBS enables you to create individual storage volumes and attach them to an Amazon EC2 instance.
2. Amazon Simple Storage Service (Amazon S3):
• Backup and storage – Provide data backup and storage services for others.
• Application hosting – Provide services that deploy, install, and manage web applications.
3. Amazon Elastic File System (Amazon EFS):
• Compatible with all Linux-based AMIs for Amazon EC2.
4. Amazon Simple Storage Service Glacier:
• Amazon S3 Glacier is a data archiving service that is designed for security, durability, and an extremely low cost.
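One common way to use S3 Glacier is through a lifecycle rule that archives old objects automatically; in this hedged boto3 sketch, the bucket name, prefix, and 90-day threshold are illustrative assumptions.

# Hypothetical sketch: archiving S3 objects to Glacier with a lifecycle rule.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-report-bucket",          # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-old-objects",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},   # assumed key prefix
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
        }]
    },
)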
CHAPTER 8: DATABASES
1. Amazon Relational Database Service (Amazon RDS):
Amazon RDS is a web service that makes it easy to set up, operate, and scale a relational database in the cloud. It provides cost-efficient and resizable capacity while managing time-consuming database administration tasks so you can focus on your applications and your business. Amazon RDS is scalable for compute and storage, and automated redundancy and backup are available. Supported database engines include Amazon Aurora, PostgreSQL, MySQL, MariaDB, Oracle, and Microsoft SQL Server.
2. Amazon DynamoDB:
Fast and flexible NoSQL database service for any scale.
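The summary section of this report mentions creating a DynamoDB table and performing operations on it; as a rough sketch of what that looks like, this hedged boto3 example writes and reads one item. The table and attribute names are assumptions.

# Hypothetical sketch: one write and one read against a DynamoDB table.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Books")   # assumed existing table

table.put_item(Item={"BookId": "101", "Title": "Cloud Foundations"})
item = table.get_item(Key={"BookId": "101"})["Item"]
print(item["Title"])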
CHAPTER 9: CLOUD ARCHITECTURE
1. AWS Well-Architected Framework:
A guide for designing infrastructures that are:
✓ Secure
✓ High performing
✓ Resilient
✓ Efficient
2. AWS Trusted Advisor:
Cost Optimization – AWS Trusted Advisor looks at your resource use and makes recommendations to help you optimize cost by eliminating unused and idle resources, or by making commitments to reserved capacity.
Performance – Improve the performance of your service by checking your service limits, ensuring you take advantage of provisioned throughput, and monitoring for overutilized instances.
Security – Improve the security of your application by closing gaps, enabling various AWS security features, and examining your permissions.
Fault Tolerance – Increase the availability and redundancy of your AWS application by taking advantage of automatic scaling, health checks, multi-AZ deployments, and backup capabilities.
Service Limits – AWS Trusted Advisor checks for service usage that is more than 80 percent of the service limit. Values are based on a snapshot, so your current usage might differ. Limit and usage data can take up to 24 hours to reflect any changes.
CHAPTER 10: AUTO SCALING AND MONITORING
1. Elastic Load Balancing:
2. Amazon CloudWatch:
• Amazon CloudWatch helps you monitor your AWS resources, and the applications that you run on AWS, in real time.
• CloudWatch enables you to collect and track standard and custom metrics, set alarms that automatically send notifications, and react to changes in your AWS resources.
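As an illustration of the alarm capability, this hedged sketch creates a CloudWatch alarm on average EC2 CPU utilization, the same metric used by the CPU-based Auto Scaling policies described later in this report; the instance ID and 70 percent threshold are assumptions.

# Hypothetical sketch: alarm when average EC2 CPU utilization exceeds 70%.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                      # evaluate 5-minute averages
    EvaluationPeriods=2,
    Threshold=70.0,                  # assumed threshold
    ComparisonOperator="GreaterThanThreshold",
)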
COURSE: MACHINE LEARNING FOUNDATIONS
CHAPTER 1: INTRODUCING MACHINE LEARNING
• Deep learning is a technique that was inspired by human biology. It uses layers of neurons to build networks that solve problems.
• Advancements in technology, cloud computing, and algorithm development have led to a rise in machine learning capabilities and applications.
2. Business Problems Solved with Machine Learning
Machine learning is used throughout a person's digital life. Here are some examples:
• Spam – Your spam filter is the result of an ML program that was trained with examples of spam and regular email messages.
• Recommendations – Based on books that you read or products that you buy, ML programs predict other books or products that you might want. Again, the ML program was trained with data from other readers' habits and purchases.
• Credit card fraud – Similarly, the ML program was trained on examples of transactions that turned out to be fraudulent, along with transactions that were legitimate.
Machine learning problems can be grouped into:
• Supervised learning: You have training data for which you know the answer.
• Unsupervised learning: You have data, but you are looking for insights within the data.
• Reinforcement learning: The model learns in a way that is based on experience and feedback.
Most business problems are supervised learning problems.
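Here is a minimal sketch of the contrast between the first two paradigms using scikit-learn, which is an assumption (the course itself works in Amazon SageMaker notebooks); the same feature matrix is used with labels (supervised) and without them (unsupervised).

# Hypothetical sketch: supervised vs. unsupervised learning on toy data.
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Supervised: we know the answer (y) for the training data.
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))

# Unsupervised: no labels; we look for structure (here, 2 clusters).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", [int((km.labels_ == k).sum()) for k in (0, 1)])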
3. Machine Learning Process
The machine learning pipeline can guide you through the process of training and evaluating a model. The iterative process can be broken into three broad steps:
• Data processing
• Model training
• Model evaluation
ML PIPELINE:
• Jupyter Notebook is an open-source web application that enables you to create and share documents that contain live code, equations, visualizations, and narrative text.
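The following hedged sketch runs the three broad steps end to end in plain scikit-learn, as a local analogue of what a SageMaker notebook would do; the dataset and model choice are illustrative assumptions.

# Hypothetical sketch: data processing -> model training -> model evaluation.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# 1. Data processing: split and scale.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 2. Model training.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# 3. Model evaluation on held-out data.
print("test accuracy:", model.score(X_test, y_test))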
CHAPTER 2: IMPLEMENTING A MACHINE LEARNING PIPELINE WITH AMAZON SAGEMAKER
1. Formulating Machine Learning Problems
2. Securing Data
3. Evaluating Data
• Descriptive statistics can be organized into different categories. Overall statistics include the number of rows (instances) and the number of columns (features or attributes) in your dataset. This information, which relates to the dimensions of your data, is important. For example, it can indicate that you have too many features, which can lead to high dimensionality and poor model performance.
• Attribute statistics are another type of descriptive statistic, specifically for numeric attributes. They give a better sense of the shape of your attributes, including properties like the mean, standard deviation, variance, minimum value, and maximum value.
• Multivariate statistics look at relationships between more than one variable, such as correlations and relationships between your attributes.
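In a notebook, these three kinds of statistics typically reduce to one or two pandas calls; the sketch below is an assumption about tooling, with a made-up DataFrame.

# Hypothetical sketch: overall, attribute, and multivariate statistics in pandas.
import pandas as pd

df = pd.DataFrame({"age": [23, 35, 41, 29], "income": [40, 62, 75, 51]})

print(df.shape)       # overall statistics: rows (instances) x columns (features)
print(df.describe())  # attribute statistics: mean, std, min, max, ...
print(df.corr())      # multivariate statistics: pairwise correlations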
4. Feature Engineering
Feature selection is about selecting the features that are most relevant and discarding the rest. Feature selection is applied to prevent either redundancy or irrelevance in the existing features, or to get a limited number of features to prevent overfitting.
Feature extraction is about building up valuable information from raw data by reformatting, combining, and transforming primary features into new ones. This transformation continues until it yields a new set of data that can be consumed by the model to achieve its goals.
Outliers
You can handle outliers during feature engineering with several different approaches. They include, but are not limited to:
• Deleting the outlier: This approach might be a good choice if your outlier is based on an artificial error. Artificial error means that the outlier isn't natural and was introduced because of some failure, perhaps incorrectly entered data.
• Transforming the outlier: You can transform the outlier by taking the natural log of a value, which in turn reduces the variation that the extreme outlier value causes. Therefore, it reduces the outlier's influence on the overall dataset.
• Imputing a new value for the outlier: You can use the mean of the feature, for instance, and impute that value to replace the outlier value. Again, this would be a good approach if an artificial error caused the outlier.
• Chi-square: A single number that tells how much difference exists between your observed counts and the expected counts, if no relationship exists in the population.
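This minimal numpy/pandas sketch, with made-up data, shows the transform and impute options described above; the threshold used to flag the outlier is an assumption.

# Hypothetical sketch: transforming and imputing an outlier.
import numpy as np
import pandas as pd

s = pd.Series([10.0, 12.0, 11.0, 13.0, 400.0])   # 400 is the outlier

log_s = np.log(s)                 # transform: shrinks the outlier's influence
print(log_s.round(2).tolist())

is_outlier = s > 100                                  # assumed domain threshold
imputed = s.mask(is_outlier, s[~is_outlier].mean())   # impute the feature mean
print(imputed.tolist())           # [10.0, 12.0, 11.0, 13.0, 11.5]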
• Forward selection starts with no features and adds them, one at a time, until the best model is found.
• Backward selection starts with all features, drops them one at a time, and selects the best model.
Feature Selection: Embedded Methods
Embedded methods combine the qualities of filter and wrapper methods. They are implemented in algorithms that have their own built-in feature selection methods. Some of the most popular examples of these methods are LASSO and RIDGE regression, which have built-in penalization functions to reduce overfitting.
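As a hedged sketch of an embedded method, the example below fits a LASSO model and keeps only the features with nonzero coefficients; the dataset and regularization strength are assumptions.

# Hypothetical sketch: embedded feature selection with LASSO.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=100, n_features=10,
                       n_informative=3, noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0).fit(X, y)       # assumed regularization strength
selected = np.flatnonzero(lasso.coef_)   # penalization zeroes out weak features
print("kept feature indices:", selected.tolist())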
5. Training
The holdout technique and k-fold cross-validation are the most commonly used methods for splitting data into a training set and a test set.
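A minimal sketch of both techniques with scikit-learn, under the assumption of a generic classifier and toy data:

# Hypothetical sketch: holdout split and k-fold cross-validation.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

# Holdout: one train/test split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# k-fold: k=5 splits, each fold serving once as the test set.
print("5-fold accuracies:", cross_val_score(model, X, y, cv=5))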
XGBOOST ALGORITHM
XGBoost (extreme gradient boosting) is a popular and efficient open-source implementation of the gradient boosted trees algorithm, a supervised learning method that attempts to accurately predict a target variable. It attains its prediction by combining an ensemble of estimates from a set of simpler, weaker models.
XGBoost has done well in machine learning competitions. It robustly handles various data types, relationships, and distributions, and it offers many hyperparameters that can be tweaked and tuned for improved fit. This flexibility makes XGBoost a solid choice for problems in regression, classification (binary and multiclass), and ranking.
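A hedged sketch using the open-source xgboost package (an assumption; in the course the algorithm is run as a built-in SageMaker container):

# Hypothetical sketch: binary classification with the xgboost package.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# An ensemble of weak tree learners, boosted sequentially.
model = XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))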
LINEAR LEARNER
The Amazon SageMaker linear learner algorithm provides a solution for both classification and regression problems. With this Amazon SageMaker algorithm, you can simultaneously explore different training objectives and choose the best solution from your validation set. You can also explore many models and choose the best one for your needs.
6. Hosting and Using the Model
• You can deploy your trained model by using Amazon SageMaker to handle API calls from applications, or to perform predictions by using a batch transformation.
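Once a model is hosted on a SageMaker endpoint, applications call it through the runtime API; in the hedged sketch below, the endpoint name and CSV payload format are assumptions.

# Hypothetical sketch: calling a hosted SageMaker endpoint for inference.
import boto3

runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="example-xgboost-endpoint",  # assumed endpoint name
    ContentType="text/csv",
    Body="5.1,3.5,1.4,0.2",                   # one feature row as CSV
)
print(response["Body"].read().decode())       # model prediction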
COMPARISON OF MODELS
Sensitivity – The true positive rate (recall): the proportion of actual positive cases that the model correctly identifies.
Specificity – The true negative rate: the proportion of actual negative cases that the model correctly identifies.
OTHER CLASSIFICATION METRICS
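Both metrics fall out of the confusion matrix; this hedged sketch computes them with scikit-learn on made-up labels.

# Hypothetical sketch: sensitivity and specificity from a confusion matrix.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))  # true positive rate
print("specificity:", tn / (tn + fp))  # true negative rate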
HYPERPARAMETER TUNING:
Traditionally, hyperparameter tuning was done manually:
• Someone who had domain experience related to that hyperparameter and use case would manually select the hyperparameters, according to their intuition and experience.
• Then, they would train the model and score it on the validation data. This process would be repeated until satisfactory results were achieved.
• This process is not always the most thorough and efficient way of tuning your hyperparameters.
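Amazon SageMaker automates this search with hyperparameter tuning jobs; as a local stand-in (an assumption, not the SageMaker API), this sketch runs the same select-train-score loop automatically with a grid search.

# Hypothetical sketch: automated hyperparameter search instead of manual tuning.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [3, 5, None]},
    cv=5,                       # score each candidate on validation folds
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))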
CHAPTER 3: INTRODUCING FORECASTING
1. Overview of Forecasting
Forecasting is an important area of machine learning. It is important because so many opportunities for predicting future outcomes are based on historical data.
When you work with time series data, you might consider smoothing for reasons such as reducing the effect of random variation (noise) and outliers in the data.
Using Amazon Forecast involves the following steps:
Import your data – You must import as much data as you have, both historical data and related data. You should do some basic evaluation and feature engineering before you use the data to train a model.
Train a predictor – To train a predictor, you must choose an algorithm. If you are not sure which algorithm is best for your data, you can let Amazon Forecast choose by selecting AutoML as your algorithm. You also must select a domain for your data; if you're not sure which domain fits best, you can select a custom domain. Domains have specific types of data that they require. For more information, see Predefined Dataset Domains and Dataset Types in the Amazon Forecast documentation.
Generate forecasts – As soon as you have a trained model, you can use the model to make a forecast by using an input dataset group. After you generate a forecast, you can query the forecast, or you can export it to an Amazon Simple Storage Service (Amazon S3) bucket. You also have the option to encrypt the data in the forecast before you export it.
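Smoothing itself is simple to show; this hedged pandas sketch applies a 3-point rolling mean to a noisy series with made-up values.

# Hypothetical sketch: smoothing a time series with a rolling mean.
import pandas as pd

demand = pd.Series([20, 22, 95, 21, 23, 24, 22])  # 95 is a noisy spike
smoothed = demand.rolling(window=3, center=True).mean()
print(smoothed.tolist())  # the spike's influence is spread out and reduced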
CHAPTER 4: INTRODUCING COMPUTER VISION
1. Overview of Computer Vision
Computer vision enables machines to identify people, places, and things in images with accuracy at or above human levels, with greater speed and efficiency. Often built with deep learning models, computer vision automates the extraction, analysis, classification, and understanding of useful information from a single image or a sequence of images. The image data can take many forms, such as single images, video sequences, views from multiple cameras, or three-dimensional data.
Public safety and home security
Computer vision with image and facial recognition can help to quickly identify unlawful entries or persons of interest. This process can result in safer communities and a more effective way of deterring crimes.
Authentication and enhanced computer-human interaction
Enhanced human-computer interaction can improve customer satisfaction. Examples include products that are based on customer sentiment analysis in retail outlets, or faster banking services with quick authentication that is based on customer identity and preferences.
Content management and analysis
Millions of images are added every day to media and social channels. The use of computer vision technologies, such as metadata extraction and image classification, can improve efficiency and revenue opportunities.
Autonomous driving
By using computer vision technologies, auto manufacturers can provide improved and safer self-driving car navigation, which can help realize autonomous driving and make it a reliable transportation option.
Medical imaging
Medical image analysis with computer vision can improve the accuracy and speed of a patient's medical diagnosis, which can result in better treatment outcomes and life expectancy.
Manufacturing process control
Well-trained computer vision that is incorporated into robotics can improve quality assurance and operational efficiencies in manufacturing applications. This process can result in more reliable and cost-effective products.
Problem 01: Recognizing food and stating whether the meal is breakfast, lunch, or dinner.
Because the computer vision model classified the objects as milk, peaches, ice cream, salad, nuggets, and a bread roll, the meal was identified as breakfast.
2. Image and Video Analysis
Amazon Rekognition is a computer vision service based on deep learning. You can use it to add image and video analysis to your applications.
Amazon Rekognition enables you to perform the following types of analysis:
Searchable image and video libraries – Amazon Rekognition makes images and stored videos searchable so that you can discover the objects and scenes that appear in them.
Face-based user verification – Amazon Rekognition enables your applications to confirm user identities by comparing their live image with a reference image.
Sentiment and demographic analysis – Amazon Rekognition interprets emotional expressions, such as happy, sad, or surprised. It can also interpret demographic information from facial images, such as gender.
Unsafe content detection – Amazon Rekognition can detect inappropriate content in images and in stored videos.
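The food-recognition problem above maps naturally onto Rekognition's label detection; in this hedged boto3 sketch, the bucket and image names are assumptions.

# Hypothetical sketch: detecting object labels in an image with Rekognition.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-images", "Name": "meal.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)
for label in response["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))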
CASE 03: Sentiment Analysis
STEP 02: Create Training Dataset
STEP 05: Evaluate
CHAPTER 6: INTRODUCING NATURAL LANGUAGE PROCESSING
1. Overview of Natural Language Processing
Discovering the structure of the text – One of the first tasks of any NLP application is to break the text into meaningful units, such as words, phrases, and sentences.
Labelling data – After the system converts the text to data, the next challenge is to apply labels that represent the various parts of speech. Every language requires a different labelling scheme to match the language's grammar.
Representing context – Because word meaning depends on context, any NLP system needs a way to represent context. This is a big challenge because of the large number of contexts.
Applying grammar – Dealing with the variation in how humans use language is a major challenge for NLP systems.
NLP FLOW CHART:
2. Natural Language Processing Managed Services
Amazon Transcribe – Uses:
• Medical transcription
• Subtitles in streaming content and in offline content
Amazon Translate – Uses:
• International websites
• Software localization
Amazon Textract – Uses:
• Document analysis
• Fraud detection
Amazon Lex – Uses:
• Interactive assistants
• Database queries
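These managed services are all called through simple APIs; as one hedged example, the sketch below runs sentiment analysis with Amazon Comprehend, another managed NLP service in this family. The sample text is made up.

# Hypothetical sketch: sentiment analysis with Amazon Comprehend.
import boto3

comprehend = boto3.client("comprehend")

result = comprehend.detect_sentiment(
    Text="The internship gave me hands-on experience with AWS.",
    LanguageCode="en",
)
print(result["Sentiment"])        # e.g., POSITIVE
print(result["SentimentScore"])   # confidence per sentiment class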
SUMMARY OF EXPERIENCE
During my internship with Amazon Web Services (AWS), I embarked on a comprehensive journey into
the realm of cloud computing, exploring the vast capabilities of the AWS platform. This internship,
conducted in partial fulfillment of the requirements for my Bachelor of Technology degree in Computer
Science and Engineering, provided invaluable insights and hands-on experience in deploying and
managing cloud-based solutions.
The internship began with an introduction to AWS Cloud, where I gained a profound understanding of
its key components and services. I delved into compute services like Amazon EC2 and serverless
computing with AWS Lambda. Storage services such as Amazon S3 and database management through
Amazon RDS and DynamoDB were explored. The networking capabilities, analytics and big data
services, AI and machine learning tools, as well as security and identity features were thoroughly
examined.
Practical implementation played a pivotal role in enhancing my skills. I created a virtual private cloud
(VPC) and configured subnets, launched EC2 instances, and set up web servers with Linux scripts. The
experience of hosting a website on Amazon S3 and creating a book catalog with dynamic content added
a real-world dimension to my learning.
The internship extended into areas like setting up a DynamoDB table and performing operations,
configuring MariaDB on server-oriented instances for DDL and DML testing, and automating EC2
start/stop processes using Lambda functions. The implementation of Elastic Load Balancer (ELB) for
traffic load balancing showcased the importance of high availability in cloud environments.
One of the highlights of my internship was the exploration of Auto Scaling for web servers. Creating launch templates, defining scaling policies based on CPU utilization, and configuring the automatic initialization of resources in line with Auto Scaling policies provided a hands-on understanding of dynamic resource allocation.
The journey concluded with reflections on my learning experiences and a comprehensive summary of
the internship. Throughout this internship, I not only acquired technical skills but also developed a
deeper appreciation for the scalability, flexibility, and efficiency that cloud computing offers.
In conclusion, the AWS Cloud Virtual Internship has been a transformative experience, equipping me
with the knowledge and skills essential for navigating the dynamic landscape of cloud computing. The
exposure to real-world scenarios and practical implementations has prepared me for the challenges and
opportunities that lie ahead in the field of technology.
REFLECTION ON LEARNING
Undertaking the AWS Cloud Virtual Internship has been a transformative experience that has
significantly broadened my understanding of cloud computing and its practical applications. This
internship provided an immersive environment to explore the extensive capabilities of Amazon Web
Services (AWS), a leading cloud computing platform.
Through hands-on tasks and projects, I delved into various AWS services, including EC2, S3,
DynamoDB, Lambda, CloudWatch, and Elastic Load Balancer (ELB). The step-by-step procedures for
setting up and configuring these services enhanced my technical skills and deepened my comprehension
of cloud architecture.
One of the notable aspects of this internship was the practical exposure to real-world scenarios. From
setting up a simple web server to implementing complex solutions like auto-scaling and load balancing,
each task contributed to a holistic understanding of cloud infrastructure management. The emphasis on
creating a comprehensive report also improved my documentation and reporting skills.
The experience of automating processes using Lambda functions, integrating services for seamless
workflows, and setting up monitoring and alerting systems in CloudWatch has been particularly
insightful. These skills are not only valuable in the context of AWS but are transferable to other cloud
platforms, reinforcing the versatility of the knowledge gained.
Furthermore, the internship exposed me to database management with DynamoDB and MariaDB,
offering a practical perspective on handling data in cloud environments. Creating DynamoDB tables,
executing queries, and managing data in MariaDB provided a solid foundation in database operations.
Implementing Elastic Load Balancer (ELB) for traffic load balancing and auto-scaling for web servers
deepened my understanding of ensuring high availability and scalability in cloud applications. The
experience of creating Launch Templates, Auto Scaling Groups, and defining policies for scaling based
on CPU utilization was instrumental in mastering these advanced concepts.
In conclusion, the AWS Cloud Virtual Internship has been an enriching journey that transcended
theoretical knowledge to practical proficiency. The skills acquired are not only relevant in the context of
cloud computing but also align with industry demands for professionals well-versed in cloud
technologies. This internship has ignited a passion for continuous learning and exploration in the
dynamic field of cloud computing.
CONCLUSION
These chapters described how model explainability relates to AI/ML solutions, giving customers insight into explainability requirements when initiating AI/ML use cases. Using AWS, four pillars were presented to assess model explainability options to bridge knowledge gaps and requirements for simple to complex algorithms. To help convey how these model explainability options relate to real-world scenarios, examples from a range of industries were demonstrated. It is recommended that AI/ML owners or business leaders follow these steps when initiating a new AI/ML solution:
• Collect business requirements to identify the level of explainability required for your business to accept the solution.
• Work with an AI/ML technician to communicate the model explainability assessment and find the optimal AI/ML solution to meet your business objectives.
• After the solution is completed, revisit the model explainability assessment to evaluate whether business requirements are continuously met.
By taking these steps, we will mitigate regulation risks and ensure trust in our model. With this trust, when the time comes to push your AI/ML solution into an AWS production environment, we will be ready to create business value for our use case.
REFERENCES
1. AWS Documentation: https://docs.aws.amazon.com/
2. AWS Training: https://aws.amazon.com/training/
3. AWS Well-Architected Framework: https://aws.amazon.com/architecture/well-architected/
4. AWS Whitepapers: https://aws.amazon.com/whitepapers
5. AWS Blogs: https://aws.amazon.com/blogs/
6. AWS YouTube Channel: https://www.youtube.com/user/AmazonWebServices
7. AWS Samples on GitHub: https://github.com/aws-samples
8. AWS Architecture Center: https://aws.amazon.com/architecture/
9. A Cloud Guru: https://acloudguru.com/
10. AWS CloudFormation Templates: https://aws.amazon.com/cloudformation/aws-cloudformation-templates/
11. AWS Serverless: https://aws.amazon.com/serverless/
12. AWS Solutions: https://aws.amazon.com/solutions/