Getting Started With AWS in Node
Getting Started with Amazon Web Services
(AWS) in Node.js
StackAbuse
© 2020 StackAbuse
Copyright © by StackAbuse.com
Authored by Joshua Simpson, David Landup, Robley Gori
Contributions by Janith Kasun
Edited by David Landup
Cover design and illustrations by Jovana Ninković
The images in this book, unless otherwise noted, are the copyright of StackAbuse.com.
The scanning, uploading, and distribution of this book without permission is a theft of the content
owner’s intellectual property. If you would like permission to use material from the book (other than
for review purposes), please contact [email protected]. Thank you for your support!
First Edition: November 2020
Published by StackAbuse.com, a subsidiary of Unstack Software LLC.
The publisher is not responsible for links, websites, or other third-party content that are not owned
by the publisher.
We’ve compiled this beginner-level book aimed at novice developers who’d like to get to know AWS
through concrete, practical examples and useful tips. We’ll be covering S3, EC2, Lambda, SNS, SQS
and RDS in an attempt to get our readers up to speed with some of AWS’ most popular services.
Each chapter will feature an introduction to the topic and a problem that the relevant service solves,
followed by a setup and a demo application.
By the end of this book, you should be able to provision scalable, AWS-integrated JavaScript
applications, with a solid knowledge foundation to get up to speed with any of the other services
offered by this tech giant.
¹https://aws.amazon.com/
2. Prerequisites
Node.js
Needless to say, to follow the contents of the book, you’ll want to have Node.js installed on your
machine. We’ll be using npm to install modules such as Express that’ll help us build demonstration
applications faster and easier.
AWS Account
Amazon Web Services (AWS) provides a collection of tools for building applications in the cloud. To
interact with them and use them, you’ll obviously need an AWS account.
If you don’t already have one, head over to the AWS frontpage and sign up for an account. Depending
on your usage, select either a Professional or Personal account. If you intend to use it within a
company or an educational institution/organization - you should select the Professional option.
AWS has a free tier² for a lot of awesome stuff! DynamoDB, AWS Lambda, Amazon SNS, Glacier,
SES, etc. are always free, though some monthly limitations are imposed.
Services like EC2, S3, RDS, API Gateway, Cloud Directory, etc. are free for 12 months, with similar
limitations of monthly usage.
And finally, services like SageMaker, Lightsail, GuardDuty, etc. offer a free trial, after which you’ll
have to open your wallet to continue using them.
Connecting an AWS account with our applications can be done in several ways - directly through
credentials, through an IAM user, via the CLI…
For each application, we’ll use a different way to connect an application to our account. Feel free to
use the one you prefer, as they’re interchangeable.
PLEASE NOTE: AWS, like all software, is continually being updated, both in terms of
design and functionality. We’ve done our best to use the latest versions and updates in
this book.
The user interfaces you’ll be seeing on your machine, at the time of reading, may be
slightly or significantly different from the interfaces seen in the book. We’ll update the
book when a new interface is released in an attempt to keep it as up-to-date as we can.
²https://aws.amazon.com/free/
Postman
Postman³ is a useful tool for creating and sending requests. We’ll be using it in some of the chapters
to test out our endpoints.
Postman is optional, and really, you can use any tool to test out the endpoints, even your internet
browser. We’ll also be using curl in some chapters, due to its simplicity.
Docker
Docker⁴ allows us to bundle up our applications into small, easily deployable units that can be
run anywhere Docker is installed. This means no more ‘but it works on my machine!’
headaches.
This book will assume basic familiarity with Docker, and won’t be going into any depth on it.
³https://www.postman.com/
⁴https://www.docker.com/
3. Cloud File Hosting
Much of the software and web apps we build today require some kind of hosting for files - images,
invoices, audio files, etc. The traditional way to store files was simply to save them on the server’s
drives.
This requires our server’s drives to be large enough to reasonably hold all the data we might want
to save.
Once the drives are filled up, we’d have to manually insert new drives, or pay a service provider for
more space.
This introduces us to a step-like upgrading path, where an investment is made for the initial size,
then for each new size upgrade, an investment is required which will suffice for a set amount of
time. This makes scaling hard and expensive. Much thought would have to go into how much data
can be saved, how the data is saved and how it is handled.
This also requires the developers to dabble with the file system, which might change between servers
which further complicates things.
Additionally, what if these files are large? It’s not uncommon to save and serve large images, video
or audio files. Storing a large amount of these can be fixed with expensive hardware upgrades, such
as installing more storage drives. However, what happens when the end-user wants to access this
data?
An index page of a website that just serves a static file can be as small as 1KB. Larger pages with
hundreds of lines, which also import various JavaScript files of medium to large size, can easily amount
to ∼100KB in size.
Even that size is still roughly 50 times smaller than the 5MB of a high-quality image.
Loading an image such as that, alongside a small HTML file can lead to slow loading times and bad
user experience, even if we load the image lazily while the user is already on the page:
This introduces a disproportionate and unnecessary strain on the server’s resources. The same
amount of resources required to serve a single large image could be used to serve hundreds
of pages to other users.
To offload the servers, developers started hosting files with cloud storage providers such as AWS S3,
Google Cloud Storage, etc.
These services allow developers to ditch having to store the files themselves, handle their structures
and mess around with the file system.
Instead, the file hosting service takes care of this for them. Additionally, a huge amount of resources
is released back into the server and application, which can then run faster and serve other, smaller
files:
Another significant benefit is security. If these files contain sensitive data, a developer will spend a
long time securing the file storage system. This takes time away from development and focuses it
on something a developer shouldn’t need to take care of just to get the application up and running.
4. AWS Simple Storage Service (S3)
Amazon’s solution to file hosting is AWS Simple Storage Service, most often referred to as S3.
S3, or Simple Storage Service, is a cloud storage service provided by Amazon Web Services. Using S3,
you can host any number of files while paying for only what you use.
S3 also provides multi-regional hosting to customers by their region, and is thus able to really
quickly serve the requested files with minimal delay.
The service is based on Buckets - which are analogous to having a traditional server. Each of these
buckets can be set up in a different region to optimize speed.
AWS Credentials
To get started, you need to generate the AWS Security Key Access Credentials first. Using them,
you’ll give your application access to your account. You can do this through an IAM (Identity and
Access Management) user or directly. We’ll be setting up an IAM user in the next chapter, so for this
application, you’ll access your account directly.
To do so, login to your AWS Management Console and click on your username:
After that you can either copy the Access Key ID and Secret Access Key from this window or you
can download it as a .CSV file:
Creating an S3 Bucket
Now let’s create an AWS S3 Bucket with proper access. We can do this using the AWS management
console or by using Node.js.
To create an S3 bucket using the management console, go to the S3 service by selecting it from the
service menu:
Select “Create Bucket” and enter the name and region for it. If you already know which region
the majority of your users will come from, it’s wise to select a region as close to theirs as possible.
This will ensure that the files from the server are served in a more optimal timeframe.
The name you select for your bucket must be a unique name among all AWS users, so try a new one
if the name is not available:
Follow through the wizard and configure permissions and other settings per your requirements. By
default, you’ll have some options turned on or off. These are options such as blocking public access
and bucket versioning.
To create the bucket using Node.js, we’ll first have to set up our development environment.
Development Environment
First, let’s initialize a Node project with the default settings:

$ npm init -y
To start using any AWS Cloud Services in Node.js, we have to install the AWS SDK (Software
Development Kit).
Install it using your preferred package manager like npm:
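Since we’re using npm, that’s a single command (the --save flag records aws-sdk in package.json):

$ npm install aws-sdk --save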
Once aws-sdk is installed, it’ll be automatically added to the dependencies section of our package.json
file thanks to the --save option.
Demo Application
Creating an S3 Bucket
If you have already created a bucket manually, you may skip this part. But if not, create a file, say,
create-bucket.js in your project directory.
Import the aws-sdk library to access your S3 bucket:
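In create-bucket.js, that boils down to a single require:

const AWS = require('aws-sdk');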
Now, let’s define three constants to store the ID, SECRET, and BUCKET_NAME. These are used to identify
and access our bucket:
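A minimal sketch of that setup could look like the following - the placeholder credentials and the example bucket name are stand-ins you’d replace with your own values:

// Credentials generated in the AWS console
const ID = '<YOUR_ACCESS_KEY_ID>';
const SECRET = '<YOUR_SECRET_ACCESS_KEY>';

// The name of the bucket we want to create - it has to be globally unique
const BUCKET_NAME = 'my-example-node-bucket';

// Initialize the S3 interface with our credentials
const s3 = new AWS.S3({
    accessKeyId: ID,
    secretAccessKey: SECRET
});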
With the S3 interface successfully initialized, we can go ahead and create the bucket:
const params = {
    Bucket: BUCKET_NAME,
    CreateBucketConfiguration: {
        // Set your region here - note that for us-east-1 (the default),
        // you can omit CreateBucketConfiguration entirely
        LocationConstraint: 'us-east-1'
    }
};

let promiseResult = s3.createBucket(params).promise();

promiseResult.then(data => {
    console.log('Bucket Created Successfully', data.Location);
}).catch(err => {
    console.error(err, err.stack);
});
The data object contains useful information about the object (in this case, Bucket) we’re working
with. Here, we’ve extracted the Location parameter, which holds the URL of the bucket we’ve
created.
You can use a callback instead of promises here as well.
At this point we can run the code and test if the bucket is created on the cloud:
$ node create-bucket.js
If the code execution is successful you should see the success message, followed by the bucket address
in the output:
You can visit your S3 dashboard to verify that the bucket is created:
Uploading Files
At this point, let’s implement the file upload functionality. In a new file, e.g. upload.js, import the
aws-sdk library to access your S3 bucket and the fs module to read files from your computer:
const fs = require('fs');
const AWS = require('aws-sdk');
We need to define three constants to store again - ID, SECRET, and BUCKET_NAME and initialize the S3
client as we did before.
Now, let’s create a function that accepts a fileName parameter, representing the file we want to
upload:
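A sketch of such a function might look like this - we’ve assumed it also takes the bucket name, to match the uploadFile('cat.jpg', BUCKET_NAME) call used later in the chapter:

const uploadFile = (fileName, bucketName) => {
    // Read the file's contents into a buffer
    const fileContent = fs.readFileSync(fileName);

    // The required parameters for the upload
    const params = {
        Bucket: bucketName,
        Key: fileName, // The name we want to save the file as
        Body: fileContent
    };

    s3.upload(params).promise()
        .then(data => {
            console.log(`File uploaded successfully. ${data.Location}`);
        })
        .catch(err => {
            console.error(err, err.stack);
        });
};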
What we’ve done here is read the file’s contents as a buffer. After reading it, we can define the
needed parameters for the file upload, such as Bucket, Key, and Body - passing the buffer with the
file contents as the body.
Besides these three parameters, there’s a long list of other optional parameters. To get an idea of the
things you can define for a file while uploading, here are a few useful ones:
• StorageClass: Defines the storage class for the object. S3 is intended to provide fast file
serving, but in case files aren’t accessed frequently, you can use a different storage class. For
example, files that are hardly ever touched can be stored in “S3 Glacier Storage”, where the
price is very low compared to “S3 Standard Storage”. However, it will take more time to
access those files if you need them, and they’re covered by a different service level agreement.
• ContentType: Sets the MIME type of the file. The default type is binary/octet-stream.
Adding a MIME type like image/jpeg will help browsers and other HTTP clients identify
the type of the file.
• ContentLength: Sets the size of the body in bytes, which comes in handy if the body size cannot
be determined automatically.
• ContentLanguage: Set this parameter to define which language the contents are in. This will
also help HTTP clients identify or translate the content.
For the Bucket parameter, we’ll use our bucket name, whereas for the Key parameter we’ll add the
file name we want to save as, and for the Body parameter, we’ll use fileContent.
With that done, we can upload any file by passing the file name to the function:
uploadFile('cat.jpg', BUCKET_NAME);
You can replace cat.jpg with a file name that exists in the same directory as the code, a relative file
path, or an absolute file path.
At this point, we can run the code and test out if it works:
$ node upload.js
If everything is fine, you should see an output with the location of the data you just uploaded:
Download Files
Sometimes, you might want to download a file from an S3 bucket. This may be done for additional
processing or editing on the file, after which, you’d upload it again, or store it on another service or
computer.
Let’s make a downloadFile() function for this:
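Here’s one possible sketch, using the same imports and S3 client as before - the parameter order (bucket, key, file path) is an assumption based on how the function is described below:

const downloadFile = (bucketName, key, filePath) => {
    const params = {
        Bucket: bucketName,
        Key: key
    };

    s3.getObject(params).promise()
        .then(data => {
            // The file arrives as a buffer in data.Body
            console.log(data);
            fs.writeFileSync(filePath, data.Body);
        })
        .catch(err => {
            console.error(err, err.stack);
        });
};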
Running this code will download the file with the corresponding key from the bucket specified by
the bucketName, and save that file on the filePath.
If we call:
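Assuming the sketch above, something along the lines of:

downloadFile(BUCKET_NAME, 'cat.jpg', './cat.jpg');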
We’ll be greeted with our cat.jpg file that we uploaded in the previous step, located in the project
folder, named - cat.jpg. You can put a more elaborate filePath if you’d like.
We’re also greeted with the output:
As you can see, the file is downloaded as a buffer, which we can use with fs to write it into a file.
Deleting Files
Similar to the previous two functions, with the same imports, we’ll also define a deleteFile()
function. It follows the same format - we specify the bucket we’re working with, the key we want
to delete and set up the required parameters for the s3 object.
Then, all we have to do is call the deleteObject() function, with that information:
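A sketch of the whole function, under the same assumptions as the previous two:

const deleteFile = (bucketName, key) => {
    const params = {
        Bucket: bucketName,
        Key: key
    };

    s3.deleteObject(params).promise()
        .then(() => {
            console.log(`${key} deleted successfully`);
        })
        .catch(err => {
            console.error(err, err.stack);
        });
};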
Now, if we wanted to delete our cat.jpg file from the bucket, we could call the function:
deleteFile(BUCKET_NAME, 'cat.jpg')
event illustration
By contrast, a command has a clear target or multiple targets. The commands are sent off to
individual recipients who then respond to that command:
command illustration
6. AWS Simple Notification Service
(SNS)
A lot of technology that we see relies on a very immediate request/response cycle - when you make
a request to a website, you get a response containing the website you requested, ideally almost
immediately. This all relies on the user making the active decision to request that data.
A very different, though very common model is the ‘publish/subscribe’ model, often referred to
as the Pub/Sub model. The AWS Simple Notification Service (SNS) is a super scalable service that
allows users to implement the publish/subscribe model with ease.
This allows us to send texts, emails, push notifications, or other automated messages to other targets
across multiple channels at the same time.
In this chapter, we’ll build a web application that publishes messages to multiple subscribers, using
SNS. There are multiple ways we can go about this. We can send informational messages via email
or SMS, but we can also send messages to other web applications via HTTP.
• Publisher: A service that can broadcast out messages to other services listening (subscribed)
to it.
• Subscriber: Any service that the publisher will broadcast to.
In order to become a subscriber, a service needs to notify the publisher that it wants to receive its
broadcasts, as well as where it wants to receive those broadcasts - at which point the publisher
will include it in its list of targets when it next publishes a message.
A good metaphor for the publish/subscribe model is any newsletter that you’ve signed up for! In
this instance, you have actively gone to the publisher and told them you want to subscribe as well
as given them your email.
Nothing is done immediately once you’ve subscribed, and you don’t receive any previous issues of
the newsletter.
When the publisher publishes a message (sends out their monthly newsletter) - an email arrives. You
can then choose to do what you will with the email - you could delete it, read it, or even act on some
of the details in it.
This screen has several options, although it only displays one group by default - the name (which
is mandatory) and the display name, which can optionally be set. The display name is used if you publish
to SMS subscribers from the topic.
You’ll also be prompted to specify if you want to use a FIFO or Standard topic type. This cannot
be modified after the topic has been created. If you specifically require the message-order to be
preserved, you’ll want to use the FIFO type. This goes hand-in-hand with AWS SQS, which is covered
in the next chapter as well. However, this somewhat limits the throughput of messages - down to
300 publishes per second.
⁵https://docs.aws.amazon.com/sns/latest/dg/sns-supported-regions-countries.html
This is by no means a small amount, but with really large topics, it can be a factor.
For other purposes, you’d want to use a Standard topic type - which doesn’t necessarily preserve
the order of messages if you send them in batches, but it offers the highest throughput you can get
at that time. It also has support for more subscription protocols, such as SQS, Lambda, HTTP, SMS,
email and even mobile application endpoints.
We’ve chosen the Standard topic type, since we’ll be working with HTTP, SMS and email in this
chapter, as well as SQS in the following chapters.
Some of the other options include:
• Message Encryption: Encrypts messages after they’re sent by the publisher. This is really only
useful if you’re sending highly sensitive/personal data.
• Access Policies: Defines exactly who can publish messages to the topic.
• Retry Policy: Defines how delivery is retried in case a subscriber fails to receive a published
message for whatever reason.
• Delivery Status Logging: Allows you to set up an IAM (Identity and Access Management) role
in AWS that writes delivery status logs to AWS CloudWatch.
For now, we’re going to fill in a name and a display name, select the topic type, and then scroll to
the bottom and hit ‘Create topic’. Take note of the ARN⁶ of the newly created topic, as we’ll need it
later.
⁶https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
Click ‘Users’ on the left, then select ‘Add user’ - you’ll be faced with a screen that looks like this:
Let’s create a user with the name SNSUser, and check the box for programmatic access. We’ll want
to access it through our application programmatically, not only through the AWS console.
This allows anybody with the credentials to access specific parts of AWS via the CLI, or the JavaScript
SDK we’re going to use. We don’t need to give them AWS Management Console access, as we don’t
plan on having those credentials interact with AWS through a browser, like we’re doing now. The
users with this access will be able to access AWS programmatically, and that’s it.
Click ‘Next’, and you’ll be presented with permissions. Click on the ‘Attach existing policies directly’
button and by searching ‘SNS’, you’ll easily be able to find the ‘SNSFullAccess’ option:
IAM users, roles, and policies are all a big topic that is definitely worth investigating - for now
though, this should work for us.
By hitting ‘Next: Tags’ in the bottom right corner, and then ‘Next: Review’ in the same location, you
should see a summary screen that looks something like this:
Note: Make sure you copy the Access Key ID and Secret Access Key or download the .CSV file, as this
is the only time you can fetch these credentials - otherwise you’ll need to create a new user.
Whilst we’re talking about credentials, again, make sure you do not post these credentials anywhere
online, or commit them to a Git repository. Bad actors will scour GitHub for repositories with
credentials in them so that they can get access to, and use resources on, your AWS account, which
will cost you some sweet money.
Finally, we’re going to set our credentials locally so that our Node application can use them in the
next step.
The default region in AWS is us-east-2 - which is based in Ohio. If you changed it while setting up
your SNS topic, set this to the corresponding region (for example, eu-west-1):
# Create the .aws directory if it doesn't already exist
mkdir -p ~/.aws
touch ~/.aws/credentials
echo '[sns_profile]' >> ~/.aws/credentials
# The access key ID from the IAM user
echo 'aws_access_key_id = <YOUR_ACCESS_KEY_ID_HERE>' >> ~/.aws/credentials
# The secret access key from the IAM user
echo 'aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY_HERE>' >> ~/.aws/credentials
# From the regions page, examples include: us-east-1, us-west-1, eu-west-1, etc.
echo 'region = <YOUR_AWS_REGION_HERE>' >> ~/.aws/credentials
On Linux and macOS, the config and credentials files are located at:

~/.aws/config
~/.aws/credentials

While on Windows, they’re located at:

%USERPROFILE%\.aws\config
%USERPROFILE%\.aws\credentials
Demo Application
The application will have a few ways of notifying subscribers and we’ll be starting off with emails.
We’ll bootstrap it with Express and it’ll have two endpoints. The first will be for adding subscribers
to our topic, and the second will be for sending a notification to all of our subscribers.
Note: With the email approach, we’re laying down the foundation for other approaches as well. The
difference between sending SMS, HTTP or email notifications is just a few parameters in the calls
to SNS.
Firstly, let’s create a folder for our project in the terminal, move into the directory, and initialize our
Node app with the default settings:
$ mkdir node-sns-app
$ cd node-sns-app
$ npm init -y
Next, we need to install the Express and AWS-SDK modules so that we can use them both:
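Both are available on npm:

$ npm install express aws-sdk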
Then, let’s create an index.js file for our app:

$ touch index.js
This file will initially contain the boilerplate code required to set up a basic Express app/server, as
well as the code required to connect to AWS. We’ll instantiate a credentials constant, from the
~/.aws/credentials file we’ve created in the previous steps, for the profile we’ve set within it.
This const credentials will be passed to the AWS.SNS constructor, alongside our preferred region.
This in turn gives us an sns instance that we can use to send messages to topics existing on our
account:
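A minimal sketch of that boilerplate could look like this - the sns_profile name matches the profile we wrote into ~/.aws/credentials, while the port and region are assumptions you can adjust:

const express = require('express');
const AWS = require('aws-sdk');

// Load the profile we defined in ~/.aws/credentials
const credentials = new AWS.SharedIniFileCredentials({ profile: 'sns_profile' });

// Create an SNS instance with those credentials and our preferred region
const sns = new AWS.SNS({
    credentials: credentials,
    region: 'us-east-1'
});

const app = express();
app.use(express.json());

// Returns the JSON contents of our sns instance
app.get('/status', (req, res) => res.json(sns));

app.listen(3000, () => console.log('SNS app listening on port 3000'));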
For the /status endpoint, we’ve simply returned the JSON contents of our sns instance. Now, we
can go ahead and run the file, just to check if AWS is working correctly with our application:
$ node index.js
Visiting localhost:3000/status will print out a big chunk of JSON that has your AWS credentials
in it:
Subscription Endpoints
First, we need to add a POST endpoint for adding subscribers. Below the /status endpoint, we’ll
add the /subscribe endpoint:
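A callback-based sketch of that endpoint could look like this - the params are explained right below:

app.post('/subscribe', (req, res) => {
    let params = {
        Protocol: 'EMAIL',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: req.body.email
    };

    sns.subscribe(params, (err, data) => {
        if (err) {
            console.error(err, err.stack);
            return res.status(500).json(err);
        }
        res.json(data);
    });
});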
If you’re not constrained to using only callbacks, AWS supports the new promise() function, which
can be used to promisify your code very easily.
Instead, we could write:
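The same endpoint, sketched with the promise() approach instead of a callback:

app.post('/subscribe', (req, res) => {
    let params = {
        Protocol: 'EMAIL',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: req.body.email
    };

    sns.subscribe(params).promise()
        .then(data => res.json(data))
        .catch(err => {
            console.error(err, err.stack);
            res.status(500).json(err);
        });
});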
Okay, let’s walk through this. First, we’re creating a POST endpoint. Inside of that endpoint, we’re
creating a params variable, ready to hand our subscribe request off to SNS.
The variable needs a few things:
• Protocol: This could be HTTP/S, EMAIL, SMS, SQS (if you want to use AWS’ queueing service),
or even a Lambda function.
• TopicArn: This is the ARN - a unique identifier for the SNS topic you set up earlier. If you don’t
have it, go and grab it from your Topic in your browser and paste it in the code now.
• Endpoint: The endpoint type depends on the protocol. Because we’re sending emails, we would
give it an email address, but if we were setting up an HTTP/S subscription, we would put a URL
address instead, or a phone number for SMS.
Since we’d like to send emails, we’ll set our parameters like so:
let params = {
    Protocol: 'EMAIL',
    TopicArn: '<YOUR_TOPIC_ARN_HERE>',
    Endpoint: req.body.email
};

If we were subscribing a phone number via SMS instead, the parameters would look like:

let params = {
    Protocol: 'SMS',
    TopicArn: '<YOUR_TOPIC_ARN_HERE>',
    Endpoint: req.body.number
};
The subscribe() function accepts these parameters and performs the subscription. It effectively
subscribes an endpoint to an AWS SNS topic. For HTTP and email subscriptions, the owner of the
endpoint must perform the ConfirmSubscription action to actually subscribe itself.
For emails, this is a confirmation email, asking them if they’d like to subscribe. For HTTP, it’s a bit
different, which is covered a bit later in the chapter. For SMS, you don’t need to do this, and this is
a feature we’ll be utilizing in a later chapter.
Publishing Endpoints
Once we can subscribe to a topic, we’ll want to make an endpoint that will actually send out the
notifications to the subscribers:
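A sketch of a /send endpoint, publishing whatever subject and message arrive in the request body - the route name and response handling are assumptions:

app.post('/send', (req, res) => {
    let params = {
        Message: req.body.message,
        Subject: req.body.subject,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    sns.publish(params).promise()
        .then(data => {
            console.log(data);
            res.json(data);
        })
        .catch(err => {
            console.error(err, err.stack);
            res.status(500).json(err);
        });
});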
Just like last time, the parameters depend on which type of endpoint will be receiving the
notification. These will be a combination of the TopicArn, TargetArn, PhoneNumber, Message, Subject,
MessageStructure and MessageAttributes parameters.
Now, let’s go one-by-one and implement the nuanced subscription and publishing endpoints.
Email Endpoint
Let’s start off with a handler that subscribes an email to the topic:
Once this is in, start your server again. You’ll need to send a request with a JSON body to your
application - you can do this with tools like Postman, or if you prefer you can do it on the CLI:
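With curl, assuming the /subscribe route sketched earlier, that could look like:

$ curl -H "Content-Type: application/json" -X POST -d '{"email": "<YOUR_EMAIL_HERE>"}' http://localhost:3000/subscribe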
Note: If you receive an InvalidParameter: Invalid Parameter: TopicArn error, the first thing you
should check is your region. If the Topic is set to a different region than the one in your credentials
file, the parameter will be invalid.
If the endpoint and message are correct, that email address will receive an email asking you to
confirm your subscription - any subscription created via AWS SNS needs to be confirmed by the
endpoint in some form, otherwise AWS could be used maliciously for spam or DDoS-type attacks.
You’ll also be greeted with a JSON response from AWS SNS:
{
    ResponseMetadata: { RequestId: '2c22c687-79b2-5b5e-bea8-9e9d174bd34e' },
    SubscriptionArn: 'pending confirmation'
}
Now, depending on your email service provider, the confirmation email might end up in “Spam” so
make sure to check that inbox as well:
Now, let’s check if the subscription is indeed there on our AWS dashboard:
Again, let’s take a look at what the parameters here are made of:
• Message: This is the message you want to send - in this case, it would be the body of the email.
• Subject: This field is only included because we’re sending an email - this sets the subject of
the email.
• TopicArn: This is the Topic that we’re publishing the message to - this will publish to every
email subscriber for that topic.
You can send a message now using Postman, or curl - so long as we’re passing in our parameters
for the subject and message:
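For example, again assuming the /send route from the sketch above:

$ curl -H "Content-Type: application/json" -X POST -d '{"subject": "Hello from SNS", "message": "This is a test notification."}' http://localhost:3000/send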
{
    ResponseMetadata: { RequestId: 'd6ec24b4-c6a0-53ae-a687-7caeb78b3a99' },
    MessageId: '0a6dc748-ae52-5172-a14d-9f52df79d845'
}
Once this request is made, all subscribers to the endpoint should receive this email! Congratulations,
you’ve just published your first message using SNS and Node!
SMS Endpoint
If you remove the subject field, you can send 160 character SMS messages to any subscribed phone
number(s). Let’s make a /subscribe-sms handler:
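A sketch of it, mirroring the email version but with the SMS protocol and a phone number from the request body:

app.post('/subscribe-sms', (req, res) => {
    let params = {
        Protocol: 'SMS',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: req.body.number
    };

    sns.subscribe(params).promise()
        .then(data => {
            console.log(data);
            res.json(data);
        })
        .catch(err => {
            console.error(err, err.stack);
            res.status(500).json(err);
        });
});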
Restart your application, and then make a request to your app with the phone number you want to
subscribe with:
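For example, with a made-up number:

$ curl -H "Content-Type: application/json" -X POST -d '{"number": "+11234567890"}' http://localhost:3000/subscribe-sms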
Make sure the number you send begins with a + followed by your country code. The Node application will return
a lengthy JSON response from SNS for this:
IncomingMessage {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 16384,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    pipes: null,
    pipesCount: 0,
    flowing: true,
    ended: true,
    endEmitted: true,
    reading: false,
    sync: false,
    needReadable: false,
    emittedReadable: false,
    ...
And we can see the number in our subscription list, alongside the email:
Note that we didn’t have to confirm this subscription. It’s approved automatically.
Finally, let’s create the endpoint to send an SMS to any subscribed phone(s):
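A sketch of it - effectively the /send endpoint without the Subject field; the route name is an assumption:

app.post('/send-sms', (req, res) => {
    let params = {
        Message: req.body.message,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    sns.publish(params).promise()
        .then(data => {
            console.log(data);
            res.json(data);
        })
        .catch(err => {
            console.error(err, err.stack);
            res.status(500).json(err);
        });
});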
Restart your app, and then send a request to it with a message parameter in the body:
{
    ResponseMetadata: { RequestId: '64ad9304-3a97-5d9e-b5f1-cb1e4ed66546' },
    MessageId: 'eebc8987-5df5-5e18-8647-d19d29b4f6c0'
}
And any subscribed numbers should receive an SMS shortly after! As a point of note, this will
come from a sender called 'NOTICE', and if you’ve used any other services that are using a minimal
configuration of SNS, your message might get thrown into the same text chain:
SMS Message
It’ll typically be bundled with Google verification numbers or confirmations of orders online.
Depending on your phone and provider, the message will also include the Display Name from your
Topic:
Note: SNS doesn’t differentiate between SMS and email subscribers with this setup. Since we have
two subscribers, one using an email and one using a phone number - both subscribers will get the
same message on their respective endpoints.
Since there’s no subject field, the email subscribers will receive a generic AWS Notification Message
subject. Just keep this in mind if you support both SMS and email subscribers for your application.
This issue is addressed in the “Mixed Messaging with MessageStructure” section.
HTTP Endpoint
You could set up a service designed to receive the message, in case you want to trigger a background
action, but don’t necessarily care about immediately receiving the result.
Because we don’t want to send the contents of a GET request as an email, let’s create a new topic and
subscription - we’re going to call this one myHTTPTopic, but otherwise it’s the same flow as before.
The subscription is a little trickier without setting up the application on a server - we’re going to
use ngrok instead. If you’re not familiar with ngrok - it can expose your local application to the
public eye. It’s typically used to test out applications, build webhook integrations or send previews
to clients.
Download ngrok⁷ and unzip it. Then, we’ll want to create a new shell session (keep the current one
open as well, we’ll need that to run our app) and navigate to the directory with ngrok in it.
Run ngrok with:
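From the directory you unzipped it into, pointing it at our app’s port:

$ ./ngrok http 3000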
⁷https://ngrok.com/download
ngrok
That URL is now exposing your port 3000 to the world. To test that this is working, copy the HTTP
endpoint (highlighted in the above screenshot), and paste it into a browser and append /status to
it (for example: http://25fb73b0.ngrok.io/status). This should take you to the status endpoint of your
application.
We will need to leave ngrok running for the rest of this section - if you face any issues, make sure
that ngrok is still running, and that the URL hasn’t changed. If it does, you will need to start from
this point again.
Much like the email, we need to first add and confirm a subscriber. We’ll do this manually for now
so that we get an understanding of each step of the process, but in production you might want to
build services that can subscribe themselves automatically.
First, we need to add the following code to our Express app. This will log a confirmation URL that
we’re expecting to receive:
app.post('/http-subscribe', (req, res) => {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
        const payload = JSON.parse(body);

        if (req.headers['x-amz-sns-message-type'] === 'SubscriptionConfirmation') {
            console.log(payload.SubscribeURL);
        }
        res.sendStatus(200);
    });
});
This code will take in the request that comes into your application, and check to see if it’s a
SubscriptionConfirmation message from SNS.
Run your app, make sure that ngrok is still up, and copy the ngrok URL. Next, let’s move back to
AWS and create a new subscription, selecting the new topic, and setting the protocol as HTTP. Set
the endpoint to the ngrok URL of your new endpoint - it should resemble the following:
Once you’ve created this, you should receive a ‘confirmation request’ to your running node app, like
the following:
Copy and paste this URL into a browser and your new endpoint will be confirmed. You can check if
this was successful by navigating to the ‘Subscriptions’ page in AWS SNS and checking the ‘Status’
of the subscription:
Next, we’re going to add some logic for dealing with actual notifications sent. Let’s update our
http-subscribe endpoint:
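A sketch of the updated handler - it keeps logging the confirmation URL for new subscriptions, and additionally logs the Message of any actual notification that arrives:

app.post('/http-subscribe', (req, res) => {
    let body = '';
    req.on('data', chunk => { body += chunk; });
    req.on('end', () => {
        const payload = JSON.parse(body);

        if (req.headers['x-amz-sns-message-type'] === 'SubscriptionConfirmation') {
            console.log(payload.SubscribeURL);
        } else if (req.headers['x-amz-sns-message-type'] === 'Notification') {
            // React to the published message - here we just log it
            console.log(payload.Message);
        }
        res.sendStatus(200);
    });
});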
To test this, with ngrok and our app still running, let’s manually publish a message through SNS.
Head to ‘Topics’ in AWS SNS, and click on your new HTTP topic. On the following page, click
‘Publish Message’. On the following page scroll down and enter a message:
Then scroll to the bottom and hit ‘Publish Message’. If you come back to the shell session your app
is running in, you should see something like the following:
We’ve just logged the payload’s message in this example, though you can build pretty complex logic
to react differently to different kinds of notifications.
For instance a Payment notification can kick off a payment processing job, or an UpdateDB notification
could update a database record.
Message Templating
Seeing as your message is a string, you could use string interpolation for dynamic input - for example:
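A hypothetical example, pulling a name out of the request body with a template literal:

const customerName = req.body.name;

let params = {
    Message: `Hi ${customerName}, thanks for signing up to our newsletter!`,
    TopicArn: '<YOUR_TOPIC_ARN_HERE>'
};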
If you have subscribers with differing endpoints - such as SMS and email subscribers, you might
want to send different messages to each.
For example, email messages can have entire templated pages, while SMS subscribers get a link
to a promotion. To achieve this, we’ll use the MessageStructure parameter and change our /send
endpoint a bit:
app.post('/send', (req, res) => {
    const message = {
        default: 'SNS Notification',
        email: 'Hello from SNS on email',
        sms: 'Hello from SNS on SMS'
    };

    let params = {
        Message: JSON.stringify(message),
        MessageStructure: 'json',
        Subject: req.body.subject,
        TopicArn: 'arn:aws:sns:us-east-1:867901910243:myStackAbuseTopic'
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(data);
        res.json(data);
    }).catch(err => {
        console.error(err, err.stack);
        res.status(500).json(err);
    });
});
If you want to send messages to multiple endpoint types, you’ll include MessageStructure in your
parameter list. It accepts a single type - json. If it’s present, the Message parameter accepts a valid,
stringified JSON object.
Here, we’ve defined a message JSON object that has the default value of SNS Notification. Other
than that, it has email and sms options - depending on the endpoint type, the values from these
will be used.
For email subscribers, the message is Hello from SNS on email, while SMS subscribers get a Hello
from SNS on SMS. When we rerun the app and hit the endpoint with a POST request:
{
    ResponseMetadata: { RequestId: '39d2fcd3-ce43-50cc-85ad-a4c9b55a7336' },
    MessageId: 'ce3f9248-f593-5bf4-b4a7-ca528e8a980e'
}
And we’ve received different messages, for the same push notification on our topic:
Lambda Endpoint
In a similar vein, you could use these messages to trigger and hand inputs to Lambda functions. This
might kick off a processing job, for example. AWS Lambda is covered in a later chapter.
SQS Endpoint
With the SQS endpoint type, you could put messages into queues to build out event-driven
architectures - AWS SQS is covered in the next chapter.
7. AWS Simple Queue Service (SQS)
With the increased complexity of modern software systems came the need to break up systems
that had outgrown their initial size. This increase in complexity made systems harder to
maintain, update, and upgrade.
This paved the way for microservices that allowed massive monolithic systems to be broken down
into smaller services that are loosely coupled but interact to deliver the total functionality of the
initial monolithic solution.
It is in these microservice architectures that queueing systems come in handy to facilitate the
communication between the separate services that make up the entire setup.
In this chapter, we will dive into AWS Simple Queue Service (SQS) and demonstrate how we can
leverage its features in a microservice environment.
Imagine that we are a small organization that doesn’t have the bandwidth to handle orders as they
come in. We have one service to receive users’ orders and another that will deliver all orders posted
that day to our email inbox at a certain time of the day for batch processing.
All orders will be stored in the queue until they are collected by our second service and delivered to
our email inbox.
Our microservices will consist of simple Node.js APIs - one that receives the order information from
users and another that sends confirmation emails to the users.
Sending email confirmations asynchronously through the messaging queue will allow our orders
service to continue receiving orders despite the load since it does not have to worry about sending
the emails.
Also, in case the mail service goes down, once brought back up, it will continue dispatching emails
from the queue, therefore, we won’t have to worry about lost orders.
After installing the AWS CLI¹⁴, we’ll configure it with our credentials:

$ aws configure
We will get a prompt to fill in our Access Key ID, Secret Access Key, and default region and
output format. The last two are optional, but we will need the access key and secret.
Note: You can perform this step via the credentials/config files through an IAM user, or by simply
giving access to AWS like in the last two chapters. This is just another option.
With our AWS account up and running, and the AWS CLI configured, we can set up our AWS Simple
Queue Service by navigating to the SQS home page:
¹⁴https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html
Here, we’ve specified the queue name - nodeshop.fifo - followed by the queue type. We’ll want
to use a FIFO queue for this purpose, since we want the orders to be processed in the same order
they came in. In the case of a Standard Queue, it’ll try its best to maintain the order, but it’s not
guaranteed.
Here’s a visual representation of the difference between these two queues:
The Standard Queue is better suited for projects that prioritize throughput over the order of events.
A FIFO Queue is better suited for projects that prioritize the order of events.
Once we have chosen the kind of queue we require, let’s specify some options for our queue, such as
the message retention period (how long the messages are kept in the queue before being deleted), the
visibility timeout (time in which the event accessed by a consumer is invisible to other consumers)
or the maximum size:
We’ll leave these options at their defaults, since they’re well-suited for most cases. Other than that, we can
set the access policy, encryption, tags and the dead-letter queue.
The first three are optional and common across AWS services. The dead-letter queue
(DLQ) is a queue that contains the messages from the main queue that couldn’t be consumed by a
consumer. Essentially, it can be used to identify which messages from the queue are problematic,
isolate them and modify the code if need be.
By default, this option is Disabled, though you can activate it with the click of a button.
When you create the queue, make note of the queue’s URL:
With the queue created, let’s set up our Node project:

$ mkdir nodeshop_apis
$ cd nodeshop_apis
$ mkdir orderssvc emailssvc
$ npm init -y
We’ve created our project directory, inside of which we have a directory for our Orders Service and
Emails Service, and initialized a Node project with the default settings.
We will build the Orders service first since it is the one that receives the orders from the users and
posts the information onto the queue. Our Emails service will then read from the queue and dispatch
the emails.
Same as last time, we’ll use Express to bootstrap an application. We will also install the body-parser
middleware to handle and validate request data:
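Both come from npm:

$ npm install express body-parser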
Since we will have multiple services that will be running at the same time, we will also install
the npm-run-all package to help us start up all our services at the same time and not have to run
commands in multiple terminal windows:
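It’s installed the same way:

$ npm install npm-run-all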
With npm-run-all installed, let us now tweak the scripts entry in our package.json file to include
the commands to start our services and one command to run them all:
{
    // Truncated for brevity...
    "scripts": {
        "start-orders-svc": "node ./orderssvc/index.js 8081",
        "start-emails-svc": "node ./emailssvc/index.js",
        "start": "npm-run-all -p -r start-orders-svc"
    },
    // ...
}
We will add the start-orders-svc and start-emails-svc commands to run our Orders and Emails
services respectively. We will then configure the start command to execute them both using
npm-run-all. Currently, the start command only runs the start-orders-svc script, since that’s
the only one that’ll exist at the time we run it.
The start-emails-svc will be added to this call as soon as it’s created. It’s also worth noting that
we’ve added a command-line argument for the port on the order service script.
With this setup, running all our services will be as easy as executing the following command:
$ npm start

Now, let’s create the index.js file for our Orders service, inside the orderssvc directory:

$ touch index.js
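A minimal sketch of the Orders service boilerplate, matching the port argument passed in from the npm script:

// ./orderssvc/index.js
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
// The port is passed as a command-line argument in package.json (8081)
const port = process.argv[2];

app.use(bodyParser.json());

app.get('/index', (req, res) => {
    res.send('Welcome to the Orders service!');
});

app.listen(port, () => {
    console.log(`Orders service listening on port ${port}`);
});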
Here, we get the port number from the scripts inside of package.json, and run a standard Express
app. The /index endpoint will respond by simply sending a welcome message.
We’ll start the app by running the npm start command and interact with our APIs using Postman:
$ curl localhost:8081/index
We will implement the Emails service later on. For now, our Orders service is set up and we can
now implement our business logic.
Orders Service
The Orders service will receive orders via a route and controller that handles the input. It’ll process
the input and write it to the SQS queue, where the order will be stored until it’s called upon to be
processed at a later date.
Before implementing the controller, let’s install the AWS SDK for this service:
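As before:

$ npm install aws-sdk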
// ./orderssvc/index.js

// Code removed for brevity...

// Import the AWS SDK
const AWS = require('aws-sdk');

// Configure the region
AWS.config.update({region: 'us-east-1'});

// Create an SQS service object
const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
const queueUrl = 'SQS_QUEUE_URL';
We’ll be adding a new endpoint after this. Our new /order endpoint will receive a payload that
contains the order data and send it to our SQS queue using the AWS SDK:
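A sketch of that endpoint, building the payload object described below - the MessageGroupId value is an assumption borrowed from the FIFO setup shown in the next chapter:

app.post('/order', (req, res) => {
    const orderData = req.body;

    const sqsOrderData = {
        MessageAttributes: {
            'userEmail': {
                DataType: 'String',
                StringValue: orderData.userEmail
            },
            'itemName': {
                DataType: 'String',
                StringValue: orderData.itemName
            },
            'itemPrice': {
                DataType: 'Number',
                StringValue: orderData.itemPrice
            },
            'itemsQuantity': {
                DataType: 'Number',
                StringValue: orderData.itemsQuantity
            }
        },
        MessageBody: JSON.stringify(orderData),
        MessageDeduplicationId: orderData.userEmail,
        MessageGroupId: 'UserOrders',
        QueueUrl: queueUrl
    };

    // Sending the message to the queue is covered in the next snippet
});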
The AWS SDK requires us to build a payload object specifying the data we are sending to the queue;
in our case, we define it as sqsOrderData. AWS’ MessageAttributes contains information about
the message’s attributes, in JSON format.
Each attribute consists of a Name, Type and Value. For example, userEmail is of the String type and
holds the orderData.userEmail value.
Once all message attributes are defined, we stringify the JSON contents and store them as the MessageBody.
This is the body of the message we send off to SQS. Additionally, we set the MessageDeduplicationId,
which is the token used for deduplication. If a message is sent successfully, the MessageDeduplicationId
is saved. For the next five minutes, you can’t send another message with that same MessageDeduplicationId.
Here, we’ve set the id as the user’s email.
Once we’ve added all the message attributes we need for our application to process the order, we
can go ahead and send the message to the queue:
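Continuing inside the /order handler sketched above - the success log and confirmation text mirror the output shown later in the chapter, while the error response is an assumption:

    sqs.sendMessage(sqsOrderData, (err, data) => {
        if (err) {
            console.log(`OrdersSvc | ERROR: ${err}`);
            return res.status(500).send('We ran into an error. Please try again.');
        }
        console.log(`OrdersSvc | SUCCESS: ${data.MessageId}`);
        res.send('Thank you for your order. Check your inbox for the confirmation email.');
    });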
The sendMessage() function will send our message to the queue using the credentials we used to
configure the AWS CLI (or another configuration approach). Finally, we wait for the response and
notify the user that their order has been received successfully and that they should check for the
email confirmation.
To test the Orders service, we run npm start and send the following payload to localhost:8081/order:
{
    "itemName": "Phone case",
    "itemPrice": "10",
    "userEmail": "[email protected]",
    "itemsQuantity": "2"
}
Thank you for your order. Check your inbox for the confirmation email.
It looks like our order has been processed successfully. Let’s take a look at the SQS dashboard:
Here, you can see the information we’ve provided regarding the order, as well as some additional
data about the message itself. Our Orders service has been able to receive a user’s order and
successfully send the data to our SQS queue.
Email Service
Our Orders service is ready and already receiving orders from users. The Emails service is
responsible for reading the messages stored in the queue and dispatching confirmation emails to the users.
This service is not notified when orders are placed and therefore has to keep checking the queue for
any new orders.
To ensure that our Emails service is continually checking for new orders we will use the sqs-consumer
library that will continually and periodically check for new orders and dispatch the emails to the
users. sqs-consumer will also delete the messages from the queue once it has successfully read them
from the queue.
Note: If a consumer doesn’t consume a message from SQS within the “holding period”, or rather, the
message retention time, it’ll be deleted.
We’ll start by installing the sqs-consumer library:
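Again, via npm:

$ npm install sqs-consumer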
To send emails, we’ll use the nodemailer library, which also has to be installed:
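Also via npm:

$ npm install nodemailer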
Then, let’s create a new index.js file for this service in the emailssvc directory:
$ touch index.js
And within this file, let’s first set up the AWS SDK and require the sqs-consumer:
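A sketch of those first few lines - the region and the 'SQS_QUEUE_URL' placeholder are the same assumptions as in the Orders service, and the destructured Consumer import matches sqs-consumer v5:

// ./emailssvc/index.js
const AWS = require('aws-sdk');
const { Consumer } = require('sqs-consumer');
const nodemailer = require('nodemailer');

// Configure the region
AWS.config.update({ region: 'us-east-1' });

// The URL of the queue we created earlier
const queueUrl = 'SQS_QUEUE_URL';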
You can use any email provider for your service. For example, we’ll be using Gmail. You’ll need to
provide the email address and password of the account you’re sending from in the auth object:
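With Gmail, a transport sketch could look like this - the credentials are placeholders:

const transport = nodemailer.createTransport({
    service: 'gmail',
    auth: {
        user: '<YOUR_EMAIL_ADDRESS>',
        pass: '<YOUR_EMAIL_PASSWORD>'
    }
});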
Depending on your provider, you might have to explicitly allow this app to log in. For example, in
Gmail you must set the option “Allow less secure apps” to “On”, otherwise, the login will be blocked.
With nodemailer ready to go, let’s define a sendMail() function:
function sendMail(message) {
    let sqsMessage = JSON.parse(message.Body);
    const emailMessage = {
        from: 'sender_email_address', // Sender address
        to: sqsMessage.userEmail, // Recipient address
        subject: 'Order Received | NodeShop', // Subject line
        html: `<p>Hi ${sqsMessage.userEmail}.</p> <p>Your order of ${sqsMessage.itemsQuantity} ${sqsMessage.itemName} has been received and is being processed.</p> <p> Thank you for shopping with us! </p>` // HTML body
    };

    return new Promise((resolve, reject) => {
        transport.sendMail(emailMessage, (err, info) => {
            if (err) {
                console.log(`EmailsSvc | ERROR: ${err}`);
                return reject(err);
            } else {
                console.log(`EmailsSvc | INFO: ${info.response}`);
                return resolve(info);
            }
        });
    });
}
The sendMail() function starts off by accepting the message from the queue. It parses the JSON
contents of the message’s Body. This is the same message we saw in the SQS dashboard a bit back.
Then, we’ll construct our own emailMessage for the user.
Here, you can put any email address you’d like to send from, such as [email protected], or any
other address you have access to. We’ll be sending the email to the userEmail, extracted from the
sqsMessage. Finally, we set the subject and html body of the email, which contains information
about the order.
Using nodemailer’s transport, we then send the email to our user/customer. Now that we can use
an SQS message and send an email to the user it’s tied to, let’s use the sqs-consumer to read from
the queue, and invoke this function on each message retrieved from the queue.
We’ll create a Consumer instance, using the queueUrl. It accepts the handleMessage function, which
is called whenever a message is received:
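A sketch of that consumer - handleMessage simply forwards each message to sendMail(), and start() kicks off the polling; the error handlers and the log line are conveniences added here:

const emailConsumer = Consumer.create({
    queueUrl: queueUrl,
    handleMessage: async (message) => {
        // Dispatch a confirmation email for every message read from the queue
        await sendMail(message);
    },
    sqs: new AWS.SQS()
});

emailConsumer.on('error', (err) => {
    console.error(`EmailsSvc | ERROR: ${err.message}`);
});

emailConsumer.on('processing_error', (err) => {
    console.error(`EmailsSvc | ERROR: ${err.message}`);
});

emailConsumer.start();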
We’ve created a new sqs-consumer application by using the Consumer.create() function and
provided the queue URL and the function to handle the messages fetched from the SQS queue. When
a message is received, we’ve decided to call sendMail() on that message.
Here, you can also specify the batchSize, which defines how many messages should be received and
handled in a single batch. By default, each message is handled as it arrives, one at a time. If you set
a batchSize, up to that many messages are received and processed together.
Our Emails service is now ready. To integrate it into our execution script, we will simply modify
the scripts option in our package.json:
{
    // Truncated for brevity...
    "scripts": {
        "start-orders-svc": "node ./orderssvc/index.js 8081",
        "start-emails-svc": "node ./emailssvc/index.js",
        // Update this line
        "start": "npm-run-all -p -r start-orders-svc start-emails-svc"
    },
    // ...
}
$ npm start

And send a new order payload to the localhost:8081/order endpoint:

{
    "itemName": "Phone case",
    "itemPrice": "5",
    "userEmail": "[email protected]",
    "itemsQuantity": "3"
}
Followed by:
This means that our Emails service picked up the order from the queue, sent out the email and got
a 2.0.0 OK response from the email service provider.
If we check the inbox of the user we’ve sent in the payload, we’ll see the confirmation email:
8. Pairing SNS and SQS Together
In this chapter, we’ll reflect on the past two chapters and build an application that pairs together
AWS SNS and AWS SQS to process orders from an online shop.
Demo Application
For the demo project, we will enhance the Node Shop project that we built in the previous chapter
and instead use SNS to send the notification emails to users.
In the existing project, we set up an SQS queue that received an order’s details via the Orders service,
and sent out emails via the Emails service, which picked up the orders from the queue we set up
on SQS. In that case, we had to implement our own email messaging functionality to notify users
that we received their orders. This can be replaced with SNS, as it naturally pairs with SQS and
replaces the need for us to implement our own email handling system.
An added advantage is that we can allow our users to use their phone numbers instead of emails
when placing orders and we won’t have to implement a separate service to handle SMS delivery. Also,
depending on which type of application you’re running, you might prefer using phone numbers as
a sort of identification.
Also, remember from the SNS chapter - you don’t have to subscribe a phone number in advance,
like emails, and you can target individual phone numbers to send messages to.
Implementation
We will start by setting up a NodeShopTopic on SNS, just as we set up the topic in the SNS chapter:
Our previous project had an Orders service that received the details of a user’s order and an Emails
service that picked these details off the queue and dispatched the emails to the users.
Since we’ll be working with phone numbers, instead of emails, let’s update the /order endpoint.
We’ll want to extract the userPhone from the request, and pack it into a variable, amongst other
order data:
let sqsOrderData = {
    MessageAttributes: {
        // `userPhone` as a MessageAttribute
        'userPhone': {
            DataType: 'String',
            StringValue: orderData.userPhone
        },
        'itemName': {
            DataType: 'String',
            StringValue: orderData.itemName
        },
        'itemPrice': {
            DataType: 'Number',
            StringValue: orderData.itemPrice
        },
        'itemsQuantity': {
            DataType: 'Number',
            StringValue: orderData.itemsQuantity
        }
    },
    MessageBody: JSON.stringify(orderData),
    // Changed from `userEmail` to `userPhone`
    MessageDeduplicationId: req.body['userPhone'],
    MessageGroupId: 'UserOrders',
    QueueUrl: queueUrl
};
Finally, we’ll call the sendMessage() function on the sqs object, with this sqsOrderData:
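Continuing inside the /order handler, much like in the previous chapter - the error and confirmation strings here are assumptions:

    sqs.sendMessage(sqsOrderData, (err, data) => {
        if (err) {
            console.log(`OrdersSvc | ERROR: ${err}`);
            return res.status(500).send('We ran into an error. Please try again.');
        }
        console.log(`OrdersSvc | SUCCESS: ${data.MessageId}`);
        res.send('Thank you for your order. You will receive a confirmation SMS shortly.');
    });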
Much like in the previous chapter, we’ve used the phone number to identify the messages in the
queue. We’ll also use this same phone number to send a confirmation SMS message using SNS.
To this end, we’ll have an SMS service which accepts a request, and simply passes it on to the SNS
topic we’ve recently created. It will contain an sqs-consumer which is used to access the queue from
SQS. Once a new order is received and passed along, the SMS service quickly publishes it to the
NodeShopTopic.
If you’re working in a new project, you’ll have to install the dependencies from the previous chapter
again. In any case, we’ll create a new file to host our code for the SMS service. Let’s start off with
importing the required packages:
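A sketch of those imports and the shared setup - the region and queue URL placeholder are assumptions, as before:

// ./smssvc/index.js
const AWS = require('aws-sdk');
const { Consumer } = require('sqs-consumer');

AWS.config.update({ region: 'us-east-1' });

// The queue we read orders from, and an SNS instance to publish with
const queueUrl = 'SQS_QUEUE_URL';
const sns = new AWS.SNS({ apiVersion: '2010-03-31' });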
Then, we’ll define the main function of this service - the sendSMS() function:
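Here’s one possible sketch. Since the chapter stresses that SMS lets us target a single individual, this version publishes directly to the customer’s phone number via the PhoneNumber parameter; publishing to the NodeShopTopic ARN instead would notify every subscriber of the topic:

function sendSMS(message) {
    const order = JSON.parse(message.Body);

    // Construct the update message for the user
    const updateMessage = `Your order of ${order.itemsQuantity} ${order.itemName} has been received and is being processed.`;

    const params = {
        Message: updateMessage,
        // Target the individual customer's number directly
        PhoneNumber: order.userPhone
    };

    return sns.publish(params).promise()
        .then(data => {
            console.log(`Message sent: ${data.MessageId}`);
        })
        .catch(err => {
            console.error(err, err.stack);
        });
}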
It accepts a message from SQS and parses its JSON contents. We construct an update message
for the user and set up a couple of parameters that we then feed into the sns instance, publishing the
message.
Now, this just handles what happens when the SQS message arrives. To receive a message from SQS,
we’ll use the sqs-consumer instance to consume information from the queue. This message can then
be passed on to our sendSMS() function:
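A sketch of the consumer, reading in batches of up to 10 messages and handing each one to sendSMS():

const smsConsumer = Consumer.create({
    queueUrl: queueUrl,
    batchSize: 10,
    handleMessage: async (message) => {
        await sendSMS(message);
    },
    sqs: new AWS.SQS()
});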
This is done in batches of 10. The only thing left is to add a couple of event handlers if any exceptions
arise, and start the application up:
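For instance:

smsConsumer.on('error', (err) => {
    console.error(`SMSSvc | ERROR: ${err.message}`);
});

smsConsumer.on('processing_error', (err) => {
    console.error(`SMSSvc | ERROR: ${err.message}`);
});

smsConsumer.start();
console.log('SMS service is running');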
If we had used email addresses instead, we would first have to add the emails as subscribers to the
SNS topic. All subscribers get the same message, once a topic publishes it. This means that on each
purchase, all previous customers would get the order information as well.
Using SMS, we can target a single individual. Let’s update our scripts from the package.json file to
include this new SMS service:
1 "scripts": {
2 "start-orders-svc": "node ./orderssvc/index.js 8081",
3 "start-sms-svc": "node ./smssvc/index.js",
4 "start": "npm-run-all -p -r start-orders-svc start-sms-svc"
5 }
$ npm start

And place an order, this time with a phone number in the payload:

{
    "itemName": "Phone cases",
    "itemPrice": "10",
    "userPhone": "+2547...",
    "itemsQuantity": "2"
}
The phone number requires us to put a + sign and the country code in the payload. Let’s take a look
at the terminal to check how things have been going:
$ npm start

> [email protected] start
> npm-run-all -p -r start-orders-svc start-sms-svc

> [email protected] start-orders-svc
> node ./orderssvc/index.js 8081

> [email protected] start-sms-svc
> node ./smssvc/index.js

Orders service listening on port 8081
SMS service is running

# Order has been submitted successfully
OrdersSvc | SUCCESS: e77434ea-6452-4078-93e2-a0d7eed18631
# Message has been sent and accepted by SNS
Message sent: 4a278bba-dda3-57a9-9f2e-d3a1a2ea6585
Here, both services are run at the start, due to the updated scripts in the package.json file. Once
the order has been placed, in our case, via Postman - it’s been sent off to the SQS queue. Our SMS
service reads from this queue, and sends a message to the SNS topic, informing the user of their
purchase via SMS:
9. Database Support
Nowadays, data is enjoying more value than it ever did in the past. This trend will likely continue
into the future. In fact, “enjoying more value” is an understatement.
Needless to say, AWS offers a fair bit of database support. They offer relational, key-value, in-memory, document, wide column, graph, time series and ledger databases. Each of these has its own use-cases and applications.
That said, relational databases are still the most prevalent type in the world. They're used for traditional applications, enterprise resource planning (ERP), customer relationship management (CRM), e-commerce applications, etc.
For your relational database needs, Amazon offers:
• Amazon Aurora
• Amazon RDS
• Amazon Redshift
Amazon Aurora is a database engine compatible with MySQL and PostgreSQL. Amazon RDS supports a wider variety - MySQL, MariaDB, PostgreSQL, Oracle and Microsoft SQL Server - though if you're using RDS with Aurora, you'll be limited to MySQL and PostgreSQL.
Aurora and RDS are optimized to work together, and this is a really common combination.
The benefits of using a cloud database service over hosting a database on your own server are largely the same as for other cloud services, such as file hosting.
It’s easier to scale cloud databases by simply adding more nodes and thus scaling either vertically or
horizontally. Storing more data boils down to paying more for the new data you introduce, without
a step-like investment plan. You pay as you go.
Security is another huge benefit. When you host the data yourself, you have to take care of that data and implement the security measures. This means that instead of implementing the functionality of your application, you spend time finding ways to protect it from potential attackers and threats:
Any transfer of data between different services is prone to attacks. Cross-Site Scripting and Man-in-the-Middle attacks can originate from the end user, and SQL injections can occur while your application is exchanging information with the database. Without proper processing, you can give your users unwanted access or privilege abuse over your database.
And a really common attack is the classic Denial of Service attack, which doesn't even require a flaw in your security protocols - it's enough to simply overload the capabilities of the database, or the entire server.
With a cloud database, you leave these issues to your service provider to deal with. Any reputable service provider will have already handled the vast majority of security issues and concerns, so that you don't have to.
You can rest assured that the data is secure and protected, as huge players such as AWS can't afford not to have world-class security for their storage:
Of course, this doesn't mean that you can take a completely hands-off approach to security. User-originating attacks can still make their way into certain parts of your application if you're not careful. What it does mean is that you won't have to deal with most of these attacks yourself, at the points where they can cause the most damage.
10. AWS Relational Database Service
(RDS)
It’s not an overstatement to say that information and data runs the world. Almost any application,
from social media and e-commerce websites to simple time trackers and drawing apps, relies on the
very basic and fundamental task of storing and retrieving data in order to run as expected.
Amazon’s Relational Database Service¹⁵ (RDS) provides an easy way to get a database set up in
the cloud using any of a wide range of relational database technologies. It also introduces easy
administration through their management console and fast performance.
In this section, we’re going to set up a database on RDS, and store data on it with a Node application.
In the AWS console, head over to the RDS dashboard:
[Image: The RDS dashboard]
On the menu on the left, select 'Databases'. This would normally display a list of RDS instance clusters that we've created, but we don't have any yet.
To create one, click the orange ‘Create database’ button. You should be presented with a page that
looks like:
¹⁵https://aws.amazon.com/rds/
[Image: The 'Create database' page]
AWS has recently introduced an ‘Easy create’ method for creating new RDS instances, so let’s use
that.
Under ‘Engine type’ we’ll use ‘Amazon Aurora’, which is Amazon’s own database engine optimized
for RDS. For the edition, we’ll leave this set to ‘Amazon Aurora with MySQL 5.6 compatibility’.
Under ‘DB instance size’ select the ‘Dev/Test’ option - this is a less powerful (and cheaper) instance
type, but is still more than enough for what we need it for.
The ‘DB cluster identifier’ is the name of the database cluster that we’re creating. Let’s call ours
my-node-database for now.
For the master username, leave it as admin. Finally, we have the option to have a master password
generated automatically. For the ease of this tutorial, let’s set our own.
Make sure it’s secure as this is the master username and password!
Finally, scroll down and click ‘Create database’. It takes a few minutes to fully provision an RDS
instance:
[Image: The RDS instance being provisioned]
Before getting started on our Node application, we need to make sure we can connect to the instance.
Select the instance you just created (it’ll be the option that ends in instance-1) and take a note of
the value under ‘Endpoint’:
On the right-hand side, under ‘VPC security groups’, click the link - this will take you to the security
group that’s been set up for the database. Security groups are essentially firewall rules as to who is
and isn’t allowed to make connections to a resource.
Currently, this one is set to only allow connections from resources that have the same security group.
Selecting the ‘Actions’ drop down at the top, navigate to ‘Edit inbound rules’. In this dialog, click
‘Add rule’. For the new rule, under ‘Type’, select ‘All traffic’. Under ‘Source’, select ‘Anywhere’.
You should end up with something that looks like this:
The new 'All traffic' rule allows connections from anywhere (0.0.0.0/0 meaning any IP address).
This dashboard is actually the EC2 dashboard, as RDS is built on top of EC2. You’re basically editing
the inbound rules for the entire instance, not just RDS. This isn’t too relevant for now, and is covered
in the chapter on EC2.
NOTE: Even though you’ve set the inbound and outbound rules, the RDS instance most
likely still isn’t public.
To make it public, navigate to the Connectivity and Security panel, and check if the Public Accessibility is turned on:
If not, select Modify at the top, and under Connectivity -> Additional connectivity configuration,
turn it on:
Your RDS instance should now be ready to go! Let’s write some code to interact with it.
Demo Application
In order to interact with our database, we're going to create an API that allows us to store user
profiles via Express. Before we do that, we need to create a table inside our RDS instance to store
data in.
Let’s create a folder, move into it and initialize a blank Node.js application with the default
configuration:
1 $ mkdir node-rds
2 $ cd node-rds
3 $ npm init -y
And finally, we want to create two JavaScript files - one of them will be our Express app, the other
will be a single-use script to create a table in our database:
1 $ touch index.js
2 $ touch dbseed.js
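The connection snippet itself isn't reproduced here; a minimal sketch of dbseed.js, assuming the mysql package (npm install mysql express --save), would be:

const mysql = require('mysql');

const con = mysql.createConnection({
    host: '<DB_ENDPOINT>',
    port: 3306,
    user: 'admin',
    password: '<DB_PASSWORD>'
});

con.connect(function(err) {
    if (err) throw err;
    console.log('Connected!');
    con.end();
});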
Make sure to swap out <DB_ENDPOINT> for the endpoint that we noted down earlier, and fill in the
password. What this piece of code will do is attempt to connect to the database - if it succeeds, it’ll
run an anonymous function that logs ‘Connected!’, and then immediately close the connection.
We can quickly check to see if it’s properly set up by running:
1 $ node dbseed.js
1 Connected!
If the message wasn’t returned, there’s likely an issue with the security settings - go back to the RDS
set-up and make sure you’ve done everything correctly.
Now that we know that we can definitely connect to our database, we’ll want to create a table. Let’s
modify our anonymous function:
con.connect(function(err) {
    if (err) throw err;

    // Create the database and table if they don't exist yet
    con.query('CREATE DATABASE IF NOT EXISTS main;');
    con.query('USE main;');
    con.query('CREATE TABLE IF NOT EXISTS users(id int NOT NULL AUTO_INCREMENT, username varchar(30), email varchar(255), age int, PRIMARY KEY(id));', function(error, result, fields) {
        console.log(result);
    });
    con.end();
});

Running node dbseed.js again should now print the result of the CREATE TABLE query:
1 OkPacket {
2 fieldCount: 0,
3 affectedRows: 0,
4 insertId: 0,
5 serverStatus: 2,
6 warningCount: 0,
7 message: '',
8 protocol41: true,
9 changedRows: 0
10 }
Now that we’ve got a table to work with, let’s set up the Express app to insert and retrieve data from
our database.
Create/Insert Endpoint
Let’s set up the boilerplate code for our Express app and define a POST request handler that we’ll use
to create users, based on the information from the request:
const express = require('express');
const mysql = require('mysql');

const app = express();
const port = 3000;

const con = mysql.createConnection({
    host: '<DB_ENDPOINT>',
    port: 3306,
    user: 'admin',
    password: '<DB_PASSWORD>'
});

app.post('/users', (req, res) => {
    if (req.query.username && req.query.email && req.query.age) {
        console.log('Request received');
        con.connect(function(err) {
            con.query(`INSERT INTO main.users (username, email, age) VALUES ('${req.query.username}', '${req.query.email}', '${req.query.age}')`, function(err, result, fields) {
                if (err) res.send(err);
                if (result) res.send({username: req.query.username, email: req.query.email, age: req.query.age});
                if (fields) console.log(fields);
            });
        });
    } else {
        console.log('Missing a parameter');
    }
});

app.listen(port, () => console.log(`RDS App listening on port ${port}!`));
Here, we set up the Express app to run on port 3000, and have a single request handler. It parses
the request, and extracts information such as the username, email and age from it. We then use the
MySQL connection to create a query and insert this info into the database.
If all is well, the results are sent back in the response for validation purposes.
Let’s check if this works. Start up the application:
1 $ node index.js
And then, let’s fire a POST request to our server, creating a user, using Postman:
[Image: POST request to the application in Postman]
1 Request received
1 {
2 "username": "testing",
3 "email": "[email protected]",
4 "age": "25"
5 }
Since we’ve received the same data back, our function is working fine. With the functionality of
adding users complete, let’s go ahead and retrieve users from the database.
Let’s devise a simple GET endpoint for a more user-friendly way to check for results. Below the POST
request handler, let’s make another handler:
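A sketch of that handler, reusing the same con connection as the POST handler:

app.get('/users', (req, res) => {
    con.query('SELECT * FROM main.users;', function(err, result, fields) {
        if (err) res.send(err);
        if (result) res.send(result);
    });
});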
In your browser, navigate to localhost:3000/users and you should be presented with all the
inserted users:
Alternatively, you can send a GET request to localhost:3000/users and you’ll be able to see this
output:
1 [
2 {
3 "id": 1,
4 "username": "testing",
5 "email": "[email protected]",
6 "age": 25
7 }
8 ]
We can narrow this down to only get a specific user if we have a field that we know is unique within
the database. For example, if our username was our unique key, we could do this:
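A sketch, following the same (unparameterized) query style as the POST handler above - in a real application you'd use ? placeholders here as well:

app.get('/users/:username', (req, res) => {
    con.query(`SELECT * FROM main.users WHERE username = '${req.params.username}';`, function(err, result, fields) {
        if (err) res.send(err);
        if (result) res.send(result);
    });
});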
The :username part of the above code is called a parameter, and acts like a variable that we can throw
in, allowing us to retrieve whatever the user entered there. This allows us to make a GET request to
localhost:3000/users/testing, which would just return records with that username:
1 [
2 {
3 "id": 1,
4 "username": "testing",
5 "email": "[email protected]",
6 "age": 25
7 }
8 ]
The output is the same as getting all users, since there's only one user in the database. You'll also typically have a unique id for each user - in our users table, the id column is set to AUTO_INCREMENT, so each new entry automatically gets the next available id.
You can search for an entry by id using the exact same approach as for the username:
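The handler isn't shown here; a minimal sketch, assuming the id-based route is registered instead of (or in place of) the :username route above, since Express can't tell the two apart on the same path:

app.get('/users/:id', (req, res) => {
    con.query(`SELECT * FROM main.users WHERE id = ${req.params.id};`, function(err, result, fields) {
        if (err) res.send(err);
        if (result) res.send(result);
    });
});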
Sending a GET request to localhost:3000/users/1 or by visiting that URL in a browser, we’re greeted
with the all-too familiar user:
1 [
2 {
3 "id": 1,
4 "username": "testing",
5 "email": "[email protected]",
6 "age": 25
7 }
8 ]
Finally, we might only want to retrieve a specific bit of information about a user - we could make
an endpoint that deals with this as follows:
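One way to do this - the exact route is an assumption - is to hard-code the field into both the path and the query:

app.get('/users/:username/age', (req, res) => {
    con.query(`SELECT age FROM main.users WHERE username = '${req.params.username}';`, function(err, result, fields) {
        if (err) res.send(err);
        if (result) res.send(result);
    });
});

A GET request to localhost:3000/users/testing/age would then return: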
1 [
2 {
3 "age": 25
4 }
5 ]
Here, we’re still taking in the parameter, but we’re adding another part to our endpoint to specify
the data we want to collect (and then coding that into our SQL query).
What if we want to update that single user’s details? Let’s say our user wanted to change their email
- we’ll need another endpoint to perform that update. Let’s create a new endpoint, which accepts a
:username path variable, and extracts the new email from the request we send to it:
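A sketch of the update endpoint, this time using ? placeholders (which the next paragraph refers to); the exact response shape is an assumption:

app.put('/users/:username', (req, res) => {
    if (req.query.email) {
        con.query('UPDATE main.users SET email = ? WHERE username = ?;',
            [req.query.email, req.params.username],
            function(err, result, fields) {
                if (err) res.send(err);
                if (result) res.send({ username: req.params.username, email: req.query.email });
            });
    } else {
        console.log('Missing a parameter');
    }
});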
You’ll notice our second parameter in con.query is now an array - this array is being read in order
into each of the ? placeholders in the query itself - make sure you get the order right, otherwise
you’ll end up with weird results, at best.
Using the parameter from before, we can make a PUT request to the same user, but with a different query. Let's make a PUT request using Postman, passing the username in the path and the new email along with the request:
Ideally, we don’t want to change too many fields at the same time, as this can lead to convoluted
queries and can increase the chances of making a mistake. When in doubt, refer to the S in SOLID
principles - single responsibility.
Finally, our user might decide that they no longer want to be part of our app. As sad as this is, we’ll
need a way to delete any information about them. We can do this with the following snippet:
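A sketch of the delete endpoint, again keyed on the username; the response message is an assumption:

app.delete('/users/:username', (req, res) => {
    con.query('DELETE FROM main.users WHERE username = ?;',
        [req.params.username],
        function(err, result, fields) {
            if (err) res.send(err);
            if (result) res.send(`Deleted user: ${req.params.username}`);
        });
});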
Again, re-run the app and return to Postman, this time making the method for the request DELETE
to localhost:3000/users/testing:
You can check if this worked by sending a GET request to localhost:3000/users, which will return
an empty array if you haven’t added any other users in earlier stages:
Note: If you’re not planning on continuing to use your RDS instance at this moment,
make sure to terminate it! Otherwise, you’ll rack up a hefty bill within a month.
11. Cloud Computing
Cloud Computing has taken the world by storm. It presented developers and businesses with the possibility of offloading certain resource-heavy operations onto exceptionally powerful servers, designed and built just for that purpose.
Storage and computing power were, traditionally, really expensive. The equipment itself is expensive, even without the workforce that's required to maintain it. This led to enterprises investing obscene amounts of money into equipment and workforce, while small businesses were left without many options.
This was a huge hindrance for smaller teams that didn't have the financial stability to invest, and it was also a hassle for those who could invest, since server rooms are big and require proper ventilation and maintenance.
Naturally, everyone was looking into ways they could delegate this, and those with high computing power and huge storage started offering their power to others - for a price.
In 2006, Amazon accelerated the advent of Cloud Computing by introducing the world to the Elastic Compute Cloud (EC2).
There are three main Cloud Computing models - Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS).
IaaS is the most abstract to developers, but it’s also the most “physical” model. It takes care of
the underlying infrastructure typically done by network administrators. PaaS builds upon IaaS and
is familiar to developers. These are typically frameworks and engines used to build applications.
Finally, SaaS is the “end-stage”, familiar to end-users. These are deployed and working applications
on the web.
The services Amazon offers fall into the realm of IaaS and PaaS. Both of these are required in
order to build a SaaS - a service that someone will present to a user. With this, they’ve positioned
themselves as an advanced platform, aimed at developers, that allows them to build powerful, fast,
secure software.
Infrastructure is the foundation of any system. The way your network works, how the computers within it communicate, the operating systems on them and storage - all of this is fundamental.
Again, maintaining these takes time and resources, and this work can be offloaded onto other parties, which are glad to provide their computing power for these purposes - for a price.
Many IaaS providers are currently competing on the market:
• AWS
• DigitalOcean
• Microsoft Azure
• Google Compute Engine
• IBM SmartCloud Enterprise
• Apache CloudStack
These are delivered as on-demand resources for running networks and the underlying infrastructure. Typically, these providers will take care of application runtime, operating systems, data storage/servers and virtualization.
This means that you don’t need the hardware or workforce for any of this. You pay as you go and
use the resources, saving you time, money and many headaches.
Platforms as a Service (PaaS) are familiar to developers. They’re typically provided as frameworks
used to build software upon. These typically include database and server support, operating systems
and language execution environments.
The players competing in this space are pretty similar to the players in the IaaS space:
• AWS
• Microsoft Azure
• Google Cloud App Engine
• IBM App Connect
• Oracle Cloud
• Heroku
• Apache Stratos
Using a PaaS instead of making things from scratch will cut coding time, make cross-platform
support easier and help manage the development cycle of an application.
Finally, Software as a Service (SaaS), is what the end-user sees. These are finished, deployed/hosted
applications available on the web.
These are things such as Google Apps, Dropbox, MailChimp, Slack, HubSpot, etc. Some of these
services also offer an API that you can send requests to and connect to your custom-built application.
When working with an on-demand SaaS service, the third party you're paying is managing everything - the infrastructure, the platform and the software itself.
Imagine GitHub. You’re using their software solution to host code, using a popular tool - Git. If
you have a pro account, you also pay a monthly membership for that service. They manage the
infrastructure, the platforms used to build the software and all maintenance.
12. AWS Elastic Compute Cloud (EC2)
The AWS Elastic Compute Cloud (EC2) is an IaaS type-offering from Amazon. It provides scalable
computing power, which you can really use in any way you’d like. Typically, people set up virtual
servers and storage and deploy applications to EC2 for quick, easy and cheap app provisioning.
EC2 is a core part of AWS, and a lot of AWS’ other services are built on top of it.
It works by providing computing environments, known as instances. These instances run Amazon
Machine Images (AMIs), and with them, you’ve got most of the things required to run a web
application preconfigured.
In this chapter, we’re going to create a Node.js app with Docker, start and configure an EC2 instance,
and deploy our app to it. By the end of the chapter, you’ll have your Node app running on AWS, and
a better understanding of how to interact with a core AWS service.
Demo Application
Let’s make a simple Node application that responds to a request. Let’s make a directory for the app,
move into it and initialize it with the default configurations:
1 $ mkdir node-ec2
2 $ cd node-ec2
3 $ npm init -y
Once the package.json file is created, open it up and add a start script - "start": "node index.js" - to the beginning of the scripts section.
Instead of running node index.js directly, we'll be using npm start, which will run everything in our script.
To serve our requests, we're going to be using Express, as usual - install it with npm install express --save. Your package.json should end up looking something like this:
1 {
2 "name": "app",
3 "version": "1.0.0",
4 "description": "",
5 "main": "index.js",
6 "scripts": {
7 "start": "node index.js"
8 },
9 "author": "",
10 "license": "ISC",
11 "dependencies": {
12 "express": "^4.17.1"
13 }
14 }
1 $ touch index.js
Within it, we’ll set up Express and make a single request handler:
This app will start on port 3000, and will serve an endpoint at /status. We can verify this works by
running:
1 $ npm start
2 Example app listening on port 3000!
1 $ curl 'localhost:3000/status'
This will return whatever we've defined as the response for the /status endpoint.
With our simple Node application ready, let’s turn it into a Docker image which we’ll deploy to EC2.
We’ll first publish this image to Docker Hub, and while setting up an EC2 instance, we’ll read from
this image for the deployment.
Create a new file in the same directory as your Node application, called Dockerfile, in which we'll set up a few instructions:
1 FROM node:13-alpine
2
3 WORKDIR /usr/src/app
4
5 COPY package*.json ./
6
7 RUN npm install
8
9 COPY . .
10
11 EXPOSE 3000
12 CMD [ "npm", "start" ]
This is a basic Dockerfile that can be used for most simple Node applications. Next, let’s build the
Docker image and then run it to verify it’s working correctly:
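For example, assuming you tag the image with your Docker Hub username (a placeholder here):

$ docker build -t <DOCKER_HUB_USERNAME>/node-ec2 .
$ docker run -p 3000:3000 <DOCKER_HUB_USERNAME>/node-ec2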
If you navigate to http://localhost:3000/status again, you should see the exact same output:
Since it’s working, let’s push our Docker image to Docker Hub¹⁶:
Setting up EC2
With our application “dockerized”, we need to set up an EC2 instance for it to run on. Head to AWS
and log in.
Click the ‘Services’ dropdown menu at the top of the page, and search for ‘EC2’. This will lead you
to the EC2 Dashboard:
This is the page with the summary of our current instances. Obviously, there are 0 running so far, with 0 dedicated hosts, key pairs, etc. This view also gives us a peek at the service's health and whether everything is running as it should be, across different zones.
Select the ‘Instances’ link on the left. Here is where we’ll be setting up the aforementioned instance
for our application:
On the next view, click the ‘Launch Instance’ button. You’ll see a page that looks like this:
AMIs
This is where we select the Amazon Machine Image - or AMI for short. An AMI is an ‘out of the
box’ server, and can come with multiple configurations.
For instance, we could select one of the Quick Start AMIs that have Amazon Linux 2¹⁷ on them, or
if you scroll down, there are instances with Ubuntu running on them, etc.
Each AMI is a frozen image of a machine with an operating system and potentially some extra
software installed.
To make things easy, we can look for an EC2 instance with Docker already configured for us!
To do this, we’ll go to the ‘AWS Marketplace’ on the left. Searching for ‘ECS’ should yield us the
‘ECS Optimized Amazon Linux 2 Image’.
This image comes with Docker, and is optimized for running containers. Hit ‘Select’ on the chosen
image and we’ll continue to the next page:
¹⁷https://aws.amazon.com/amazon-linux-2/
Instance Types
On the next view, we select what type of instance we want. Generally, this dictates the resources
available to the server that we’re starting up, with scaling costs for more performant machines:
The t2.micro instance type is eligible for the free tier, so it's recommended to use that when you're just getting started:
Select the appropriate checkbox, and then click ‘Review and Launch’ in the bottom right corner. This
leads you to the “Review” page, where you can take a look at the selected options so far:
If all looks good, click ‘Launch’, and you’ll get a popup to select or create a key-pair.
Like with other services, we’ll use this key pair to connect our application to the service. Select the
first drop-down, and select ‘Create a new key pair’. Under ‘Key pair name’, enter what you’d like
to call it:
Make sure to ‘Download the Key Pair’ on the right hand side as a .pem file. By selecting ‘Launch
Instance’ again, your EC2 instance should get started up:
Security Groups
Before we try running our application, we need to make sure that we’ll be able to access the
instance and the app. Most AWS resources operate under ‘Security Groups’ - these groups dictate
how resources can be accessed, on what port, and from which IP addresses.
In the previous chapter on RDS, while editing the public accessibility, we’ve tweaked the security
groups of that instance. Specifically, RDS is tied to EC2. You might’ve already noticed that while
editing the inbound rules, you were redirected to the EC2 dashboard.
Now, we’ll be editing the inbound rules again, for this EC2 instance. First, go to your instance
dashboard:
Enter your instance, where you can see information about it:
Then, under the “Security” tab, you’ll be able to see the “Security Group”:
Clicking on this group will lead you to the dashboard where you can modify it. Select "Edit Inbound Rules". This is the exact same prompt we had for RDS. This time around, we'll open up port 3000 so that traffic can reach our app:
What this means is that traffic that comes in through port 22, using the TCP protocol, is allowed
from anywhere (0.0.0.0/0 meaning anywhere). We need to add another rule to allow anybody to
access our app at port 3000.
Head back to the ‘Instances’ page (click the link on the left) and select the instance you created
earlier. The address for your EC2 instance is located under the “Public IPv4 DNS” at the top of the
page:
Head back to the terminal, and navigate to the folder where the key-pair you downloaded earlier is
located. It will be named as whatever you entered for the key-pair name, with a .pem as its extension.
Let’s change the key’s permissions and then SSH into the EC2 instance:
Now, we can run commands on the EC2 instance. From here, we just need to launch our app from
Docker Hub. Since the EC2 instance is already configured to have Docker installed and ready, all
we need to do is:
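Run the image straight from Docker Hub - Docker will pull it automatically (the image name is the placeholder from earlier):

$ docker run -p 3000:3000 <DOCKER_HUB_USERNAME>/node-ec2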
You’ll be able to reach the instance using the same address you used to SSH into the instance. Simply
navigate in your browser to:
1 <PUBLIC_DNS>:3000/status
Your app should return the status endpoint to you that we saw earlier. Congratulations, you’ve just
run your first app on EC2!
A quick win - however, the trick is to run the app "headless". As of now, your app is running in your current shell session, and as soon as you close that session, the app will terminate!
To start the app in a way that it’ll keep running in the background, run the app with the additional
-d flag:
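Using the same image as before:

$ docker run -d -p 3000:3000 <DOCKER_HUB_USERNAME>/node-ec2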
Now, you can close the terminal and it’ll continue running.
Security
You might want to go back and tighten up the security on the instance, or experiment with different configurations - such as configuring it so that only you can access the SSH port, for example.
Change the ‘Source’ field on the first rule to ‘My IP’ - AWS will automatically figure out where
you’re accessing it from.
Note: If you’re running through this chapter on the move, or come back to it later, your computer
might have a different IP than when you initially set ‘My IP’. If you encounter any difficulties later
on, make sure to come back here and select ‘My IP’ again.
Other AMIs
There are hundreds of different AMIs, a lot from various communities, with applications already
pre-installed - it’s worth taking a look through to see if there’s an easy way to set up something
you’ve wanted to work with.
Let’s create a project directory, move into it and start a default Node project:
1 $ mkdir ec2-node
2 $ cd ec2-node
3 $ npm init -y
Then, we’ll install the aws-sdk required for interacting with our EC2 instance:
1 $ touch index.js
And within it, let’s import the AWS SDK and initialize an EC2 instance:
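A sketch of that setup - the region is an assumption, and the profile name matches the one described below:

const AWS = require('aws-sdk');

// Load the [ec2_user] profile from the shared credentials file
const credentials = new AWS.SharedIniFileCredentials({ profile: 'ec2_user' });
AWS.config.credentials = credentials;
AWS.config.update({ region: 'us-east-1' });

const ec2 = new AWS.EC2({ apiVersion: '2016-11-15' });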
In the credentials file, we’ve added another user profile - [ec2_user] for which we’ve created a
new IAM user with the AmazonEC2FullAccess policy, just like we did for SNS and SQS.
As usual, we’ll use this ec2 instance to send requests to AWS. Now, let’s define the parameters
required to create an instance.
Thinking back to the things we set up on the dashboard manually, we'll put them in now as well:
1 let instanceParams = {
2 ImageId: 'ami-0669eafef622afea1',
3 InstanceType: 't2.micro',
4 KeyName: 'ec2-keypair',
5 MinCount: 1,
6 MaxCount: 1
7 }
The ImageId refers to the ID of the AMI you'd like to use. This is the ID of the same AMI we used in the manual section. You can get the ID of an AMI through the AWS dashboard.
Since t2.micro is eligible for the free tier, we've also used it here for the InstanceType. Then, we provide the name of the key pair we created earlier (ec2-keypair) to the KeyName parameter, followed by a MinCount and MaxCount. These define how many instances we want the EC2 client to create - at least MinCount, and at most MaxCount. In this case, it's just 1.
Now, let’s use the ec2 instance with these parameters to initiate a request to create an EC2 instance:
1 {
2 Groups: [],
3 Instances: [
4 {
5 AmiLaunchIndex: 0,
6 ImageId: 'ami-0669eafef622afea1',
7 InstanceId: 'i-0609b562cd4c8d9ea',
8 InstanceType: 't2.micro',
9 KeyName: 'ec2-keypair',
10 LaunchTime: 2020-11-04T02:42:52.000Z,
11 Monitoring: [Object],
12 Placement: [Object],
13 PrivateDnsName: 'ip-172-31-21-216.ec2.internal',
14 PrivateIpAddress: '172.31.21.216',
15 ProductCodes: [],
16 PublicDnsName: '',
17 State: [Object],
18 StateTransitionReason: '',
19 SubnetId: 'subnet-06d0924b',
20 VpcId: 'vpc-579f512a',
21 Architecture: 'x86_64',
22 BlockDeviceMappings: [],
23 ClientToken: '3d07c3be-80f3-4ddf-a107-f1b9a4cee746',
24 EbsOptimized: false,
25 EnaSupport: true,
26 Hypervisor: 'xen',
27 ElasticGpuAssociations: [],
28 ElasticInferenceAcceleratorAssociations: [],
29 NetworkInterfaces: [Array],
30 RootDeviceName: '/dev/xvda',
31 RootDeviceType: 'ebs',
32 SecurityGroups: [Array],
33 SourceDestCheck: true,
34 StateReason: [Object],
35 Tags: [],
36 VirtualizationType: 'hvm',
37 CpuOptions: [Object],
38 CapacityReservationSpecification: [Object],
39 Licenses: [],
40 MetadataOptions: [Object],
41 EnclaveOptions: [Object]
42 }
43 ],
44 OwnerId: '<OWNER_ID>',
45 ReservationId: 'r-0dc71b485bd2eea55'
46 }
47 Created instance i-0609b562cd4c8d9ea
Here, we can take a look at the settings used to instantiate our EC2 instance. Currently, no public
DNS is available, in the response, but you can get it from the AWS dashboard if you’d like. Generally,
you’ll be accessing the instance through its id, which is provided here.
Taking a look at the dashboard, we now have two instances up and running:
To manage this instance, you can use any of the several functions provided for these purposes -
describeInstances(), monitorInstances(), startInstances(), stopInstances() and rebootInstances().
Let’s take a look at how we can stop this instance, since it’s already running, and how we can start
it back up again. Also, we can reboot this instance instead of stopping and starting in sequence.
Now, with our instance running and the instance id in hand, we can go ahead and stop it programmatically:
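A sketch of the stop call, reusing the instance id from the response above:

const stopParams = {
    InstanceIds: ['i-0609b562cd4c8d9ea']
};

ec2.stopInstances(stopParams).promise()
    .then(data => console.log(data))
    .catch(err => console.error(err, err.stack));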
This is done in much the same way we perform other requests - set up the required params, pass it
to the adequate method, and read the response.
Here, the InstanceIds accepts an array, assuming that we might have multiple instances we want
to stop running. We’ve put one instance id in here - the one we’ve just created.
Let’s run this code:
1 $ node index.js
1 {
2 StoppingInstances: [
3 {
4 CurrentState: [Object],
5 InstanceId: 'i-0609b562cd4c8d9ea',
6 PreviousState: [Object]
7 }
8 ]
9 }
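Starting it back up again is symmetrical - we simply swap stopInstances() for startInstances(), with the same parameters (a sketch):

const startParams = {
    InstanceIds: ['i-0609b562cd4c8d9ea']
};

ec2.startInstances(startParams).promise()
    .then(data => console.log(data))
    .catch(err => console.error(err, err.stack));

Running the script again: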
1 $ node index.js
1 {
2 StartingInstances: [
3 {
4 CurrentState: [Object],
5 InstanceId: 'i-0609b562cd4c8d9ea',
6 PreviousState: [Object]
7 }
8 ]
9 }
And a quick look confirms that the instance has started running:
When you’d like to reboot an instance - sure, you can just turn it off and on again, though, to avoid
duplicating code, you can just use the rebootInstances() function:
const rebootParams = { InstanceIds: ['i-0609b562cd4c8d9ea'] };

const instancePromise = ec2.rebootInstances(rebootParams).promise();
instancePromise.then(data => {
    console.log(data);
});
1 $ node index.js
Finally, you might want to terminate an instance. Let’s go ahead and terminate both of these
instances, as running them racks up our bill, which is best avoided if we won’t use them immediately
after this:
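A sketch of the terminate call, passing both instance ids from before:

const terminateParams = {
    InstanceIds: ['i-08fd5ad87f9ede25e', 'i-0609b562cd4c8d9ea']
};

ec2.terminateInstances(terminateParams).promise()
    .then(data => console.log(data))
    .catch(err => console.error(err, err.stack));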
1 $ node index.js
1 {
2 TerminatingInstances: [
3 {
4 CurrentState: [Object],
5 InstanceId: 'i-08fd5ad87f9ede25e',
6 PreviousState: [Object]
7 },
8 {
9 CurrentState: [Object],
10 InstanceId: 'i-0609b562cd4c8d9ea',
11 PreviousState: [Object]
12 }
13 ]
14 }
Taking a look at the dashboard, we can see that both instances are terminated:
13. Serverless Computing
So far, throughout all of the Cloud Computing models we've talked about, one thing was common: although someone else maintained the servers, underlying infrastructure and operating systems, you'd typically still have access to them if you so desired.
For example, you can launch an empty EC2 instance and install the software you’d like or customize
it per your needs. You’re renting a virtual machine. The entire thing.
Through time, another Cloud Computing Model came to be - Function as a Service (FaaS). FaaS is
a Cloud Computing model in which you don’t rent out an entire virtual machine. Rather, you rent
out a bit of the computing power to run certain pieces of code and pay as you go.
Regardless of the provider, the flow looks pretty similar - your code runs in response to an event, and you're billed only for the time it spends executing. Some of the major FaaS providers are:
• AWS Lambda
• Google Cloud Functions
• Microsoft Azure Functions
• IBM/Apache’s OpenWhisk
• Oracle Cloud Fn
Functions as a Service are often used in the microservice architecture - where instead of a whole
microservice, you can spin up a simple function that’ll respond to a request, process it and forward
it to another service, or anything along those lines.
For example, uploading an image to a service can trigger a serverless function to process that image,
send a notification to the user or administrator, or update some visualization/analytics service with
the new info.
In terms of AWS - their Lambda service works wonders with many of their services. You can send
AWS SNS notifications to a Lambda function, kicking off a job. You can send queued messages using
SQS, respond to S3 events, react to real-time Kinesis data, transform data and store it into an RDS
database.
Here’s an example - a user uploads an image, and the service layer of your application triggers a
Lambda Function, as the image is being saved onto a file hosting service. This Lambda Function
then triggers an SNS Topic to publish an update message to the team members or users of the
website, informing them of the addition. It also kicks off an email service, which uses an email
service provider like Gmail to send a confirmation to the user who uploaded the image:
Really, you can make any combination of these services - instead of an email service provider, you
can also use SNS, or use SQS to store several images in a queue to be batch-processed at a later date.
Instead of the service layer triggering the Lambda function, you could’ve relied on S3 events, and
have an all-AWS system.
Effectively, AWS Lambda can be used in pretty much any back-end service to replace a simple
microservice or a task. What’s really useful about it is that you just input the code and let it run.
Nothing more, nothing less.
Typically, you’d combine a service like this with other services. Specifically, AWS API Gateway is a
common combination with Lambda. API Gateway is a simple service that listens to HTTP requests
and has a built-in communication path with Lambda.
Once you hit an endpoint, it triggers a Lambda function.
14. AWS Lambda
AWS Lambda works by creating containers for the functions you create using the AWS console. The
first time it’s run, it’ll take a bit to initialize the container - but all subsequent calls will be stable,
consistent and fast.
What makes Lambda very useful is that you can easily set what events trigger the execution of
Lambda code and what happens with that result.
For example, you can make an API that, upon receiving a GET request, passes it on to Lambda, which processes it and sends the result to an SNS topic.
You can upload already written code to AWS Lambda or simply write it in their code editor. Let’s
start off by creating a Function using the AWS console.
Creating a Function
In the AWS console, under “Compute” products, you can find AWS Lambda:
You’ll be faced with three options. You can use a blueprint which will bootstrap you with some com-
mon boilerplate code. This is anything from running an external process using the child_process
module to a script that listens to S3, triggers upon upload and retrieves the metadata of the uploaded
file:
You’ll also be presented with the option to browse the serverless app repository. This differs from
the previous option, as here, you deploy functioning apps. Each of these have a GitHub link to their
source code:
These are great starting points and can help you out if you’re working a lot with AWS Lambda.
However, we’ll be authoring a function from scratch:
Feel free to choose a name for the function and the runtime for the container. We’ll be using Node,
obviously.
Finally, we’ve got to set an execution role for our function. You may not want the function to have
access to some other services, but have access to other services. For basic usage, feel free to select
the basic Lambda permissions:
You can also select existing roles here, in which you can be as detailed with the access as you’d like.
For now, we’ll start off with the basic role.
Clicking “Create Function” will create your function with the information you’ve provided:
Now, we can start coding the function. This can be achieved through three different ways - editing
it inline, uploading a ZIP file with code or uploading from S3.
Some sample code is already populated in the inline editor. If you go ahead and click “Test”, the
following message will appear in the on-screen console:
1 Response:
2 {
3 "statusCode": 200,
4 "body": "\"Hello from Lambda!\""
5 }
6
7 Request ID:
8 "e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57"
9
10 Function logs:
11 START RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57 Version: $LATEST
12 END RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57
13 REPORT RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57 Duration: 1.24 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 64 MB
If need be, you can set environment variables, tags, basic settings, etc. right below the code:
Here, you can see a trigger and a destination with the Lambda function being between them. This
is where the example before ties in - say we want to trigger the function from an API, have the
function process that request and send the results of that to another service, such as SNS.
First, let’s take care of the trigger:
Many things can be set as a trigger for AWS Lambda, and the API Gateway is a common one. The
AWS API Gateway is a really useful service that can be used to bootstrap APIs in seconds. You can
build RESTful or WebSocket APIs and have them served up for another service to use.
Let’s select the API Gateway and input some info about our API:
Here, you can choose between creating a new API or selecting an existing one. Since we don’t have
one already, let’s create it. It’ll be an HTTP API, with open security. Alternatively, for HTTP APIs,
you can create a JWT authorizer, and for a REST API, you can set an API key or an IAM.
In the additional settings, we’ve set the name and deployment stage of the API:
The API endpoint can be found under the highlighted link. Hitting that endpoint in the browser will return the default "Hello from Lambda!" response.
When we hit the API endpoint, it sends an event to the service it’s wired to in the designer. In our
case, it sends an event to the Lambda function. The default implementation of the function is:
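For reference, the generated boilerplate (which produces the "Hello from Lambda!" test output we saw earlier) looks like this:

exports.handler = async (event) => {
    // TODO implement
    const response = {
        statusCode: 200,
        body: JSON.stringify('Hello from Lambda!'),
    };
    return response;
};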
The event parameter contains the current event info. In our case, it’s the event from the HTTP
request.
This event is picked up and the function just returns the response. Since this is analogous to a "Hello World!" message, let's change the code in the editor to something else:
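A sketch of the modified handler, matching the description below - the exact field names in the response body are assumptions:

exports.handler = async (event, context) => {
    // A random number between 0 and 10
    const randomNumber = Math.round(Math.random() * 10);

    const result = {
        randomNumber: randomNumber,
        functionName: context.functionName,
        memoryLimit: context.memoryLimitInMB
    };

    const response = {
        statusCode: 200,
        body: JSON.stringify(result),
    };

    return response;
};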
The context contains all the information about the function itself. How long it has been running,
how much memory it’s consuming among other things. This is viewed as the runtime information.
Here, we’ve generated a random number, between 0 and 10. We put that number in a json object,
alongside the function’s name and memory limit, extracted from the context object.
Then, we’ve stringified this json object and put it in the body of the response which is returned by
Lambda.
Don’t forget to “Deploy” the code, using the “Deploy” button, which will commit the change to our
Lambda function. Then, let’s hit the endpoint again with a GET request via the browser:
Great, we’ve connected an API and returned the result of the Lambda function! It returned a random
number, the name of our function and the memory limit on our function.
Now, instead of just returning the result, let’s send it off to another service, like SNS. To do so, we’ll
add a destination for our result, in the designer:
Here, you’re faced with a few options. The Source in this sense is considered the Lambda function.
It can be an asynchronous or stream invocation. Stream invocations are for streaming/real-time
services such as Kinesis or DynamoDB. Since that’s not what we’re doing, we’ll be going with an
asynchronous invocation.
Depending on the service you’d like to invoke, your function will have to have certain roles. Here,
if we just try to use it, AWS will attempt to give our function the correct role for that service. If it
fails, you can always set the role yourself by going to “Permissions”:
Since we’re using the CLI, make sure that the IAM user you’re using has the AWSLambdaFullAccess
policy attached, or at least has permission to use lambda:InvokeFunction.
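The exact command isn't shown here; an asynchronous invocation (which is what returns the 202 status code below) would look roughly like this, with the function name as a placeholder:

$ aws lambda invoke \
    --function-name <YOUR_FUNCTION_NAME> \
    --invocation-type Event \
    response.json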
1 {
2 "StatusCode": 202
3 }
And we can verify that we’ve gotten an SMS and email from our SNS topic, with the information
regarding the call to SNS:
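For reference, the Lambda code being described here might look roughly like the sketch below - the topic ARN, message contents and callback-style handler are all assumptions (see the note that follows about not awaiting the publish call):

const AWS = require('aws-sdk');
const sns = new AWS.SNS();

exports.handler = (event, context, callback) => {
    // Hypothetical message contents - adjust to whatever you want to publish
    const messageContents = {
        functionName: context.functionName,
        event: event
    };

    const params = {
        Message: JSON.stringify(messageContents),
        TopicArn: '<YOUR_SNS_TOPIC_ARN>'
    };

    sns.publish(params, (err, data) => {
        if (err) {
            callback(err);
        } else {
            callback(null, { statusCode: 200, body: JSON.stringify(data) });
        }
    });
};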
Here, we start up an AWS.SNS() instance, construct a json object with the information we want to
send, set up the parameters for SNS and publish the message.
NOTE: If you make this function async and await the sns.publish() call, or capture the Promise
returned from sns.publish().promise(), your function likely won’t work. The function will time
out and terminate before the Promise can be fulfilled and the message will never be sent to the topic.
Let’s hit our API Gateway endpoint again:
And this, in turn, calls the code from the Lambda function, which sends us an SMS message:
Additional Resources
Thank you for making your way to the end of the book. We hope you’ve enjoyed it and found it
informative.
If you found any bugs, typos or mistakes, please feel free to let us know, and we’ll update the book
as soon as possible.
Over time we will continually update this book by fixing issues, updating information, adding new
relevant content, etc.
Have any feedback on what could be fixed/changed/added? Feel free to contact us! Our
email is [email protected].
GitHub
The full source code for the examples in this book is available on GitHub²⁶.
If you'd like to keep exploring, here are some other useful resources:
• AWS SDK for Node.js¹⁸
• AWS JavaScript development forum¹⁹
• AWS JavaScript Developer Center²⁰
• AWS What's New²¹
• AWS News Blog²²
• AWS Training²³
• AWS SDK for JavaScript Developer Guide²⁴
• aws-sdk-js on GitHub²⁵
¹⁸https://aws.amazon.com/sdk-for-node-js/
¹⁹https://forums.aws.amazon.com/forum.jspa?forumID=148
²⁰https://aws.amazon.com/developer/language/javascript/
²¹https://aws.amazon.com/new/?whats-new-content-all.sort-by=item.additionalFields.postDateTime&whats-new-content-all.sort-order=desc
²²https://aws.amazon.com/blogs/aws/
²³https://aws.amazon.com/training/
²⁴https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/welcome.html
²⁵https://github.com/aws/aws-sdk-js
²⁶https://github.com/StackAbuse/sa-ebook-getting-started-with-aws-in-node-js-code