Getting Started with Amazon Web Services (AWS) in Node.js
StackAbuse
© 2020 StackAbuse
Copyright © by StackAbuse.com
Authored by Joshua Simpson, David Landup, Robley Gori
Contributions by Janith Kasun
Edited by David Landup
Cover design and illustrations by Jovana Ninković
The images in this book, unless otherwise noted, are the copyright of StackAbuse.com.
The scanning, uploading, and distribution of this book without permission is a theft of the content
owner’s intellectual property. If you would like permission to use material from the book (other than
for review purposes), please contact [email protected]. Thank you for your support!
First Edition: November 2020
Published by StackAbuse.com, a subsidiary of Unstack Software LLC.
The publisher is not responsible for links, websites, or other third-party content that is not owned
by the publisher.
Contents

1. Getting Started with Amazon Web Services (AWS) in Node.js
2. Prerequisites
3. Cloud File Hosting
4. AWS Simple Storage Service (S3)
5. Messaging Support - SNS and SQS
6. AWS Simple Notification Service (SNS)
7. AWS Simple Queue Service (SQS)
8. Pairing SNS and SQS Together
9. Database Support
10. AWS Relational Database Service (RDS)
11. Cloud Computing
12. AWS Elastic Compute Cloud (EC2)
13. Serverless Computing
14. AWS Lambda
Additional Resources


1. Getting Started with Amazon Web Services (AWS) in Node.js
Amazon Web Services¹ (AWS) is a cloud computing provider with a number of extremely popular
services. Ever since their launch back in 2006, they’ve become a key player in the development and
deployment of major enterprise applications. Their services are scalable, flexible, and groundbreaking in many aspects, while keeping the cost relatively low compared to self-hosting.
These are just some of the reasons why major companies like Adobe, Airbnb, Autodesk, BMW, the
European Space Agency, Ticketmaster, Xiaomi, Twitch, Netflix, Facebook, LinkedIn, Twitter, etc.
started hosting their applications on the AWS platform.
There are many Amazon Web Services in existence, and the list keeps growing. At the moment,
175 fully-featured services are being offered by Amazon. Of course, the usage of these isn’t equally
distributed - some have seen usage in a large number of applications while some are fairly unknown.
Some of the most used services include:

• Amazon Simple Storage Service (S3)
• Amazon Elastic Compute Cloud (EC2)
• AWS Lambda
• Amazon Glacier
• Amazon Simple Notification Service (SNS)
• Amazon Simple Queue Service (SQS)
• Amazon Relational Database Service (RDS)

We’ve compiled this beginner-level book aiming at novice developers who’d like to get to know AWS
through concrete, practical examples and useful tips. We’ll be covering S3, EC2, Lambda, SNS, SQS
and RDS in an attempt to get our readers up to speed with some of AWS’ most popular services.
Each chapter will feature an introduction to the topic and a problem that the relevant service solves,
followed by a setup and a demo application.
By the end of this book, you should be able to provision scalable AWS-integrated JavaScript
applications with a solid knowledge foundation and to get up to speed with any of the other services
offered by this tech-giant.
¹https://aws.amazon.com/
2. Prerequisites
Node.js

Needless to say, to follow the contents of the book, you’ll want to have Node.js installed on your
machine. We’ll be using npm to install modules such as Express that’ll help us build demonstration
applications faster and easier.

Amazon Web Services Account

Amazon Web Services (AWS) provides a collection of tools for building applications in the cloud. To
interact with them and use them, you’ll obviously need an AWS account.
If you don’t already have one, head over to the AWS frontpage and sign up for an account. Depending
on your usage, select either a Professional or Personal account. If you intend to use it within a
company or an educational institution/organization - you should select the Professional option.
AWS has a free tier² for a lot of awesome stuff! DynamoDB, AWS Lambda, Amazon SNS, Glacier,
SES, etc. are always free, though some monthly limitations are imposed.
Services like EC2, S3, RDS, API Gateway, Cloud Directory, etc. are free for 12 months, with similar
limitations of monthly usage.
And finally, services like SageMaker, Lightsail, GuardDuty, etc. offer a free trial, after which you’ll
have to open your wallet to continue using them.
Connecting an AWS account with our applications can be done in several ways - directly through
credentials, through an IAM user, via the CLI…
For each application, we’ll use a different way to connect an application to our account. Feel free to
use the one you prefer, as they’re interchangeable.
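As a preview of how interchangeable these are, here's a sketch of two of the approaches using the aws-sdk package (all values are placeholders):

const AWS = require('aws-sdk');

// Option 1: pass credentials to the client directly
const s3Direct = new AWS.S3({
    accessKeyId: '<YOUR_ACCESS_KEY_ID_HERE>',
    secretAccessKey: '<YOUR_SECRET_ACCESS_KEY_HERE>'
});

// Option 2: load a named profile from the shared ~/.aws/credentials file
const credentials = new AWS.SharedIniFileCredentials({ profile: '<YOUR_PROFILE_HERE>' });
const s3Profile = new AWS.S3({ credentials: credentials });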

PLEASE NOTE: AWS, like all software, is continually being updated, both in terms of
design and functionality. We’ve done our best to use the latest versions and updates in
this book.
The user interfaces you’ll be seeing on your machine, at the time of reading, may be
slightly or significantly different from the interfaces seen in the book. We’ll update the
book when a new interface is released in an attempt to keep it as up-to-date as we can.
²https://aws.amazon.com/free/

Postman and curl

Postman³ is a useful tool for creating and sending requests. We’ll be using it in some of the chapters
to test out our endpoints.
Postman is optional, and really, you can use any tool to test out the endpoints, even your internet
browser. We’ll also be using curl in some chapters, due to its simplicity.

Docker

Docker⁴ allows us to bundle up our applications into small, easily deployable units that can be run anywhere Docker is installed. This means no more 'but it works on my machine!' headaches.
This book will assume basic familiarity with Docker, and won’t be going into any depth on it.
³https://www.postman.com/
⁴https://www.docker.com/
3. Cloud File Hosting
Much of the software and web apps we build today require some kind of hosting for files - images, invoices, audio files, etc. The traditional way to store files was just to save them on the server's drives.
This requires the server's drives to be large enough to reasonably hold all the data we might want to save.
Once the drives are filled up, we’d have to manually insert new drives, or pay a service provider for
more space.
This introduces a step-like upgrade path: an investment is made for the initial capacity, and each new size upgrade requires another investment that will only suffice for a set amount of time. This makes scaling hard and expensive. Much thought has to go into how much data can be saved, and how the data is saved and handled.
This also requires developers to dabble with the file system, which might change between servers, further complicating things.
Additionally, what if these files are large? It’s not uncommon to save and serve large images, video
or audio files. Storing a large amount of these can be fixed with expensive hardware upgrades, such
as installing more storage drives. However, what happens when the end-user wants to access this
data?
An index page of a website that just serves a static file can be as small as 1KB. Larger pages with hundreds of lines, which also import various JavaScript files of medium to large size, can easily amount to ~100KB. Even that is 50 times smaller than a high-quality 5MB image.
Loading an image like that alongside a small HTML file can lead to slow loading times and a bad user experience, even if we load the image lazily while the user is already on the page:

This puts a disproportionate and unnecessary strain on the server's resources. The same amount of resources required to serve a single large image could serve hundreds of pages to other users.
To offload the servers, developers started hosting files with cloud storage providers such as AWS S3,
Google Cloud Storage, etc.
These services allow developers to ditch having to store the files themselves, handle their structures
and mess around with the file system.
Instead, the file hosting service takes care of this for them. Additionally, a huge amount of resources
is released back into the server and application, which can then run faster and serve other, smaller
files:

Another significant benefit is security. If these files contain sensitive data, a developer would otherwise spend a long time securing the file storage system - time taken away from development, spent on something a developer doesn't really need to handle to get the application up and running.
4. AWS Simple Storage Service (S3)
Amazon’s solution to file hosting is AWS Simple Storage Service, most often referred to as S3.
S3, or Simple Storage Service, is a cloud storage service provided by Amazon Web Services. Using S3,
you can host any number of files while paying for only what you use.
S3 also provides multi-regional hosting, serving customers from the region nearest to them, so requested files are delivered quickly and with minimum delay.
The service is based on Buckets - which are analogous to having a traditional server. Each of these
buckets can be set up in a different region to optimize speed.

Setting up the Environment


Let’s go ahead and set up an S3 Bucket for use, after which, we’ll set up our development environ-
ment. You can also create a bucket programmatically, through Node.js, which is covered right after
we set up the development environment.

AWS Credentials

To get started, you need to generate AWS security credentials - an Access Key ID and Secret Access Key - first. Using them, you'll give your application access to your account. You can do this through an IAM (Identity and Access Management) user or directly. We'll be setting up an IAM user in a later chapter, so for this application, you'll access your account directly.
To do so, login to your AWS Management Console and click on your username:

Then select Access Keys -> Create New Access Key:

After that you can either copy the Access Key ID and Secret Access Key from this window or you
can download it as a .CSV file:

Creating an S3 Bucket

Now let’s create an AWS S3 Bucket with proper access. We can do this using the AWS management
console or by using Node.js.
To create an S3 bucket using the management console, go to the S3 service by selecting it from the
service menu:

Select “Create Bucket” and enter the name and region for it. If you already know which region the majority of your users will come from, it's wise to select a region as close to them as possible. This ensures that files will be served in a more optimal timeframe.

The name you select for your bucket must be a unique name among all AWS users, so try a new one
if the name is not available:

Follow through the wizard and configure permissions and other settings per your requirements. By
default, you’ll have some options turned on or off. These are options such as blocking public access
and bucket versioning.
To create the bucket using Node.js, we’ll first have to set up our development environment.

Development Environment

Let’s get started with an example by configuring a new Node.js project:

$ npm init -y

To start using any AWS Cloud Services in Node.js, we have to install the AWS SDK (Software Development Kit).
Install it using your preferred package manager like npm:

$ npm i --save aws-sdk

Once aws-sdk is installed, it’ll be automatically added to the dependencies section of our package.json
file thanks to the --save option.

Demo Application

Creating an S3 Bucket

If you have already created a bucket manually, you may skip this part. But if not, create a file, say,
create-bucket.js in your project directory.
Import the aws-sdk library to access your S3 bucket:

const AWS = require('aws-sdk');

Now, let’s define three constants to store the ID, SECRET, and BUCKET_NAME. These are used to identify
and access our bucket:

// Enter copied or downloaded access ID and secret key here
const ID = '';
const SECRET = '';

// The name of the bucket that you have created
const BUCKET_NAME = 'test-bucket-2222';

Now, we need to initialize the S3 interface by passing our access keys:

const s3 = new AWS.S3({
    accessKeyId: ID,
    secretAccessKey: SECRET
});

With the S3 interface successfully initialized, we can go ahead and create the bucket:

const params = {
    Bucket: BUCKET_NAME,
    CreateBucketConfiguration: {
        // Set your region here
        // Note: us-east-1 is the default region and rejects an explicit
        // LocationConstraint - omit CreateBucketConfiguration entirely for it
        LocationConstraint: 'ap-northeast-2'
    }
};

let promiseResult = s3.createBucket(params).promise();

promiseResult.then(data => {
    console.log('Bucket Created Successfully', data.Location);
}).catch(err => {
    console.error(err, err.stack);
});

The data object contains useful information about the object (in this case, Bucket) we’re working
with. Here, we’ve extracted the Location parameter, which holds the URL of the bucket we’ve
created.
You can use a callback instead of promises here as well.
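For comparison, a quick sketch of the callback-based equivalent:

// Callback-style equivalent of the promise-based call above
s3.createBucket(params, (err, data) => {
    if (err) {
        console.error(err, err.stack);
    } else {
        console.log('Bucket Created Successfully', data.Location);
    }
});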
At this point we can run the code and test if the bucket is created on the cloud:

$ node create-bucket.js

If the code execution is successful you should see the success message, followed by the bucket address
in the output:

Bucket Created Successfully http://test-bucket-2415soig.s3.amazonaws.com/

You can visit your S3 dashboard to verify that the bucket is created:

Uploading Files

At this point, let’s implement the file upload functionality. In a new file, e.g. upload.js, import the
aws-sdk library to access your S3 bucket and the fs module to read files from your computer:

const fs = require('fs');
const AWS = require('aws-sdk');

Again, we need to define the ID, SECRET, and BUCKET_NAME constants, and initialize the S3 client as we did before.
Now, let’s create a function that accepts a fileName parameter, representing the file we want to
upload:

const uploadFile = (fileName, bucketName) => {
    // Read content from the file
    const fileContent = fs.readFileSync(fileName);

    // Setting up S3 upload parameters
    const params = {
        Bucket: bucketName, // Name of your bucket, passed to the function
        Key: fileName,      // File name you want to save as in S3
        Body: fileContent
    };

    // Uploading file to the bucket
    let promiseResult = s3.upload(params).promise();

    promiseResult.then(data => {
        console.log(`File uploaded successfully. ${data.Location}`);
    }).catch(err => {
        console.error(err, err.stack);
    });
};

What we’ve done here is read the file’s contents as a buffer. After reading it, we can define the
needed parameters for the file upload, such as Bucket, Key, and Body - passing the buffer with the
file contents as the body.
Besides these three parameters, there’s a long list of other optional parameters. To get an idea of the
things you can define for a file while uploading, here are a few useful ones:

• StorageClass: Defines the storage class for the object. S3 is intended to provide fast file serving, but if files aren't accessed frequently, you can use a different storage class. For example, files which are hardly ever touched can be stored in “S3 Glacier Storage”, where the price is much lower than “S3 Standard Storage” - at the cost of slower access and a different service level agreement.
• ContentType: Sets the MIME type of the object. The default type will be binary/octet-stream. Setting a MIME type like image/jpeg will help browsers and other HTTP clients identify the type of the file.
• ContentLength: Sets the size of the body in bytes, which comes in handy if the body size cannot be determined automatically.
• ContentLanguage: Set this parameter to define which language the contents are in. This also helps HTTP clients identify or translate the content.

For the Bucket parameter, we’ll use our bucket name, whereas for the Key parameter we’ll add the
file name we want to save as, and for the Body parameter, we’ll use fileContent.
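For illustration, here's what the params object might look like with a few of these optional fields filled in - the extra values are examples of ours, not part of the demo:

const params = {
    Bucket: bucketName,
    Key: fileName,
    Body: fileContent,
    // Optional extras - example values
    StorageClass: 'STANDARD_IA', // an infrequent-access storage class
    ContentType: 'image/jpeg',   // MIME type instead of binary/octet-stream
    ContentLanguage: 'en'        // language of the contents
};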
With that done, we can upload any file by passing the file name to the function:

uploadFile('cat.jpg', BUCKET_NAME);

You can replace cat.jpg with a file name that exists in the same directory as the code, a relative file
path, or an absolute file path.
At this point, we can run the code and test out if it works:

$ node upload.js

If everything is fine, you should see an output with the location of the data you just uploaded:

File uploaded successfully. https://test-bucket-2222.s3.ap-northeast-2.amazonaws.com/cat.jpg

If there are any errors, they’ll be displayed on the console as well.


Additionally, you can go to your bucket in the AWS Management Console and make sure the file is
uploaded:

Download Files

Sometimes, you might want to download a file from an S3 bucket. This may be done for additional
processing or editing on the file, after which, you’d upload it again, or store it on another service or
computer.
Let’s make a downloadFile() function for this:

const downloadFile = (filePath, bucketName, key) => {
    // Setting up S3 parameters
    const params = {
        Bucket: bucketName, // Name of your bucket
        Key: key            // Name of the file you want to download
    };

    // Get the object
    s3.getObject(params, function(err, data) {
        if (err) {
            throw err;
        }
        fs.writeFileSync(filePath, data.Body);
        console.log('File downloaded successfully.');
        console.log(data);
    });
};

Running this code will download the file with the corresponding key from the bucket specified by
the bucketName, and save that file on the filePath.
If we call:

downloadFile('cat.jpg', BUCKET_NAME, 'cat.jpg');

We'll be greeted with the cat.jpg file that we uploaded in the previous step, saved in the project folder as cat.jpg. You can specify a more elaborate filePath if you'd like.
We’re also greeted with the output:

File downloaded successfully.
{
  AcceptRanges: 'bytes',
  LastModified: 2020-11-01T23:25:24.000Z,
  ContentLength: 1246129,
  ETag: '"e08f8b00704d429c90e151bad272dc28"',
  ContentType: 'image/jpeg',
  Metadata: {},
  Body: <Buffer 89 50 4e 47 0d 0a 1a 0a 00 00 00 0d 49 48 44 52 00 00 04 b0 00 00 02 58 08 02 00 00 00 fd 84 88 4d 00 00 00 19 74 45 58 74 53 6f 66 74 77 61 72 65 00 ... 1246079 more bytes>
}

As you can see, the file is downloaded as a buffer, which we can use with fs to write it into a file.
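For larger files, buffering the whole object in memory can be wasteful. The SDK's request object also exposes a readable stream, so a streaming variant could look like this - a sketch using the same s3 and fs instances:

// Stream the object straight to disk instead of buffering it in memory
const downloadFileStream = (filePath, bucketName, key) => {
    const params = {
        Bucket: bucketName,
        Key: key
    };

    s3.getObject(params)
        .createReadStream()
        .on('error', err => console.error(err, err.stack))
        .pipe(fs.createWriteStream(filePath))
        .on('close', () => console.log('File downloaded successfully.'));
};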

Deleting Files

Similar to the previous two functions, with the same imports, we’ll also define a deleteFile()
function. It follows the same format - we specify the bucket we’re working with, the key we want
to delete and set up the required parameters for the s3 object.
Then, all we have to do is call the deleteObject() function, with that information:

const deleteFile = (bucketName, key) => {
    // Setting up S3 parameters
    const params = {
        Bucket: bucketName, // Name of your bucket
        Key: key            // Name of the file you want to delete
    };

    // Delete the object
    let promiseResult = s3.deleteObject(params).promise();

    promiseResult.then(data => {
        console.log(`File deleted successfully. ${data}`);
    }).catch(err => {
        console.error(err, err.stack);
    });
};

Now, if we wanted to delete our cat.jpg file from the bucket, we could call the function:

deleteFile(BUCKET_NAME, 'cat.jpg');

And we’d be greeted with:

File deleted successfully.
{}

And now, the S3 bucket is empty:


5. Messaging Support - SNS and SQS
When working on distributed applications which have many independent parts working together, communication between them is paramount. The way we notify other services that there has been an update or a result, and the way they react (or don't) to that update, is the fine line between a seamless user experience and total chaos.
Message-Driven Architecture (MDA) exists for this very reason. It's an abstract idea of having many independent services/components that talk to each other via messages. Say a user is registering with an application. The Registration Service takes care of the registration. Upon finishing its task, it sends a message to the Email Notification Service, notifying it of the recent registration. This results in the Email Notification Service sending a welcome email to the user.
This makes both services loosely coupled, but highly cohesive. It’s easy to test them as they’re
independent, and they perform their respective jobs regardless of other services. If the Email
Notification Service fails to send the email, the registration is still complete and fully valid.
A message is also an abstract concept - and messages are typically implemented as either Events or
Commands.
Events are emitted upon completion of a task. For example, an event is sent out from the Registration
Service when a user registers. Many services can subscribe to this event and react accordingly. One
service can run an update on application statistics, one service can notify the user, one service can
trigger a database backup, etc.
Commands are sent out in order to complete a task. For example, a command is sent out from the
Registration Service to other services that do their respective jobs. The result is the same, though the
way these messages are transmitted is different.
An event doesn't have a recipient. It's almost like firing a flare gun into the sky - it doesn't have a target, though everyone looking out for the flare knows what to do when they see it:

event illustration

By contrast, a command has a clear target or multiple targets. The commands are sent off to
individual recipients who then respond to that command:

command illustration
6. AWS Simple Notification Service (SNS)
A lot of technology that we see relies on a very immediate request/response cycle - when you make
a request to a website, you get a response containing the website you requested, ideally seemingly
immediately. This all relies on the user making the active decision to request that data.
A very different, though very common model is the ‘publish/subscribe’ model, often referred to
as the Pub/Sub model. The AWS Simple Notification Service (SNS) is a super scalable service that
allows users to implement the publish/subscribe model with ease.
This allows us to send texts, emails, push notifications, or other automated messages to other targets
across multiple channels at the same time.
In this chapter, we’ll build a web application that publishes messages to multiple subscribers, using
SNS. There are multiple ways we can go about this. We can send informational messages via email
or SMS, but we can also send messages to other web applications via HTTP.

The Publish/Subscribe Model


The publish/subscribe model is a way to achieve event-driven architecture and consists of two
components in a system:

• Publisher: A service that can broadcast out messages to other services listening (subscribed)
to it.
• Subscriber: Any service that the publisher will broadcast to.

In order to become a subscriber, a service needs to notify the publisher that it wants to receive its broadcasts, as well as where it wants to receive them - at which point the publisher will include it in its list of targets when it next publishes a message.
A good metaphor for the publish/subscribe model is any newsletter that you’ve signed up for! In
this instance, you have actively gone to the publisher and told them you want to subscribe as well
as given them your email.
Nothing is done immediately once you’ve subscribed, and you don’t receive any previous issues of
the newsletter.
When the publisher publishes a message (sends out their monthly newsletter) - an email arrives. You
can then choose to do what you will with the email - you could delete it, read it, or even act on some
of the details in it.

Setting up an SNS Topic


To get started, we first need to set up a topic on AWS SNS. A topic is what we would consider a
‘publisher’ - we can send messages to a topic, which it will then publish to all of its subscribers.
Before we get started, for the SMS part of this chapter, you’ll need to make sure you’re setting up
your resources in a region that has access to it. In the top right corner of your AWS dashboard, click
the region name (the default is Ohio) and change it to the closest region to you that is in this list⁵.
At your AWS dashboard, select ‘Simple Notification Service’ and hit ‘Topics’ on the left hand side,
followed by the ‘Create topic’ button.
You’ll be presented with a screen that asks you to provide some basic information about the SNS
Topic:

Create Topic Image

This screen has several options, though only one group is displayed by default: the name (which is mandatory) and the display name, which can be optionally set - the display name is used if you publish to SMS subscribers from the topic.
You’ll also be prompted to specify if you want to use a FIFO or Standard topic type. This cannot
be modified after the topic has been created. If you specifically require the message-order to be
preserved, you’ll want to use the FIFO type. This goes hand-in-hand with AWS SQS, which is covered
in the next chapter as well. However, this somewhat limits the throughput of messages - down to
300 publishes per second.
⁵https://docs.aws.amazon.com/sns/latest/dg/sns-supported-regions-countries.html

This is by no means a small amount, but with really large topics, it can be a factor.
For other purposes, you’d want to use a Standard topic type - which doesn’t necessarily preserve
the order of messages if you send them in batches, but it offers the highest throughput you can get
at that time. It also has support for more subscription protocols, such as SQS, Lambda, HTTP, SMS,
email and even mobile application endpoints.
We’ve chosen the Standard topic type, since we’ll be working with HTTP, SMS and email in this
chapter, as well as SQS in the following chapters.
Some of the other options include:

• Message Encryption: Encrypts messages after they're sent by the publisher. This is really only useful if you're sending highly sensitive/personal data.
• Access Policies: Defines exactly who can publish messages to the topic.
• Retry Policy: In case a subscriber fails to receive a published message for whatever reason.
• Delivery Status Logging: Allows you to set up an IAM (Identity and Access Management) role in AWS that writes delivery status logs to AWS CloudWatch.

For now, we're going to fill in a name and a display name, select the topic type, and then scroll to the bottom and hit 'Create topic'. Take note of the ARN⁶ of the newly created topic, as we'll need it later.
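As an aside, topics can also be created from code rather than the console - a sketch using the sns client we'll configure later in this chapter:

// Create a Standard topic by name (returns the existing ARN if it already exists)
let promiseResult = sns.createTopic({ Name: 'myStackAbuseTopic' }).promise();

promiseResult.then(data => {
    console.log(`Topic ARN: ${data.TopicArn}`);
}).catch(err => {
    console.error(err, err.stack);
});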

Setting up an IAM User


Just like with S3, we’ll be using the AWS JavaScript SDK to interact with AWS SNS - and to be able
to do that, we’ll need a set of credentials that the SDK can use to send requests to AWS.
This time around, we'll be creating an IAM user. You can work with root credentials, just like last time, though it's often unwise to give everyone credentials to the Root User. You can also create an IAM user, with these same steps, for working with S3 if you'd like.
Open the ‘Services’ menu that we used to search earlier, and this time search for IAM. You’ll see a
screen that looks like this:

IAM Main View

⁶https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html

Click ‘Users’ on the left, then select ‘Add user’ - you’ll be faced with a screen that looks like this:

SNS user creation

Let’s create a user with the name SNSUser, and check the box for programmatic access. We’ll want
to access it through our application programmatically, not only through the AWS console.
This allows anybody with the credentials to access specific parts of AWS via the CLI, or the JavaScript
SDK we’re going to use. We don’t need to give them AWS Management Console access, as we don’t
plan on having those credentials interact with AWS through a browser, like we’re doing now. The
users with this access will be able to access AWS programmatically, and that’s it.
Click ‘Next’, and you’ll be presented with permissions. Click on the ‘Attach existing policies directly’
button and by searching ‘SNS’, you’ll easily be able to find the ‘SNSFullAccess’ option:

Attach policy view

IAM users, roles, and policies are all a big topic that is definitely worth investigating - for now
though, this should work for us.
By hitting ‘Next: Tags’ in the bottom right corner, and then ‘Next: Review’ in the same location, you
should see a summary screen that looks something like this:

IAM user review



Note: Make sure you copy the Access Key ID and Secret Access Key or download the .CSV file, as this is the only time you can fetch these credentials - otherwise you'll need to create a new user.
Whilst we're talking about credentials, again, make sure you do not post these credentials anywhere online, or commit them to a Git repository. Bad actors will scour GitHub for repositories with credentials in them so that they can get access to, and use resources on, your AWS account, which will cost you some sweet money.
Finally, we’re going to set our credentials locally so that our Node application can use them in the
next step.
The default region in AWS is us-east-2 - which is based in Ohio. If you changed it while setting up
your SNS topic, set this to the corresponding region (for example, eu-west-1):

# Create the directory first if it doesn't exist yet
mkdir -p ~/.aws
touch ~/.aws/credentials
echo '[sns_profile]' >> ~/.aws/credentials
# The access key ID from the IAM user
echo 'aws_access_key_id = <YOUR_ACCESS_KEY_ID_HERE>' >> ~/.aws/credentials
# The secret access key from the IAM user
echo 'aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY_HERE>' >> ~/.aws/credentials
# From the regions page, examples include: us-east-1, us-west-1, eu-west-1, etc.
echo 'region = <YOUR_AWS_REGION_HERE>' >> ~/.aws/credentials

Select the closest available region to yours for optimal performance.


The credentials file’s location depends on your operating system. If you’re on Linux or MacOS,
you’ll find both the credentials and config files under:

~/.aws/config
~/.aws/credentials

If you’re on Windows, they’ll be under:

%USERPROFILE%\.aws\config
%USERPROFILE%\.aws\credentials

Demo Application
The application will have a few ways of notifying subscribers and we’ll be starting off with emails.
We’ll bootstrap it with Express and it’ll have two endpoints. The first will be for adding subscribers
to our topic, and the second will be for sending a notification to all of our subscribers.
Note: With the email approach, we’re laying down the foundation for other approaches as well. The
difference between sending SMS, HTTP or email notifications is just a few parameters in the calls
to SNS.
Firstly, let’s create a folder for our project in the terminal, move into the directory, and initialize our
Node app with the default settings:

$ mkdir node-sns-app
$ cd node-sns-app
$ npm init -y

Next, we need to install the Express and AWS-SDK modules so that we can use them both:

$ npm install express --save
$ npm install aws-sdk --save

Next, in the same directory, make a file called index.js:

$ touch index.js

This file will initially contain the boilerplate code required to set up a basic Express app/server, as well as the code required to connect to AWS. We'll instantiate a credentials constant from the ~/.aws/credentials file we've created in the previous steps, for the profile we've set within it.

This const credentials will be passed to the AWS.SNS constructor, alongside our preferred region.
This in turn gives us an sns instance that we can use to send messages to topics existing on our
account:

const express = require('express');
const app = express();

const AWS = require('aws-sdk');
const credentials = new AWS.SharedIniFileCredentials({profile: 'sns_profile'});
const sns = new AWS.SNS({credentials: credentials, region: 'eu-west-2'});

const port = 3000;

app.use(express.json());

app.get('/status', (req, res) => res.json({status: 'ok', sns: sns}));

app.listen(port, () => console.log(`SNS App listening on port ${port}!`));

For the /status endpoint, we’ve simply returned the JSON contents of our sns instance. Now, we
can go ahead and run the file, just to check if AWS is working correctly with our application:

$ node index.js

Visiting localhost:3000/status will print out a big chunk of JSON that has your AWS credentials
in it:

localhost status page with credentials

If that works, then we can move on and create our endpoints.


Note: Make sure to remove the credentials from the status endpoint before pushing your application
anywhere public.
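If you'd rather keep a health check around, a stripped-down sketch that confirms the server is up without leaking anything could be:

// Report only a simple health flag - not the sns instance itself,
// since its serialized form includes the resolved credentials
app.get('/status', (req, res) => res.json({status: 'ok'}));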

Subscription Endpoints

First, we need to add a POST endpoint for adding subscribers. Below the /status endpoint, we’ll
add the /subscribe endpoint:

app.post('/subscribe', (req, res) => {
    let params = {
        Protocol: '<PROTOCOL>',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: '<ENDPOINT>'
    };

    sns.subscribe(params, (err, data) => {
        if (err) {
            console.log(err);
        } else {
            console.log(data);
            res.send(data);
        }
    });
});

If you’re not constrained to using only callbacks, AWS supports the new promise() function, which
can be used to promisify your code very easily.
Instead, we could write:

let promiseResult = sns.subscribe(params).promise();

promiseResult.then(data => {
    console.log(data);
}).catch(err => {
    console.error(err, err.stack);
});

Okay, let's walk through this. First, we're creating a POST endpoint. Inside of that endpoint, we're creating a params variable, ready to hand our subscribe request off to SNS.
The variable needs a few things:

• Protocol: This could be HTTP/S, EMAIL, SMS, SQS (if you want to use AWS’ queueing service),
or even a Lambda function.
• TopicArn: This is the ARN - a unique identifier for the SNS topic you set up earlier. If you don’t
have it, go and grab it from your Topic in your browser and paste it in the code now.
• Endpoint: The endpoint type depends on the protocol. Because we’re sending emails, we would
give it an email address, but if we were setting up an HTTP/S subscription, we would put a URL
address instead, or a phone number for SMS.

Since we'd like to send emails, we'll set our parameters like so:

let params = {
    Protocol: 'EMAIL',
    TopicArn: '<YOUR_TOPIC_ARN_HERE>',
    Endpoint: req.body.email
};

For an SMS endpoint, we’d be setting the parameters as:

let params = {
    Protocol: 'SMS',
    TopicArn: '<YOUR_TOPIC_ARN_HERE>',
    Endpoint: req.body.number
};

The subscribe() function accepts these parameters and performs the subscription. It effectively
subscribes an endpoint to an AWS SNS topic. For HTTP and email subscriptions, the owner of the
endpoint must perform the ConfirmSubscription action to actually subscribe itself.
For emails, this is a confirmation email, asking them if they’d like to subscribe. For HTTP, it’s a bit
different, which is covered a bit later in the chapter. For SMS, you don’t need to do this, and this is
a feature we’ll be utilizing in a later chapter.

Publishing Endpoints

Once we can subscribe to a topic, we’ll want to make an endpoint that will actually send out the
notifications to the subscribers:

app.post('/send', (req, res) => {
    let params = {
        // Parameters depend on your endpoint type
    };

    sns.publish(params, function(err, data) {
        if (err) console.log(err, err.stack);
        else console.log(data);
    });
});

Just like last time, the parameters depend on which type of endpoint will be receiving the notifi-
cation. These will be a combination of the TopicArn, TargetArn, PhoneNumber, Message, Subject,
MessageStructure and MessageAttributes parameters.

The same possibility of using promises is present here as well:

let promiseResult = sns.publish(params).promise();

promiseResult.then(data => {
    console.log(data);
}).catch(err => {
    console.error(err, err.stack);
});
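As a side note, the PhoneNumber parameter from that list lets you publish a one-off SMS directly to a single number, with no topic or subscription involved - a minimal sketch, with a placeholder number:

// Publish straight to a phone number instead of a topic
let promiseResult = sns.publish({
    PhoneNumber: '<NUMBER_WITH_COUNTRY_CODE_HERE>',
    Message: 'A direct SMS from SNS'
}).promise();

promiseResult.then(data => {
    console.log(data);
}).catch(err => {
    console.error(err, err.stack);
});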

Now, let’s go one-by-one and implement the nuanced subscription and publishing endpoints.

Email Endpoint

Let’s start off with a handler that subscribes an email to the topic:

app.post('/subscribe-email', (req, res) => {
    let params = {
        Protocol: 'EMAIL',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: req.body.email
    };

    let promiseResult = sns.subscribe(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

Once this is in, start your server again. You’ll need to send a request with a JSON body to your
application - you can do this with tools like Postman, or if you prefer you can do it on the CLI:

$ curl -X POST -H "Content-type: application/json" -d "{\"email\" : \"<EMAIL_ADDRESS>\"}" "localhost:3000/subscribe-email"

Note: If you receive an InvalidParameter: Invalid Parameter: TopicArn error, the first thing you should check is your region. If the Topic is set to a different region than the one in your credentials file, the parameter will be invalid.
If the endpoint and message are correct, that email address will receive an email asking you to confirm your subscription - any subscription created via AWS SNS needs to be confirmed by the endpoint in some form, otherwise AWS could be used maliciously for spam or DDoS-type attacks.
You’ll also be greeted with a JSON response from AWS SNS:

{
  ResponseMetadata: { RequestId: '2c22c687-79b2-5b5e-bea8-9e9d174bd34e' },
  SubscriptionArn: 'pending confirmation'
}

Now, depending on your email service provider, the confirmation email might end up in “Spam” so
make sure to check that inbox as well:

And once you confirm the subscription:

Now, let’s check if the subscription is indeed there on our AWS dashboard:

Confirming the subscription sends the ConfirmSubscription action to SNS.


Okay, now that we've confirmed the successful subscription of an endpoint, let's make an endpoint that sends out the notification to it via email:

app.post('/send', (req, res) => {
    let params = {
        Message: req.body.message,
        Subject: req.body.subject,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

Again, let’s take a look at what the parameters here are made of:

• Message: This is the message you want to send - in this case, it would be the body of the email.
• Subject: This field is only included because we’re sending an email - this sets the subject of
the email.
• TopicArn: This is the Topic that we’re publishing the message to - this will publish to every
email subscriber for that topic.

You can send a message now using Postman, or curl - so long as we’re passing in our parameters
for the subject and message:

$ curl -X POST -H "Content-type: application/json" -d "{\"subject\" : \"Hello There!\", \"message\" : \"You just received an email from SNS!\"}" "localhost:3000/send"

This is the response we get from our application:

{
  ResponseMetadata: { RequestId: 'd6ec24b4-c6a0-53ae-a687-7caeb78b3a99' },
  MessageId: '0a6dc748-ae52-5172-a14d-9f52df79d845'
}

Once this request is made, all subscribers to the endpoint should receive this email! Congratulations,
you’ve just published your first message using SNS and Node!

SMS Endpoint

If you remove the subject field, you can send 160 character SMS messages to any subscribed phone
number(s). Let’s make a /subscribe-sms handler:

app.post('/subscribe-sms', (req, res) => {
    console.log(req);
    let params = {
        Protocol: 'SMS',
        TopicArn: '<YOUR_TOPIC_ARN_HERE>',
        Endpoint: req.body.number
    };

    let promiseResult = sns.subscribe(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

Restart your application, and then make a request to your app with the phone number you want to
subscribe with:

$ curl -X POST -H "Content-type: application/json" -d "{\"number\": \"<YOUR_NUMBER_WITH_COUNTRY_CODE_HERE>\"}" "localhost:3000/subscribe-sms"

Make sure the number you send begins with + followed by your country code. Since we also log the incoming request at the top of the handler, the Node application will print a lengthy IncomingMessage object for this:

IncomingMessage {
  _readableState: ReadableState {
    objectMode: false,
    highWaterMark: 16384,
    buffer: BufferList { head: null, tail: null, length: 0 },
    length: 0,
    pipes: null,
    pipesCount: 0,
    flowing: true,
    ended: true,
    endEmitted: true,
    reading: false,
    sync: false,
    needReadable: false,
    emittedReadable: false,
    ...

And we can see the number in our subscription list, alongside the email:

Note that we didn’t have to confirm this subscription. It’s approved automatically.
Finally, let’s create the endpoint to send an SMS to any subscribed phone(s):

app.post('/send-sms', (req, res) => {
    let message = req.body.message;
    let params = {
        Message: message,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

Restart your app, and then send a request to it with a message parameter in the body:

$ curl -X POST -H "Content-type: application/json" -d "{\"message\": \"You just received an SMS from SNS!\"}" "localhost:3000/send-sms"

This results in a JSON response from SNS:

{
  ResponseMetadata: { RequestId: '64ad9304-3a97-5d9e-b5f1-cb1e4ed66546' },
  MessageId: 'eebc8987-5df5-5e18-8647-d19d29b4f6c0'
}

And any subscribed numbers should receive an SMS shortly after! As a point of note, this will
come from a sender called 'NOTICE', and if you’ve used any other services that are using a minimal
configuration of SNS, your message might get thrown into the same text chain:

SMS Message

It’ll typically be bundled with Google verification numbers or confirmations of orders online.
Depending on your phone and provider, the message will also include the Display Name from your
Topic:

Note: SNS doesn’t differentiate between SMS and email subscribers with this setup. Since we have
two subscribers, one using an email and one using a phone number - both subscribers will get the
same message on their respective endpoints.
Since there’s no subject field, the email subscribers will receive a generic AWS Notification Message
subject. Just keep this in mind if you support both SMS and email subscribers for your application.
This issue is addressed in the “Mixed Messaging with MessageStructure” section.

HTTP Endpoint

You could set up a service designed to receive the message, in case you want to trigger a background
action, but don’t necessarily care about immediately receiving the result.
Because we don’t want to send the contents of a GET request as an email, let’s create a new topic and
subscription - we’re going to call this one myHTTPTopic, but otherwise it’s the same flow as before.
The subscription is a little trickier without setting up the application on a server - we’re going to
use ngrok instead. If you’re not familiar with ngrok - it can expose your local application to the
public eye. It’s typically used to test out applications, build webhook integrations or send previews
to clients.
Download ngrok⁷ and unzip it. Then, we’ll want to create a new shell session (keep the current one
open as well, we’ll need that to run our app) and navigate to the directory with ngrok in it.
Run ngrok with:

⁷https://ngrok.com/download

$ ngrok http 3000

You should see something like:

ngrok

That URL is now exposing your port 3000 to the world. To test that this is working, copy the HTTP
endpoint (highlighted in the above screenshot), and paste it into a browser and append /status to
it (for example: http://25fb73b0.ngrok.io/status). This should take you to the status endpoint of your
application.
We will need to leave ngrok running for the rest of this section - if you face any issues, make sure
that ngrok is still running, and that the URL hasn’t changed. If it does, you will need to start from
this point again.
Much like the email, we need to first add and confirm a subscriber. We’ll do this manually for now
so that we get an understanding of each step of the process, but in production you might want to
build services that can subscribe themselves automatically.
First, we need to add the following code to our Express app. This will log a confirmation URL that
we’re expecting to receive:

app.post('/http-subscribe', (req, res) => {
    let body = '';

    req.on('data', (chunk) => {
        body += chunk.toString()
    })

    req.on('end', () => {
        console.log(body);
        let payload = JSON.parse(body)

        if (req.headers['x-amz-sns-message-type'] == 'SubscriptionConfirmation') {
            console.log(payload.SubscribeURL);
        }

        // Acknowledge receipt so SNS doesn't retry the delivery
        res.sendStatus(200);
    });
});

This code will take in the request that comes into your application, and check to see if it’s a
SubscriptionConfirmation message from SNS.

Run your app, make sure that ngrok is still up, and copy the ngrok URL. Next, let's move back to AWS and create a new subscription, selecting the new topic, and setting the protocol as HTTP. Set the endpoint to the ngrok URL of your new /http-subscribe endpoint - it should resemble the following:

Once you’ve created this, you should receive a ‘confirmation request’ to your running node app, like
the following:

SNS App listening on port 3000!

https://sns.eu-west-2.amazonaws.com/?Action=ConfirmSubscription&TopicArn=arn:aws:sns:<ID>2:myHTTPTopic&Token=2336412f37fb687f5d51e6e2425f004aec6ce3301ac7dac40aa09e29fea4ec61b232a223b79066839ab26459cal951b9c03da61d30d44b637fc4638994a8aff1407bdcl3b7272a67daa607b015a3b5c42cfffbc77b63f430243880f3adlf90499251a504a33db7c362f052958f600632

Copy and paste this URL into a browser and your new endpoint will be confirmed. You can check if
this was successful by navigating to the ‘Subscriptions’ page in AWS SNS and checking the ‘Status’
of the subscription:
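Pasting the URL into a browser works for a manual walkthrough, but a service could also confirm itself programmatically with the SDK's confirmSubscription call - a sketch of what could replace the console.log inside the SubscriptionConfirmation branch:

// Confirm the subscription using the token from the payload
let confirmPromise = sns.confirmSubscription({
    TopicArn: payload.TopicArn,
    Token: payload.Token
}).promise();

confirmPromise.then(data => {
    console.log(`Confirmed subscription: ${data.SubscriptionArn}`);
}).catch(err => {
    console.error(err, err.stack);
});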

Next, we’re going to add some logic for dealing with actual notifications sent. Let’s update our
http-subscribe endpoint:

app.post('/http-subscribe', (req, res) => {
    let body = '';

    req.on('data', (chunk) => {
        body += chunk.toString()
    })

    req.on('end', () => {
        console.log(body);
        let payload = JSON.parse(body)

        if (req.headers['x-amz-sns-message-type'] == 'SubscriptionConfirmation') {
            console.log(payload.SubscribeURL);
        }

        if (req.headers['x-amz-sns-message-type'] === 'Notification') {
            console.log(payload.Message);
        }

        // Acknowledge receipt so SNS doesn't retry the delivery
        res.sendStatus(200);
    });
});

To test this, with ngrok and our app still running, let’s manually publish a message through SNS.
Head to ‘Topics’ in AWS SNS, and click on your new HTTP topic. On the following page, click
‘Publish Message’. On the following page scroll down and enter a message:

Then scroll to the bottom and hit ‘Publish Message’. If you come back to the shell session your app
is running in, you should see something like the following:

We’ve just logged the payload’s message in this example, though you can build pretty complex logic
to react differently to different kinds of notifications.
For instance, a Payment notification can kick off a payment processing job, or an UpdateDB notification could update a database record - as sketched below.
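A sketch of what that dispatch could look like, assuming we design our publishers to send JSON message bodies with a type field of our own choosing:

if (req.headers['x-amz-sns-message-type'] === 'Notification') {
    // Assumes the published Message is itself a JSON string
    let message = JSON.parse(payload.Message);

    switch (message.type) {
        case 'Payment':
            // kick off a payment processing job
            break;
        case 'UpdateDB':
            // update a database record
            break;
        default:
            console.log(`Unhandled notification type: ${message.type}`);
    }
}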

Message Templating

Seeing as your message is a string, you could use string interpolation for dynamic input - for example:

app.post('/send', (req, res) => {
    let now = new Date().toString();
    let email = `${req.body.message} \n \n This was sent: ${now}`;
    let params = {
        Message: email,
        Subject: req.body.subject,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

Mixed Messaging with MessageStructure

If you have subscribers with differing endpoints - such as SMS and email subscribers, you might
want to send different messages to each.
For example, email messages can have entire templated pages, while SMS subscribers get a link
to a promotion. To achieve this, we’ll use the MessageStructure parameter and change our /send
endpoint a bit:

app.post('/send', (req, res) => {
    let message = {
        'default' : 'SNS Notification',
        'email' : 'Hello from SNS on email',
        'sms' : 'Hello from SNS on SMS'
    };

    let params = {
        Message: JSON.stringify(message),
        MessageStructure: 'json',
        Subject: req.body.subject,
        TopicArn: '<YOUR_TOPIC_ARN_HERE>'
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(data);
        // Respond to the caller so the request doesn't hang
        res.send(data);
    }).catch(err => {
        console.error(err, err.stack);
    });
});

If you want to send different messages to multiple endpoint types, you'll include MessageStructure in your parameter list. It accepts a single value - json. When it's present, Message must be a validly stringified JSON object.
Here, we've defined a message JSON object that has a default value of SNS Notification. Other than that, it has email and sms options - depending on the endpoint type, the values from these will be used.
For email subscribers, the message is Hello from SNS on email, while SMS subscribers get a Hello
from SNS on SMS. When we rerun the app and hit the endpoint with a POST request:

$ curl -X POST "localhost:3000/send"

The app responds with:

{
  ResponseMetadata: { RequestId: '39d2fcd3-ce43-50cc-85ad-a4c9b55a7336' },
  MessageId: 'ce3f9248-f593-5bf4-b4a7-ca528e8a980e'
}

And we’ve received different messages, for the same push notification on our topic:

Lambda Endpoint

In a similar vein, you could use these messages to trigger and hand inputs to Lambda functions. This
might kick off a processing job, for example. AWS Lambda is covered in a later chapter.

SQS Endpoint

With the SQS endpoint type, you could put messages into queues to build out event-driven architec-
tures - AWS SQS is covered in the next chapter.
7. AWS Simple Queue Service (SQS)
With the increased complexity of modern software systems came the need to break up systems that had outgrown their initial size. This increase in complexity made systems harder to maintain, update, and upgrade.
This paved the way for microservices that allowed massive monolithic systems to be broken down
into smaller services that are loosely coupled but interact to deliver the total functionality of the
initial monolithic solution.
It is in these microservice architectures that queueing systems come in handy, facilitating the communication between the separate services that make up the entire setup.
In this chapter, we will dive into AWS Simple Queue Service (SQS) and demonstrate how we can
leverage its features in a microservice environment.

What is Message Queueing?


Before the internet and email came into the picture, people over long distances communicated mostly
through the exchange of letters. The letters contained the messages to be shared and were posted at
the local post office station from where they would be transferred to the recipient’s address.
This might have differed from region to region but the idea was the same. People entrusted
intermediaries to deliver their messages for them, as they went ahead with their lives.
When a system is broken down into smaller components or services that are expected to work
together, they will need to communicate and pass around information from one service to another,
depending on the functionality of the individual services.
Message queueing facilitates this process by acting as the “post office service” for microservices.
Messages are put in a queue and the target services pick up and act on the ones addressed to them.
The messages can contain anything - such as instructions on what steps to take, the data to act upon
or save, or asynchronous jobs to be performed.
Message queueing is a mechanism that allows components of a system to communicate and
exchange information in an asynchronous manner. This means that the loosely coupled systems
do not have to wait for immediate feedback on the messages they send and they can be freed up
to continue handling other requests. When the time comes and the response is required, the service
can look for the response in the message queue.
Here are some examples of popular message queues or brokers:

• Amazon Simple Queue Service⁸ - which is the focus of this chapter.


⁸https://aws.amazon.com/sqs/

• RabbitMQ⁹ - Open-source and provides asynchronous messaging capabilities.
• Apache Kafka¹⁰ - Distributed streaming platform that supports the pub/sub mode of interaction.
• Others include Apache RocketMQ¹¹, NSQ¹², and HornetQ¹³.

Use-Cases of Message Queueing


Message queues are not needed for every system out there, but there are certain scenarios in which
they are worth the effort and resources required to set up and maintain. When utilized appropriately,
message queues are advantageous in several ways.
First, message queues support the decoupling of large systems by providing the communication
mechanism in a loosely-coupled system.
Redundancy is bolstered through the usage of message queues by maintaining the state in case a
service fails. When a failed or faulty service resumes operations, all the operations it was meant to
handle will still be in the queue and it can pick them up and continue with the transactions, which
could have been otherwise lost.
Message queueing facilitates batching of operations such as sending out emails or inserting records
into a database. Batch instructions can be saved in a queue and all processed at the same time in
order, instead of being processed one by one.
Queueing systems can also be useful in ensuring the consistency of operations by ensuring that
they are executed in the order they were received. This is especially important when particular
components or services of a system have been replicated in order to handle an increased load. This
way, the system will scale well to handle the load while also ensuring that processed transactions
are consistent and in order, since all the replicated services fetch their instructions from the
message queue, which acts as the single source of truth.

Amazon Simple Queue Service - SQS


AWS Simple Queue Service (SQS) is a message queueing solution that is distributed and fully
managed by Amazon. SQS allows us to send and receive messages or instructions between software
components enabling us to implement and scale microservices in our systems without the hassle of
setting up and maintaining a queueing system ourselves.
Like other AWS services, SQS scales dynamically based on demand, while ensuring the security of
the data passed through it via (optional) encryption of the messages.
To explore the Amazon Simple Queue Service, we will create a decoupled system in Node.js, in which
each component will interact with the others by sending and retrieving messages from SQS.
⁹https://www.rabbitmq.com/
¹⁰https://kafka.apache.org/
¹¹https://rocketmq.apache.org/
¹²https://nsq.io/
¹³https://hornetq.jboss.org/

Imagine that we are a small organization that doesn’t have the bandwidth to handle orders as they
come in. We have one service to receive users’ orders and another that will deliver all orders posted
that day to our email inbox at a certain time of the day for batch processing.
All orders will be stored in the queue until they are collected by our second service and delivered to
our email inbox.
Our microservices will consist of simple Node.js APIs - one that receives the order information from
users and another that sends confirmation emails to the users.
Sending email confirmations asynchronously through the messaging queue will allow our orders
service to continue receiving orders despite the load since it does not have to worry about sending
the emails.
Also, in case the mail service goes down, once brought back up, it will continue dispatching emails
from the queue, therefore, we won’t have to worry about lost orders.

Setting up an SQS Queue


For this project, we’ll install the AWS CLI tool in order to interact with our AWS resources from our
local machines. Instructions to install the AWS CLI tool on multiple platforms can be found here¹⁴.
Let’s configure the AWS CLI by running:

$ aws configure

We will get a prompt to fill in our Access Key ID, Secret Access Key, and default regions and
output formats. The last two are optional but we will need the access key and secret.
Note: You can perform this step via the credentials/config files through an IAM user, or by simply
giving access to AWS like in the last two chapters. This is just another option.
With our AWS account up and running, and the AWS CLI configured, we can set up our AWS Simple
Queue Service by navigating to the SQS home page:
¹⁴https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html

Here, we’ve specified the queue name, followed by the queue type - nodeshop.fifo. We’ll want
to use a FIFO queue for this purpose, since we want the orders to be processed in the same order
they came in. In the case of a Standard Queue, it’ll try its best to maintain the order, but it’s not
guaranteed.
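
If you’d rather script this than click through the console, the same queue can also be created with the CLI we configured above - a sketch using this chapter’s queue name:

$ aws sqs create-queue \
    --queue-name nodeshop.fifo \
    --attributes FifoQueue=true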
Here’s a visual representation of the difference between these two queues:

The Standard Queue is better suited for projects that prioritize throughput over the order of events.
A FIFO Queue is better suited for projects that prioritize the order of events.
Once we have chosen the kind of queue we require, let’s specify some options for our queue, such as
the message retention period (how long the messages are kept in the queue before being deleted),
the visibility timeout (the time during which a message received by one consumer is invisible to
other consumers) or the maximum message size:

We’ll leave these options at their defaults, since they’re well-suited for most cases. Other than that,
we can set the access policy, encryption, tags and the dead-letter queue.
The first three are optional and common across AWS services. The dead-letter queue
(DLQ) is a queue that contains the messages from the queue that couldn’t be consumed by a
consumer. Essentially, it can be used to identify which messages from the queue are problematic,
isolate them and modify the code if need be.
By default, this option is Disabled, though you can activate it with the click of a button.
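
Under the hood, enabling a DLQ amounts to setting a RedrivePolicy attribute on the source queue. From the CLI, that might look like the following sketch - the queue URL and DLQ ARN placeholders are whatever yours happen to be, and for a FIFO queue the dead-letter queue must itself be FIFO:

$ aws sqs set-queue-attributes \
    --queue-url <QUEUE_URL> \
    --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"<DLQ_ARN>\",\"maxReceiveCount\":\"5\"}"}'

Here, maxReceiveCount of 5 means a message moves to the DLQ after five failed receives.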
When you create the queue, make note of the queue’s URL:

This URL will be used as the contact-point from our application.
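
If you ever misplace it, the URL can also be retrieved through the CLI:

$ aws sqs get-queue-url --queue-name nodeshop.fifo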


With our queue ready, we can now create our Node.js APIs that will read from and write to it. Let’s
set up our folder structure and initialize the project:

$ mkdir nodeshop_apis
$ cd nodeshop_apis
$ mkdir orderssvc emailssvc
$ npm init -y

We’ve created our project directory, inside of which we have a directory for our Orders Service and
Emails Service, and initialized a Node project with the default settings.

Setting up the Node APIs

We will build the Orders service first since it is the one that receives the orders from the users and
posts the information onto the queue. Our Emails service will then read from the queue and dispatch
the emails.
Same as last time, we’ll use Express to bootstrap an application. We will also install the body-parser
middleware to handle and validate request data:

$ npm install express body-parser --save

Since we will have multiple services that will be running at the same time, we will also install
the npm-run-all package to help us start up all our services at the same time and not have to run
commands in multiple terminal windows:

$ npm install npm-run-all --save

With npm-run-all installed, let us now tweak the scripts entry in our package.json file to include
the commands to start our services and one command to run them all:

{
    // Truncated for brevity...
    "scripts": {
        "start-orders-svc": "node ./orderssvc/index.js 8081",
        "start-emails-svc": "node ./emailssvc/index.js",
        "start": "npm-run-all -p -r start-orders-svc"
    },
    // ...
}

We will add the start-orders-svc and start-emails-svc commands to run our Orders and Emails
services respectively. We will then configure the start command to execute them both using
npm-run-all. Currently, the start command only runs the start-orders-svc script, since that’s
the only one that’ll exist at the time we run it.
The start-emails-svc will be added to this call as soon as it’s created. It’s also worth noting that
we’ve added a command-line argument for the port on the order service script.
With this setup, running all our services will be as easy as executing the following command:

$ npm start

Let’s create an index.js file in the orderssvc directory:

$ touch index.js

And then we can create our orders API within it:

const express = require('express');
const bodyParser = require('body-parser');

const port = process.argv.slice(2)[0];
const app = express();

app.use(bodyParser.json());

app.get('/index', (req, res) => {
    res.send('Welcome to NodeShop Orders.')
});

console.log(`Orders service listening on port ${port}`);
app.listen(port);

Here, we read the port number from the command-line argument supplied by the script in
package.json, and run a standard Express app. The /index endpoint will respond by simply sending
a welcome message.
We’ll start the app by running the npm start command and interact with our APIs using Postman:

[Image: Postman GET request]

Or you can use curl:

$ curl localhost:8081/index

Which also returns:



Welcome to NodeShop Orders.

We will implement the Emails service later on. For now, our Orders service is set up and we can
now implement our business logic.

Orders Service

The Orders service will receive orders via a route and controller that handles the input. It’ll process
the input and write it to the SQS queue, where the order will be stored until it’s called upon to be
processed at a later date.
Before implementing the controller, let’s install the AWS SDK for this service:

$ npm i aws-sdk --save

In the already existing index.js file, let’s set up AWS SQS:

// ./orderssvc/index.js

// Code removed for brevity...

// Import the AWS SDK
const AWS = require('aws-sdk');

// Configure the region
AWS.config.update({region: 'us-east-1'});

// Create an SQS service object
const sqs = new AWS.SQS({apiVersion: '2012-11-05'});
const queueUrl = 'SQS_QUEUE_URL';

We’ll be adding a new endpoint after this. Our new /order endpoint will receive a payload that
contains the order data and send it to our SQS queue using the AWS SDK:

app.post('/order', (req, res) => {

    // Extract order data from the request
    let orderData = {
        'userEmail': req.body['userEmail'],
        'itemName': req.body['itemName'],
        'itemPrice': req.body['itemPrice'],
        'itemsQuantity': req.body['itemsQuantity']
    }

    // Construct SQS message data from our order
    let sqsOrderData = {
        MessageAttributes: {
            'userEmail': {
                DataType: 'String',
                StringValue: orderData.userEmail
            },
            'itemName': {
                DataType: 'String',
                StringValue: orderData.itemName
            },
            'itemPrice': {
                DataType: 'Number',
                StringValue: orderData.itemPrice
            },
            'itemsQuantity': {
                DataType: 'Number',
                StringValue: orderData.itemsQuantity
            }
        },
        // Stringify the JSON contents
        MessageBody: JSON.stringify(orderData),
        // Ensure the message sent to this email isn't a duplicate
        MessageDeduplicationId: req.body['userEmail'],
        MessageGroupId: 'UserOrders',
        // Set the URL to our queue
        QueueUrl: queueUrl
    };

The AWS SDK requires us to build a payload object specifying the data we are sending to the queue -
in our case, we define it as sqsOrderData. AWS’ MessageAttributes field contains information about
the message’s attributes, in JSON format.
Each attribute consists of a name, a DataType and a StringValue. For example, userEmail is of the
String type and holds the orderData.userEmail value.

Once all message attributes are defined, we stringify the JSON contents and store them as the MessageBody.
This is the body of the message we send off to SQS. Additionally, we set the MessageDeduplicationId,
which is the token used for deduplication. If a message is sent successfully, its MessageDeduplicationId
is saved, and for the next five minutes you can’t send another message with that same MessageDeduplicationId.
Here, we’ve set the ID to the user’s email.
Once we’ve added all the message attributes we need for our application to process the order, we
can go ahead and send the message to the queue:

    // Send the order data to the SQS queue
    let sendSqsMessage = sqs.sendMessage(sqsOrderData).promise();

    sendSqsMessage.then((data) => {
        console.log(`OrdersSvc | SUCCESS: ${data.MessageId}`);
        res.send('Thank you for your order. Check your inbox for the confirmation email.');
    }).catch((err) => {
        console.log(`OrdersSvc | ERROR: ${err}`);
        res.send('We ran into an error. Please try again.');
    });
});

The sendMessage() function will send our message to the queue using the credentials we used to
configure the AWS CLI (or another configuration approach). Finally, we wait for the response and
notify the user that their order has been received successfully and that they should check for the
email confirmation.
To test the Orders service, we run npm start and send the following payload to localhost:8081/order:

{
    "itemName": "Phone case",
    "itemPrice": "10",
    "userEmail": "[email protected]",
    "itemsQuantity": "2"
}
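
If you’re not using Postman, the same request can be fired from the terminal with curl - a sketch with a placeholder email address:

$ curl -X POST localhost:8081/order \
    -H 'Content-Type: application/json' \
    -d '{"itemName": "Phone case", "itemPrice": "10", "userEmail": "<YOUR_EMAIL>", "itemsQuantity": "2"}'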

From the returned message on the user-end:

Thank you for your order. Check your inbox for the confirmation email.

As well as the output of our app on the server-end:

Orders service listening on port 8081
OrdersSvc | SUCCESS: 6bef302d-9a36-45d5-83c5-8598d18dc7a1

It looks like our order has been processed successfully. Let’s take a look at the SQS dashboard:

On the top right, go to “Send and receive messages”:

This is where you can send messages manually if you’d like.


At the bottom of the page, within “Receive Messages”, select “Poll for messages”, which will load in
the messages in the queue. After that, you can open the message we’ve sent:

Here, you can see the information we’ve provided regarding the order, as well as some additional
data about the message itself. Our Orders service has been able to receive a user’s order and
successfully send the data to our SQS queue.

Email Service

Our Orders service is ready and already receiving orders from users. The Emails service is
responsible for reading the messages stored in the queue and dispatching confirmation emails to the
users. This service is not notified when orders are placed and therefore has to keep checking the
queue for any new orders.
To ensure that our Emails service continually checks for new orders, we will use the sqs-consumer
library, which periodically polls the queue for new orders so that we can dispatch the emails to the
users. sqs-consumer will also delete the messages from the queue once it has successfully
processed them.
Note: If a consumer doesn’t consume a message from SQS within the “holding period”, or rather, the
message retention time, it’ll be deleted.
We’ll start by installing the sqs-consumer library:

$ npm i sqs-consumer --save

To send emails, we’ll use the nodemailer library, which also has to be installed:

$ npm i nodemailer --save

Then, let’s create a new index.js file for this service in the emailssvc directory:

$ touch index.js

And within this file, let’s first set up the AWS SDK and require the sqs-consumer:

const AWS = require('aws-sdk');
const { Consumer } = require('sqs-consumer');
const nodemailer = require('nodemailer');

// Configure the region
AWS.config.update({region: 'us-east-1'});

const queueUrl = 'SQS_QUEUE_URL';

You can use any email provider for your service. For example, we’ll be using Gmail. You’ll need to
provide the email address and password of the account you’re sending from in the auth object:

// Configure Nodemailer to use Gmail
let transport = nodemailer.createTransport({
    service: 'gmail',
    port: 587,
    auth: {
        user: 'Email address',
        pass: 'Password'
    }
});

Depending on your provider, you might have to explicitly allow this app to log in. For example, in
Gmail you must set the option “Allow less secure apps” to “On”, otherwise, the login will be blocked.
With nodemailer ready to go, let’s define a sendMail() function:

function sendMail(message) {
    let sqsMessage = JSON.parse(message.Body);
    const emailMessage = {
        from: 'sender_email_address', // Sender address
        to: sqsMessage.userEmail, // Recipient address
        subject: 'Order Received | NodeShop', // Subject line
        html: `<p>Hi ${sqsMessage.userEmail}.</p> <p>Your order of ${sqsMessage.itemsQuantity} ${sqsMessage.itemName} has been received and is being processed.</p> <p>Thank you for shopping with us!</p>` // HTML body
    };

    return new Promise((resolve, reject) => {
        transport.sendMail(emailMessage, (err, info) => {
            if (err) {
                console.log(`EmailsSvc | ERROR: ${err}`);
                return reject(err);
            } else {
                console.log(`EmailsSvc | INFO: ${info.response}`);
                return resolve(info);
            }
        });
    });
}

The sendMail() function starts off by accepting the message from the queue. It parses the JSON
contents of the message’s Body. This is the same message we saw in the SQS dashboard a bit back.
Then, we’ll construct our own emailMessage for the user.
Here, you can put any email address you’d like to send from, such as [email protected], or any
other address you have access to. We’ll be sending the email to the userEmail, extracted from the
sqsMessage. Finally, we set the subject and html body of the email, which contains information
about the order.
Using nodemailer’s transport, we then send the email to our user/customer. Now that we can use
an SQS message and send an email to the user it’s tied to, let’s use the sqs-consumer to read from
the queue, and invoke this function on each message retrieved from the queue.
We’ll create a Consumer instance, using the queueUrl. It accepts the handleMessage function, which
is called whenever a message is received:

// Create our consumer
const app = Consumer.create({
    queueUrl: queueUrl,
    handleMessage: async (message) => {
        await sendMail(message);
    },
    sqs: new AWS.SQS()
    // batchSize: 10
});

app.on('error', (err) => {
    console.error(err.message);
});

app.on('processing_error', (err) => {
    console.error(err.message);
});

console.log('Emails service is running');
app.start();

We’ve created a new sqs-consumer application by using the Consumer.create() function and
provided the queue URL and the function to handle the messages fetched from the SQS queue. When
a message is received, we’ve decided to call sendMail() on that message.
Here, you can also specify the batchSize, which defines the size of the batches in which messages
are handled. By default, each message is handled as it arrives. If you set a batchSize, up to that
many messages are received from the queue and processed together.
Our Emails service is now ready. To integrate it into our execution script, we will simply modify
the scripts option in our package.json:

{
    // Truncated for brevity...
    "scripts": {
        "start-orders-svc": "node ./orderssvc/index.js 8081",
        "start-emails-svc": "node ./emailssvc/index.js",
        // Update this line
        "start": "npm-run-all -p -r start-orders-svc start-emails-svc"
    },
    // ...
}

Running our services with:



$ npm start

Will greet us with:

Emails service is running
Orders service listening on port 8081

As soon as we send a payload to localhost:8081/order, such as:

{
    "itemName": "Phone case",
    "itemPrice": "5",
    "userEmail": "[email protected]",
    "itemsQuantity": "3"
}

We’ll be greeted with:

OrdersSvc | SUCCESS: 34eac63d-cb55-44f9-ba67-092507c62b1d

Followed by:

EmailsSvc | INFO: 250 2.0.0 OK 1604365997 t7sm23066452wrx.42 - gsmtp

This means that our Emails service picked up the order from the queue, sent out the email and got
a 2.0.0 OK response from the email service provider.
If we check the inbox of the user we’ve sent in the payload, we’ll see the confirmation email:
8. Pairing SNS and SQS Together
In this chapter, we’ll reflect on the past two chapters and build an application that pairs together
AWS SNS and AWS SQS to process orders from an online shop.

Demo Application
For the demo project, we will enhance the Node Shop project that we built in the previous chapter,
and use SNS to send the order notifications to users instead.
In the existing project, we set up an SQS queue that received an order’s details via the Orders service,
and sent our emails via the Emails service, which picked the orders up from the queue. In that case,
we had to implement our own email messaging functionality to notify users that we received their
orders. This can be replaced with SNS, as it naturally pairs with SQS and replaces the need for us to
implement our own email handling system.
An added advantage is that we can allow our users to use their phone numbers instead of emails
when placing orders and we won’t have to implement a separate service to handle SMS delivery. Also,
depending on which type of application you’re running, you might prefer using phone numbers as
a sort of identification.
Also, remember from the SNS chapter - you don’t have to subscribe a phone number in advance,
like emails, and you can target individual phone numbers to send messages to.

Implementation
We will start by setting up a NodeShopTopic on SNS, just as we set up the topic in the SNS chapter:

Our previous project had an Orders service that received the details of a user’s order and an Emails
service that picked these details off the queue and dispatched the emails to the users.
Since we’ll be working with phone numbers, instead of emails, let’s update the /order endpoint.
We’ll want to extract the userPhone from the request, and pack it into a variable, amongst other
order data:

app.post('/order', (req, res) => {

    let orderData = {
        // Adding the `userPhone`
        'userPhone': req.body['userPhone'],
        'itemName': req.body['itemName'],
        'itemPrice': req.body['itemPrice'],
        'itemsQuantity': req.body['itemsQuantity']
    }

Then, using this orderData, we’ll construct an sqsOrderData object:



    let sqsOrderData = {
        MessageAttributes: {
            // `userPhone` as a MessageAttribute
            'userPhone': {
                DataType: 'String',
                StringValue: orderData.userPhone
            },
            'itemName': {
                DataType: 'String',
                StringValue: orderData.itemName
            },
            'itemPrice': {
                DataType: 'Number',
                StringValue: orderData.itemPrice
            },
            'itemsQuantity': {
                DataType: 'Number',
                StringValue: orderData.itemsQuantity
            }
        },
        MessageBody: JSON.stringify(orderData),
        // Changed from `userEmail` to `userPhone`
        MessageDeduplicationId: req.body['userPhone'],
        MessageGroupId: 'UserOrders',
        QueueUrl: queueUrl
    };

Finally, we’ll call the sendMessage() function on the sqs object, with this sqsOrderData:

    // Send the order data to the SQS queue
    let sendSqsMessage = sqs.sendMessage(sqsOrderData).promise();

    sendSqsMessage.then((data) => {
        console.log(`OrdersSvc | SUCCESS: ${data.MessageId}`);
        res.send('Thank you for your order. Check your phone for an SMS with the confirmation details.');
    }).catch((err) => {
        console.log(`OrdersSvc | ERROR: ${err}`);
        res.send('We ran into an error. Please try again.');
    });
});

Much like in the previous chapter, we’ve used the phone number to identify the messages in the
queue. We’ll also use this same phone number to send a confirmation SMS message using SNS.
To this end, we’ll have an SMS service which picks up each order and simply passes it on to the SNS
topic we’ve recently created. It will contain an sqs-consumer which is used to read messages from
the SQS queue. Once a new order is received and passed along, the SMS service publishes it to the
NodeShopTopic.

If you’re working in a new project, you’ll have to install the dependencies from the previous chapter
again. In any case, we’ll create a new file to host our code for the SMS service. Let’s start off with
importing the required packages:

const AWS = require('aws-sdk');
const { Consumer } = require('sqs-consumer');

Then, we’ll define the main function of this service - the sendSMS() function:

// Configure the region
AWS.config.update({region: 'us-east-1'});
const sns = new AWS.SNS({region: 'us-east-1'});

function sendSMS(message) {
    let msg = JSON.parse(message.Body);

    let textMsg = `Hi ${msg.userPhone}. Your order of ${msg.itemsQuantity} ${msg.itemName} has been received and is being processed. Thank you for shopping with us!`

    let params = {
        Message: textMsg,
        Subject: 'Order Received | NodeShop',
        PhoneNumber: msg.userPhone,
    };

    let promiseResult = sns.publish(params).promise();

    promiseResult.then(data => {
        console.log(`Message sent: ${data.MessageId}`)
    }).catch(err => {
        console.error(err, err.stack);
    });
}

It accepts a message from SQS and parses its JSON contents. We construct an update message
for the user and set up a couple of parameters, which we then feed into the sns instance to publish
the message.

Now, this just handles what happens when the SQS message arrives. To receive a message from SQS,
we’ll use the sqs-consumer instance to consume information from the queue. This message can then
be passed on to our sendSMS() function:

const queueUrl = 'SQS_QUEUE_URL';

const app = Consumer.create({
    queueUrl: queueUrl,
    handleMessage: async (message) => {
        await sendSMS(message);
    },
    sqs: new AWS.SQS(),
    batchSize: 10
});

This is done in batches of 10. The only thing left is to add a couple of event handlers if any exceptions
arise, and start the application up:

app.on('error', (err) => {
    console.error(err.message);
});

app.on('processing_error', (err) => {
    console.error(err.message);
});

console.log('SMS service is running');
app.start();

If we had used email addresses instead, we would first have to add the emails as subscribers to the
SNS topic. All subscribers get the same message, once a topic publishes it. This means that on each
purchase, all previous customers would get the order information as well.
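
The distinction shows up directly in the publish parameters. A sketch contrasting the two modes, assuming an sns service object like the one above, with placeholder values:

// Publish to a topic - every subscriber to the topic receives it
sns.publish({
    Message: 'Hello to all subscribers!',
    TopicArn: '<YOUR_TOPIC_ARN>'
}).promise();

// Publish directly to a single phone number - no subscription required
sns.publish({
    Message: 'Hello to just one customer!',
    PhoneNumber: '<PHONE_NUMBER_WITH_COUNTRY_CODE>'
}).promise();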
Using SMS, we can target a single individual. Let’s update our scripts from the package.json file to
include this new SMS service:

1 "scripts": {
2 "start-orders-svc": "node ./orderssvc/index.js 8081",
3 "start-sms-svc": "node ./smssvc/index.js",
4 "start": "npm-run-all -p -r start-orders-svc start-sms-svc"
5 }

Now, let’s run the service:



$ npm start

And let’s send an order in JSON format:

{
    "itemName": "Phone cases",
    "itemPrice": "10",
    "userPhone": "+2547...",
    "itemsQuantity": "2"
}

The phone number requires us to put a + sign and the country code in the payload. Let’s take a look
at the terminal to check how things have been going:

$ npm start

> [email protected] start
> npm-run-all -p -r start-orders-svc start-sms-svc

> [email protected] start-orders-svc
> node ./orderssvc/index.js 8081

> [email protected] start-sms-svc
> node ./smssvc/index.js

Orders service listening on port 8081
SMS service is running

# Order has been submitted successfully
OrdersSvc | SUCCESS: e77434ea-6452-4078-93e2-a0d7eed18631
# Message has been sent and accepted by SNS
Message sent: 4a278bba-dda3-57a9-9f2e-d3a1a2ea6585

Here, both services are run at the start, due to the updated scripts in the package.json file. Once
the order has been placed, in our case, via Postman - it’s been sent off to the SQS queue. Our SMS
service reads from this queue, and sends a message to the SNS topic, informing the user of their
purchase via SMS:
9. Database Support
Nowadays, data holds more value than it ever did in the past, and this trend will likely continue
into the future. In fact, “more value” is an understatement.

The world as we know it is data-driven.

Needless to say, AWS offers a fair bit of database support. They offer relational, key-value,
in-memory, document, wide column, graph, time series and ledger databases. Each of these has its
own use-cases and applications.
Though, relational databases are still the most prevalent type in the world. They’re used for
traditional applications, enterprise resource planning (ERP), customer relationship management (CRM),
e-commerce applications, etc.
For your relational database needs, Amazon offers:

• Amazon Aurora
• Amazon RDS
• Amazon Redshift

Amazon Aurora is a database engine supporting MySQL and PostgreSQL. Amazon RDS supports
a wider variety - MySQL, MariaDB, PostgreSQL, Oracle and Microsoft SQL Server - but if you’re
using RDS with Aurora, you’ll be limited to MySQL and PostgreSQL.
Aurora and RDS are optimized to work together, and this is a really common combination.
The benefits of using a cloud database service over hosting a database on your own server are
largely the same as those of services such as file hosting.
It’s easier to scale cloud databases by simply adding more nodes, scaling either vertically or
horizontally. Storing more data boils down to paying more for the new data you introduce, without
a step-like investment plan. You pay as you go.
Security is another huge benefit. When you host the data, you take care of that data and
implement the security measures yourself. This means that instead of implementing the functionalities of
your application, you spend time finding ways to protect it from potential attackers
and threats:

Any transfer of data between different services is prone to attacks. Cross-Site Scripting and
Man-in-the-Middle attacks can originate from the end-user. SQL injections can occur while your application
is exchanging information with the database, and these attacks also originate from the end-user. Without
proper processing, you can give your users unwanted access or privilege abuse over your database.
And a really common attack is the classic Denial of Service attack, which doesn’t even require
a flaw in the security protocols - it’s enough to simply overload the capabilities of the
database, or the entire server.
With a cloud database, you leave these issues to your service provider to deal with. Any reputable
service provider will have already handled the vast majority of security issues and concerns that
you now don’t have to.
You can rest assured that the data is secure and protected as huge players such as AWS can’t afford
to not have the world’s best security for their storage:

Of course, this doesn’t mean that you can take a hands-off approach to security. User-originating
attacks can still make their way into certain parts of your application if you’re not careful. What
this does mean is that you won’t have to take care of most of these attacks at the pain points where
they can cause the most damage.
10. AWS Relational Database Service (RDS)
It’s not an overstatement to say that information and data run the world. Almost any application,
from social media and e-commerce websites to simple time trackers and drawing apps, relies on the
very basic and fundamental task of storing and retrieving data in order to run as expected.
Amazon’s Relational Database Service¹⁵ (RDS) provides an easy way to get a database set up in
the cloud using any of a wide range of relational database technologies. It also introduces easy
administration through their management console and fast performance.
In this section, we’re going to set up a database on RDS, and store data on it with a Node application.

Setting Up an RDS Instance


First, we’re going to create our RDS instance cluster. Head to AWS and log in.
Once you’re logged in, click on ‘Services’ in the top left, and then search for ‘RDS’. You’ll be presented
with a page that looks something like this:

[Image: The RDS service page]

On the menu on the left, select ‘Databases’. This would normally display a list of RDS instance
clusters that we’ve created, but we don’t have any yet.
To create one, click the orange ‘Create database’ button. You should be presented with a page that
looks like:
¹⁵https://aws.amazon.com/rds/

[Image: The ‘Create database’ page]

AWS has recently introduced an ‘Easy create’ method for creating new RDS instances, so let’s use
that.
Under ‘Engine type’ we’ll use ‘Amazon Aurora’, which is Amazon’s own database engine optimized
for RDS. For the edition, we’ll leave this set to ‘Amazon Aurora with MySQL 5.6 compatibility’.
Under ‘DB instance size’ select the ‘Dev/Test’ option - this is a less powerful (and cheaper) instance
type, but is still more than enough for what we need it for.
The ‘DB cluster identifier’ is the name of the database cluster that we’re creating. Let’s call ours
my-node-database for now.

For the master username, leave it as admin. Finally, we have the option to have a master password
generated automatically. For the ease of this tutorial, let’s set our own.
Make sure it’s secure as this is the master username and password!

Finally, scroll down and click ‘Create database’. It takes a few minutes to fully provision an RDS
instance:

[Image: RDS instance provisioning]

Before getting started on our Node application, we need to make sure we can connect to the instance.
Select the instance you just created (it’ll be the option that ends in instance-1) and take a note of
the value under ‘Endpoint’:

On the right-hand side, under ‘VPC security groups’, click the link - this will take you to the security
group that’s been set up for the database. Security groups are essentially firewall rules as to who is
and isn’t allowed to make connections to a resource.
Currently, this one is set to only allow connections from resources that have the same security group.
Selecting the ‘Actions’ drop down at the top, navigate to ‘Edit inbound rules’. In this dialog, click
‘Add rule’. For the new rule, under ‘Type’, select ‘All traffic’. Under ‘Source’, select ‘Anywhere’.
You should end up with something that looks like this:

Add another rule that allows traffic from anywhere (0.0.0.0/0 meaning any IP address).
This dashboard is actually the EC2 dashboard, as RDS is built on top of EC2. You’re basically editing
the inbound rules for the entire instance, not just RDS. This isn’t too relevant for now, and is covered
in the chapter on EC2.
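
As an aside, the same inbound rule can be added from the CLI - a sketch that opens only MySQL’s port 3306 rather than all traffic, with a placeholder for your security group’s ID:

$ aws ec2 authorize-security-group-ingress \
    --group-id <SECURITY_GROUP_ID> \
    --protocol tcp \
    --port 3306 \
    --cidr 0.0.0.0/0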

NOTE: Even though you’ve set the inbound and outbound rules, the RDS instance most
likely still isn’t public.

To make it public, navigate to the Connectivity and Security panel, and check whether Public
Accessibility is turned on:

If not, select Modify at the top, and under Connectivity -> Additional connectivity configuration,
turn it on:

Your RDS instance should now be ready to go! Let’s write some code to interact with it.

Demo Application
In order to interact with our database, we’re going to create an API that allows us to store user
profiles, via Express. Before we do that, we need to create a table inside our RDS instance to store
data in.
Let’s create a folder, move into it and initialize a blank Node.js application with the default
configuration:

$ mkdir node-rds
$ cd node-rds
$ npm init -y

Then, let’s install the required dependencies:



$ npm i express --save
$ npm i mysql --save

And finally, we want to create two JavaScript files - one of them will be our Express app, the other
will be a single-use script to create a table in our database:

$ touch index.js
$ touch dbseed.js

Table Creation Script

Let’s start off with the dbseed.js file:

const mysql = require('mysql');

const con = mysql.createConnection({
    host: '<DB_ENDPOINT>',
    port: 3306,
    user: 'admin',
    password: '<DB_PASSWORD>'
});

con.connect(function(err) {
    if (err) throw err;
    console.log('Connected!');
    con.end();
});

Make sure to swap out <DB_ENDPOINT> for the endpoint that we noted down earlier, and fill in the
password. What this piece of code will do is attempt to connect to the database - if it succeeds, it’ll
run an anonymous function that logs ‘Connected!’, and then immediately close the connection.
We can quickly check to see if it’s properly set up by running:

$ node dbseed.js

We’re greeted with the message:

Connected!

If the message wasn’t returned, there’s likely an issue with the security settings - go back to the RDS
set-up and make sure you’ve done everything correctly.
Now that we know that we can definitely connect to our database, we’ll want to create a table. Let’s
modify our anonymous function:

con.connect(function(err) {
    if (err) throw err;
    con.query('CREATE DATABASE IF NOT EXISTS main;');
    con.query('USE main;');
    con.query('CREATE TABLE IF NOT EXISTS users(id int NOT NULL AUTO_INCREMENT, username varchar(30), email varchar(255), age int, PRIMARY KEY(id));', function(error, result, fields) {
        console.log(result);
    });
    con.end();
});

The expected output should look something like:

OkPacket {
    fieldCount: 0,
    affectedRows: 0,
    insertId: 0,
    serverStatus: 2,
    warningCount: 0,
    message: '',
    protocol41: true,
    changedRows: 0
}

Now that we’ve got a table to work with, let’s set up the Express app to insert and retrieve data from
our database.

Create/Insert Endpoint

Let’s set up the boilerplate code for our Express app and define a POST request handler that we’ll use
to create users, based on the information from the request:

const express = require('express');
const app = express();
const mysql = require('mysql');

const port = 3000;

const con = mysql.createConnection({
    host: '<DB_ENDPOINT>',
    port: 3306,
    user: 'admin',
    password: '<DB_PASSWORD>'
});

app.post('/users', (req, res) => {
    if (req.query.username && req.query.email && req.query.age) {
        console.log('Request received');
        con.connect(function(err) {
            con.query(`INSERT INTO main.users (username, email, age) VALUES ('${req.query.username}', '${req.query.email}', '${req.query.age}')`, function(err, result, fields) {
                if (err) res.send(err);
                if (result) res.send({username: req.query.username, email: req.query.email, age: req.query.age});
                if (fields) console.log(fields);
            });
        });
    } else {
        console.log('Missing a parameter');
    }
});

app.listen(port, () => console.log(`RDS App listening on port ${port}!`));

Here, we set up the Express app to run on port 3000, and have a single request handler. It parses
the request, and extracts information such as the username, email and age from it. We then use the
MySQL connection to create a query and insert this info into the database.
If all is well, the results are sent back in the response for validation purposes.
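One caveat: interpolating request values straight into the SQL string, as above, is exactly the kind of pattern that invites the SQL injection attacks mentioned in the previous chapter. The mysql driver supports ? placeholders (used later in this chapter for the WHERE clauses), so a safer variant of the same query might look like this sketch:

// A sketch of the same INSERT using placeholders - the driver escapes the values
con.query(
    'INSERT INTO main.users (username, email, age) VALUES (?, ?, ?)',
    [req.query.username, req.query.email, req.query.age],
    function(err, result, fields) {
        if (err) res.send(err);
        if (result) res.send({
            username: req.query.username,
            email: req.query.email,
            age: req.query.age
        });
    }
);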
Let’s check if this works. Start up the application:

$ node index.js

And then, let’s fire a POST request to our server, creating a user, using Postman:

[Image: Postman POST request]

Our app logs that there has been a request:

Request received

And we got the correct response on Postman:

{
    "username": "testing",
    "email": "[email protected]",
    "age": "25"
}

Since we’ve received the same data back, our function is working fine. With the functionality of
adding users complete, let’s go ahead and retrieve users from the database.

Retrieve/Get Users Endpoint

Let’s devise a simple GET endpoint for a more user-friendly way to check for results. Below the POST
request handler, let’s make another handler:

app.get('/users', (req, res) => {
    con.connect(function(err) {
        con.query(`SELECT * FROM main.users`, function(err, result, fields) {
            if (err) res.send(err);
            if (result) res.send(result);
        });
    });
});

In your browser, navigate to localhost:3000/users and you should be presented with all the
inserted users:

Alternatively, you can send a GET request to localhost:3000/users and you’ll be able to see this
output:

[
    {
        "id": 1,
        "username": "testing",
        "email": "[email protected]",
        "age": 25
    }
]

We can narrow this down to only get a specific user if we have a field that we know is unique within
the database. For example, if our username was our unique key, we could do this:

app.get('/users/:username', (req, res) => {
    const username = req.params.username;
    con.connect(function(err) {
        con.query(`SELECT * FROM main.users WHERE username = ?`, username, function(err, result, fields) {
            res.send(result);
        });
    })
});

The :username part of the above path is called a route parameter, and acts like a variable that we can
throw in, allowing us to retrieve whatever the user entered there. This allows us to make a GET request to
localhost:3000/users/testing, which would just return records with that username:

[
    {
        "id": 1,
        "username": "testing",
        "email": "[email protected]",
        "age": 25
    }
]

The output is the same as getting all users, since there’s only one user in the database. You’ll also
typically have a unique id for each user - in our case, the id column we defined in the seed script is
an AUTO_INCREMENT primary key, so MySQL assigns and increments it automatically.
You can search for an entry by id using much the same approach as for the username - though note that,
to Express, /users/:id and /users/:username are the same route pattern, so in practice you’d keep only
one of these handlers or give them distinct paths:

app.get('/users/:id', (req, res) => {
    const id = req.params.id;
    con.connect(function(err) {
        con.query(`SELECT * FROM main.users WHERE id = ?`, id, function(err, result, fields) {
            res.send(result);
        });
    })
});

Sending a GET request to localhost:3000/users/1 or by visiting that URL in a browser, we’re greeted
with the all-too familiar user:

[
    {
        "id": 1,
        "username": "testing",
        "email": "[email protected]",
        "age": 25
    }
]

Finally, we might only want to retrieve a specific bit of information about a user - we could make
an endpoint that deals with this as follows:

app.get('/users/:username/age', (req, res) => {
    const username = req.params.username;
    con.connect(function(err) {
        con.query(`SELECT age FROM main.users WHERE username = ?`, username, function(err, result, fields) {
            res.send(result);
        });
    })
})

Sending a GET request to localhost:3000/users/testing/age results in:

[
    {
        "age": 25
    }
]

Here, we’re still taking in the parameter, but we’re adding another part to our endpoint to specify
the data we want to collect (and then coding that into our SQL query).

Update User Endpoint

What if we want to update that single user’s details? Let’s say our user wanted to change their email
- we’ll need another endpoint to perform that update. Let’s create a new endpoint, which accepts a
:username path variable, and extracts the new email from the request we send to it:

app.put('/users/:username/email', (req, res) => {
    const username = req.params.username;
    const email = req.query.email;
    con.connect(function(err) {
        con.query(`UPDATE main.users SET email = ? WHERE username = ?`, [email, username], function(err, result) {
            if (err) res.send(err);
            if (result) res.send(result);
        });
    });
});

You’ll notice our second parameter in con.query is now an array - this array is being read in order
into each of the ? placeholders in the query itself - make sure you get the order right, otherwise
you’ll end up with weird results, at best.
Using the route parameter from before, we can make a PUT request to the same user, but with a different
query. Let’s make a PUT request using Postman, passing in the new email as a query parameter:

If we head back to localhost:3000/users/testing in our browser, or if we simply send a GET request
to that path, we’ll see the updated email field as well:

Ideally, we don’t want to change too many fields at the same time, as this can lead to convoluted
queries and can increase the chances of making a mistake. When in doubt, refer to the S in SOLID
principles - single responsibility.

Delete User Endpoint

Finally, our user might decide that they no longer want to be part of our app. As sad as this is, we’ll
need a way to delete any information about them. We can do this with the following snippet:

app.delete('/users/:username', (req, res) => {
    const username = req.params.username;
    con.connect(function(err) {
        con.query(`DELETE FROM main.users WHERE username = ?`, username, function(err, result) {
            if (err) res.send(err);
            if (result) res.send(result);
        });
    });
});

Again, re-run the app and return to Postman, this time making the method for the request DELETE
to localhost:3000/users/testing:

You can check if this worked by sending a GET request to localhost:3000/users, which will return
an empty array if you haven’t added any other users in earlier stages:

Note: If you’re not planning on continuing to use your RDS instance at this moment,
make sure to terminate it! Otherwise, you’ll rack up a hefty bill within a month.
11. Cloud Computing
Cloud Computing has taken the world by storm. It presented developers and businesses with
the possibility of offloading certain resource-heavy operations onto exceptionally powerful servers,
designed and built just for that purpose.
Storage and computing power have traditionally been really expensive. The equipment itself is expensive,
even without the workforce required to maintain it. This led to enterprises investing obscene
amounts of money into equipment and personnel, while small businesses were left without many
options.
This was a huge hindrance for smaller teams that didn’t have the financial stability to invest, and it
was also a hassle for those who could invest, since server rooms are big and require proper ventilation
and maintenance.

If only somebody else could take the responsibility of these operations.

Naturally, everyone was looking into ways to delegate this, and those with high computing
power and huge storage started offering their power to others - for a price.
In 2006, Amazon accelerated the advent of Cloud Computing by introducing the world to the Elastic
Compute Cloud (EC2).
There are three main Cloud Computing Models:

• Infrastructure as a Service (IaaS)
• Platform as a Service (PaaS)
• Software as a Service (SaaS)

IaaS is the most abstract to developers, but it’s also the most “physical” model. It takes care of
the underlying infrastructure typically done by network administrators. PaaS builds upon IaaS and
is familiar to developers. These are typically frameworks and engines used to build applications.
Finally, SaaS is the “end-stage”, familiar to end-users. These are deployed and working applications
on the web.

The services Amazon offers fall into the realm of IaaS and PaaS. Both of these are required in
order to build a SaaS - a service that someone will present to a user. With this, they’ve positioned
themselves as an advanced platform, aimed at developers, that allows them to build powerful, fast,
secure software.

Infrastructure as a Service (IaaS)

Infrastructure is the foundation of any system. The way your network works, how the computers
within it communicate, the operating systems on them and storage - all of this is fundamental.
Again, maintaining these takes time and resources, and these can be offloaded onto other parties
which are glad to give up their own computing power for these purposes, for some money.
Many IaaS providers are currently competing on the market:

• AWS
• DigitalOcean
• Microsoft Azure
• Google Compute Engine
• IBM SmartCloud Enterprise
• Apache CloudStack

These are delivered as on-demand resources for running networks and the underlying
infrastructure. Typically, these providers will take care of application runtime, operating systems, data
storage/servers and virtualization.
This means that you don’t need the hardware or workforce for any of this. You pay as you go and
use the resources, saving you time, money and many headaches.

Platform as a Service (PaaS)

Platforms as a Service (PaaS) are familiar to developers. They’re typically provided as frameworks
used to build software upon. These typically include database and server support, operating systems
and language execution environments.
The players competing in this space are pretty similar to the players in the IaaS space:

• AWS
• Microsoft Azure
• Google Cloud App Engine
• IBM App Connect
• Oracle Cloud
• Heroku
• Apache Stratos

Using a PaaS instead of making things from scratch will cut coding time, make cross-platform
support easier and help manage the development cycle of an application.

Software as a Service (SaaS)

Finally, Software as a Service (SaaS), is what the end-user sees. These are finished, deployed/hosted
applications available on the web.
These are things such as Google Apps, Dropbox, MailChimp, Slack, HubSpot, etc. Some of these
services also offer an API that you can send requests to and connect to your custom-built application.
When working with an on-demand SaaS service, the third party you’re paying is managing
everything - the infrastructure, the platform and the software itself.
Imagine GitHub. You’re using their software solution to host code, using a popular tool - Git. If
you have a pro account, you also pay a monthly membership for that service. They manage the
infrastructure, the platforms used to build the software and all maintenance.
12. AWS Elastic Compute Cloud (EC2)
The AWS Elastic Compute Cloud (EC2) is an IaaS-type offering from Amazon. It provides scalable
computing power, which you can really use in any way you’d like. Typically, people set up virtual
servers and storage and deploy applications to EC2 for quick, easy and cheap app provisioning.

EC2 is a core part of AWS, and a lot of AWS’ other services are built on top of it.

It works by providing computing environments, known as instances. These instances run Amazon
Machine Images (AMIs), and with them, you’ve got most of the things required to run a web
application preconfigured.
In this chapter, we’re going to create a Node.js app with Docker, start and configure an EC2 instance,
and deploy our app to it. By the end of the chapter, you’ll have your Node app running on AWS, and
a better understanding of how to interact with a core AWS service.

Demo Application
Let’s make a simple Node application that responds to a request. Let’s make a directory for the app,
move into it and initialize it with the default configurations:

$ mkdir node-ec2
$ cd node-ec2
$ npm init -y

Once the package.json file is created, open it up and add the following line to the beginning of the
scripts section:

1 "start": "node index.js"

Instead of running node index.js, we’ll be using npm start, which will run everything in our script.
To serve our requests, we’re going to be using Express, as usual:

$ npm i express --save

Our package.json should now look something like this:



{
    "name": "app",
    "version": "1.0.0",
    "description": "",
    "main": "index.js",
    "scripts": {
        "start": "node index.js"
    },
    "author": "",
    "license": "ISC",
    "dependencies": {
        "express": "^4.17.1"
    }
}

And to get started, let’s create an index.js file:

$ touch index.js

Within it, we’ll set up Express and make a single request handler:

const express = require('express');
const app = express();
const port = 3000;

app.get('/status', (req, res) => res.send({status: 'I\'m alive!'}));

app.listen(port, () => console.log(`Example app listening on port ${port}!`));

This app will start on port 3000, and will serve an endpoint at /status. We can verify this works by
running:

$ npm start
Example app listening on port 3000!

Heading to http://localhost:3000/status - we should get a response back with {status: "I'm alive!"}.

Alternatively, using curl:

$ curl 'localhost:3000/status'

Will return:

1 {"status": "I'm alive!"}

With our simple Node application ready, let’s turn it into a Docker image which we’ll deploy to EC2.
We’ll first publish this image to Docker Hub, and while setting up an EC2 instance, we’ll read from
this image for the deployment.

Dockerizing the Node Application

Create a new file in the same directory as your Node application, called Dockerfile, in which we’ll
set up a few instructions:

# Start from a lightweight Node.js base image
FROM node:13-alpine

# Set the working directory inside the container
WORKDIR /usr/src/app

# Copy the dependency manifests and install dependencies first,
# so this layer can be cached between builds
COPY package*.json ./

RUN npm install

# Copy the rest of the application's source code
COPY . .

# Expose the app's port and define the startup command
EXPOSE 3000
CMD [ "npm", "start" ]

This is a basic Dockerfile that can be used for most simple Node applications. Next, let’s build the
Docker image and then run it to verify it’s working correctly:

$ docker build . -t ec2-app
$ docker run -p 3000:3000 ec2-app

If you navigate to http://localhost:3000/status again, you should see the exact same output:

1 {"status": "I'm alive!"}

Since it’s working, let’s push our Docker image to Docker Hub¹⁶:

$ docker login # Use your Docker Hub credentials here
$ docker tag ec2-app <YOUR_DOCKER_USERNAME>/ec2-app
$ docker push <YOUR_DOCKER_USERNAME>/ec2-app
¹⁶https://hub.docker.com/

Setting up EC2
With our application “dockerized”, we need to set up an EC2 instance for it to run on. Head to AWS
and log in.
Click the ‘Services’ dropdown menu at the top of the page, and search for ‘EC2’. This will lead you
to the EC2 Dashboard:

This is the page with the summary of our current instances. Obviously, there are 0 running so far,
with 0 dedicated hosts, key pairs, etc. This view also gives us a peek at the service’s health and
whether everything is running as it should be, across different zones.
Select the ‘Instances’ link on the left. Here is where we’ll be setting up the aforementioned instance
for our application:

On the next view, click the ‘Launch Instance’ button. You’ll see a page that looks like this:

AMIs

This is where we select the Amazon Machine Image - or AMI for short. An AMI is an ‘out of the
box’ server, and can come with multiple configurations.
For instance, we could select one of the Quick Start AMIs that have Amazon Linux 2¹⁷ on them, or
if you scroll down, there are instances with Ubuntu running on them, etc.
Each AMI is a frozen image of a machine with an operating system and potentially some extra
software installed.
To make things easy, we can look for an EC2 instance with Docker already configured for us!
To do this, we’ll go to the ‘AWS Marketplace’ on the left. Searching for ‘ECS’ should yield us the
‘ECS Optimized Amazon Linux 2 Image’.
This image comes with Docker, and is optimized for running containers. Hit ‘Select’ on the chosen
image and we’ll continue to the next page:

¹⁷https://aws.amazon.com/amazon-linux-2/

Instance Types

On the next view, we select what type of instance we want. Generally, this dictates the resources
available to the server that we’re starting up, with scaling costs for more performant machines:

The t2.micro instance type is eligible for the free (demo) tier, so it’s recommended to use that when
you’re just getting started:

Select the appropriate checkbox, and then click ‘Review and Launch’ in the bottom right corner. This
leads you to the “Review” page, where you can take a look at the selected options so far:

If all looks good, click ‘Launch’, and you’ll get a popup to select or create a key-pair.
Like with other services, we'll use this key pair for authentication - in this case, to SSH into
the instance. Select the first drop-down, and select 'Create a new key pair'. Under 'Key pair name',
enter what you'd like to call it:

Make sure to 'Download the Key Pair' on the right-hand side as a .pem file. By selecting 'Launch
Instance' again, your EC2 instance will be started up:

Click the highlighted link to be taken to the instance detail page.



Security Groups

Before we try running our application, we need to make sure that we’ll be able to access the
instance and the app. Most AWS resources operate under ‘Security Groups’ - these groups dictate
how resources can be accessed, on what port, and from which IP addresses.
In the previous chapter on RDS, while editing the public accessibility, we've tweaked the security
groups of that instance. Specifically, RDS security groups are managed through EC2 - you might've
already noticed that while editing the inbound rules, you were redirected to the EC2 dashboard.
Now, we’ll be editing the inbound rules again, for this EC2 instance. First, go to your instance
dashboard:

Enter your instance, where you can see information about it:

Then, under the “Security” tab, you’ll be able to see the “Security Group”:

Clicking on this group will lead you to the dashboard where you can modify it. Select "Edit Inbound
Rules". This is the exact same prompt we had for RDS. This time around, we'll open up port 3000, so
that anybody can reach our app:

The existing rule means that traffic coming in through port 22, using the TCP protocol, is allowed
from anywhere (0.0.0.0/0 meaning anywhere). We need to add another rule to allow anybody to
access our app at port 3000.
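
If you'd prefer to script this instead of clicking through the console, the same rule can be added via the SDK's authorizeSecurityGroupIngress() method. Here's a minimal sketch, assuming a placeholder security group ID and a credentials setup like the ones used throughout this book:

    const AWS = require('aws-sdk');

    AWS.config.update({region: 'us-east-1'});
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

    // 'sg-0123456789abcdef0' is a placeholder - use your instance's security group ID
    let ingressParams = {
        GroupId: 'sg-0123456789abcdef0',
        IpPermissions: [{
            IpProtocol: 'tcp',
            FromPort: 3000,
            ToPort: 3000,
            // 0.0.0.0/0 allows traffic from anywhere, like in the console
            IpRanges: [{CidrIp: '0.0.0.0/0'}]
        }]
    };

    ec2.authorizeSecurityGroupIngress(ingressParams).promise()
        .then(() => console.log('Port 3000 opened'))
        .catch(err => console.error(err));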

Running the Application on EC2

Head back to the ‘Instances’ page (click the link on the left) and select the instance you created
earlier. The address for your EC2 instance is located under the “Public IPv4 DNS” at the top of the
page:

Head back to the terminal, and navigate to the folder where the key pair you downloaded earlier is
located. It will be named whatever you entered for the key-pair name, with .pem as its extension.
Let’s change the key’s permissions and then SSH into the EC2 instance:

    $ chmod 400 <NAME_OF_KEYPAIR_FILE>
    $ ssh -i <NAME_OF_KEYPAIR_FILE> ec2-user@<PUBLIC_DNS>

Now, we can run commands on the EC2 instance. From here, we just need to launch our app from
Docker Hub. Since the EC2 instance is already configured to have Docker installed and ready, all
we need to do is:

    $ docker run -p 3000:3000 <YOUR_DOCKER_USERNAME>/ec2-app

You’ll be able to reach the instance using the same address you used to SSH into the instance. Simply
navigate in your browser to:

    <PUBLIC_DNS>:3000/status

Your app should return the status endpoint to you that we saw earlier. Congratulations, you’ve just
run your first app on EC2!
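
If you'd rather verify the endpoint from code than from the browser, Node's built-in http module is enough - a quick sketch, with <PUBLIC_DNS> standing in for your instance's address:

    const http = require('http');

    // Replace <PUBLIC_DNS> with your instance's Public IPv4 DNS
    http.get('http://<PUBLIC_DNS>:3000/status', res => {
        let body = '';
        res.on('data', chunk => body += chunk);
        // Should print: {"status": "I'm alive!"}
        res.on('end', () => console.log(body));
    }).on('error', err => console.error(err));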

Run Your App Headlessly

That's a quick win - though, there's a catch. As of now, your app is running in your current shell
session - and as soon as you close that session, the app will terminate!
To keep the app running in the background, run it in detached mode with the additional -d flag:

    $ docker run -d -p 3000:3000 <YOUR_DOCKER_USERNAME>/ec2-app

Now, you can close the terminal and it’ll continue running.

Security

You might want to go back, tighten up the security on the instance, and experiment with different
configurations - such as configuring it so that only you can access the SSH port, for example.
Change the ‘Source’ field on the first rule to ‘My IP’ - AWS will automatically figure out where
you’re accessing it from.
Note: If you’re running through this chapter on the move, or come back to it later, your computer
might have a different IP than when you initially set ‘My IP’. If you encounter any difficulties later
on, make sure to come back here and select ‘My IP’ again.

Other AMIs

There are hundreds of different AMIs, many from various communities, with applications already
pre-installed - it's worth looking through them to see if there's an easy way to set up something
you've wanted to work with.

Managing AWS EC2 Instances with Node


Now that EC2 is set up and running nicely, it's time to interact with it through code. Some of the
common tasks you'll want to perform through code are creating an instance, starting an instance,
stopping an instance, rebooting an instance and terminating an instance.
Writing code for this automates the manual process of having to access the dashboard and perform
these tasks through the user interface. Also, you can give someone else programmatic access without
letting them access the entire dashboard manually.

Creating an EC2 Instance

Let’s create a project directory, move into it and start a default Node project:

    $ mkdir ec2-node
    $ cd ec2-node
    $ npm init -y

Then, we’ll install the aws-sdk required for interacting with our EC2 instance:

    $ npm i aws-sdk --save

Then, we can create our entry-point - index.js:

    $ touch index.js

And within it, let’s import the AWS SDK and initialize an EC2 instance:

    const AWS = require('aws-sdk');

    const credentials = new AWS.SharedIniFileCredentials({profile: 'ec2_profile'});

    // Configure the region and credentials
    AWS.config.update({credentials: credentials, region: 'us-east-1'});

    // Instantiating an EC2 object
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

In the credentials file, we've added another user profile - [ec2_profile] - for which we've created
a new IAM user with the AmazonEC2FullAccess policy, just like we did for SNS and SQS.
As usual, we'll use this ec2 client object to send requests to AWS. Now, let's define the parameters
required to create an instance.
Thinking back to the things we set up on the dashboard manually, we'll put them in now as well:

    let instanceParams = {
        ImageId: 'ami-0669eafef622afea1',
        InstanceType: 't2.micro',
        KeyName: 'ec2-keypair',
        MinCount: 1,
        MaxCount: 1
    }

The ImageId refers to the ID of the AMI you'd like to use - this is the ID of the same AMI we used
in the manual section. You can get the ID of an AMI through the AWS dashboard, or look it up programmatically.
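
For instance, here's a hedged sketch of looking up Amazon-owned ECS-optimized images with describeImages(), using the ec2 object from above - the name filter value is an assumption, so verify it against the AMI names you see in the console:

    let imageParams = {
        Owners: ['amazon'],
        // The filter value is an assumed naming pattern for ECS-optimized
        // Amazon Linux 2 AMIs - double-check it in your console
        Filters: [{Name: 'name', Values: ['amzn2-ami-ecs-hvm-*-x86_64-ebs']}]
    };

    ec2.describeImages(imageParams).promise()
        .then(data => {
            // Each entry holds an ImageId you can plug into instanceParams
            data.Images.forEach(image => console.log(image.ImageId, image.Name));
        })
        .catch(err => console.error(err));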
Since t2.micro is eligible for the free tier, we've also used it here for the InstanceType. Then, we
provide the name of the key pair we created earlier (ec2-keypair) to the KeyName parameter, followed
by a MinCount and MaxCount. These define how many instances we want the EC2 client to create - at
least MinCount, and at most MaxCount. In this case, it's just 1.
Now, let’s use the ec2 instance with these parameters to initiate a request to create an EC2 instance:

    let instancePromise = ec2.runInstances(instanceParams).promise();

    instancePromise.then(data => {
        console.log(data);
        let instanceId = data.Instances[0].InstanceId;
        console.log('Created instance', instanceId);
    });

This will result in a JSON response from AWS:

    {
      Groups: [],
      Instances: [
        {
          AmiLaunchIndex: 0,
          ImageId: 'ami-0669eafef622afea1',
          InstanceId: 'i-0609b562cd4c8d9ea',
          InstanceType: 't2.micro',
          KeyName: 'ec2-keypair',
          LaunchTime: 2020-11-04T02:42:52.000Z,
          Monitoring: [Object],
          Placement: [Object],
          PrivateDnsName: 'ip-172-31-21-216.ec2.internal',
          PrivateIpAddress: '172.31.21.216',
          ProductCodes: [],
          PublicDnsName: '',
          State: [Object],
          StateTransitionReason: '',
          SubnetId: 'subnet-06d0924b',
          VpcId: 'vpc-579f512a',
          Architecture: 'x86_64',
          BlockDeviceMappings: [],
          ClientToken: '3d07c3be-80f3-4ddf-a107-f1b9a4cee746',
          EbsOptimized: false,
          EnaSupport: true,
          Hypervisor: 'xen',
          ElasticGpuAssociations: [],
          ElasticInferenceAcceleratorAssociations: [],
          NetworkInterfaces: [Array],
          RootDeviceName: '/dev/xvda',
          RootDeviceType: 'ebs',
          SecurityGroups: [Array],
          SourceDestCheck: true,
          StateReason: [Object],
          Tags: [],
          VirtualizationType: 'hvm',
          CpuOptions: [Object],
          CapacityReservationSpecification: [Object],
          Licenses: [],
          MetadataOptions: [Object],
          EnclaveOptions: [Object]
        }
      ],
      OwnerId: '<OWNER_ID>',
      ReservationId: 'r-0dc71b485bd2eea55'
    }
    Created instance i-0609b562cd4c8d9ea

Here, we can take a look at the settings used to instantiate our EC2 instance. No public DNS is
available in the response yet, but you can get it from the AWS dashboard if you'd like. Generally,
you'll be accessing the instance through its ID, which is provided here.
Taking a look at the dashboard, we now have two instances up and running:

To manage this instance, you can use any of the several functions provided for these purposes -
describeInstances(), monitorInstances(), startInstances(), stopInstances() and rebootInstances().

Let’s take a look at how we can stop this instance, since it’s already running, and how we can start
it back up again. Also, we can reboot this instance instead of stopping and starting in sequence.
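
Before that, here's a minimal sketch of checking on an instance from code with describeInstances(), reusing the ec2 object from before - it also lets you grab the public DNS once the instance is up:

    let describeParams = {
        InstanceIds: [
            '<INSTANCE_ID>'
        ]
    };

    ec2.describeInstances(describeParams).promise()
        .then(data => {
            const instance = data.Reservations[0].Instances[0];
            // e.g. 'pending', 'running', 'stopping', 'stopped' or 'terminated'
            console.log('State:', instance.State.Name);
            // Populated once the instance is up and publicly accessible
            console.log('Public DNS:', instance.PublicDnsName);
        })
        .catch(err => console.error(err));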

Stopping an EC2 Instance

Now, with our instance running and the instance ID in hand, we can go ahead and stop it
programmatically:

    const AWS = require('aws-sdk');

    const credentials = new AWS.SharedIniFileCredentials({profile: 'ec2_profile'});

    // Configure the region and credentials
    AWS.config.update({credentials: credentials, region: 'us-east-1'});

    // Instantiating an EC2 object
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

    let instanceParams = {
        InstanceIds: [
            '<INSTANCE_ID>'
        ]
    }

    let instancePromise = ec2.stopInstances(instanceParams).promise();

    instancePromise.then(data => {
        console.log(data);
    });

This is done in much the same way we perform other requests - set up the required params, pass them
to the appropriate method, and read the response.
Here, InstanceIds accepts an array, since we might have multiple instances we want to stop running.
We've put one instance ID in here - the one we've just created.
Let’s run this code:

    $ node index.js

This produces the response:

    {
      StoppingInstances: [
        {
          CurrentState: [Object],
          InstanceId: 'i-0609b562cd4c8d9ea',
          PreviousState: [Object]
        }
      ]
    }

A quick look at the dashboard confirms that it has stopped running:
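
Instead of checking the dashboard, you can also block until the instance is fully stopped using the SDK's built-in waiters - a short sketch, again with our instance's ID:

    // Polls EC2 until the instance reaches the 'stopped' state
    ec2.waitFor('instanceStopped', {InstanceIds: ['<INSTANCE_ID>']}).promise()
        .then(() => console.log('Instance is now stopped'))
        .catch(err => console.error(err));

Analogous waiters, such as 'instanceRunning' and 'instanceTerminated', exist for the other state transitions.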

Starting an EC2 Instance

In the same vein, let’s start this instance back up:

    const AWS = require('aws-sdk');

    const credentials = new AWS.SharedIniFileCredentials({profile: 'ec2_profile'});

    // Configure the region
    AWS.config.update({credentials: credentials, region: 'us-east-1'});

    // Instantiating an EC2 object
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

    let instanceParams = {
        InstanceIds: [
            '<INSTANCE_ID>'
        ]
    }

    let instancePromise = ec2.startInstances(instanceParams).promise();

    instancePromise.then(data => {
        console.log(data);
    });

When we run the code:

    $ node index.js

We get the response:



    {
      StartingInstances: [
        {
          CurrentState: [Object],
          InstanceId: 'i-0609b562cd4c8d9ea',
          PreviousState: [Object]
        }
      ]
    }

And a quick look confirms that the instance has started running:

Rebooting an EC2 Instance

When you'd like to reboot an instance - sure, you can just turn it off and on again. Though, to
avoid duplicating code, you can just use the rebootInstances() function:

    const AWS = require('aws-sdk');

    const credentials = new AWS.SharedIniFileCredentials({profile: 'ec2_profile'});

    // Configure the region
    AWS.config.update({credentials: credentials, region: 'us-east-1'});

    // Instantiating an EC2 object
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

    let instanceParams = {
        InstanceIds: [
            'i-0609b562cd4c8d9ea'
        ]
    }

    let instancePromise = ec2.rebootInstances(instanceParams).promise();

    instancePromise.then(data => {
        console.log(data);
    });

Let’s run this:

    $ node index.js

And the EC2 instance is rebooted.

Terminating an EC2 Instance

Finally, you might want to terminate an instance. Let's go ahead and terminate both of these
instances, since keeping them running racks up our bill - best avoided if we won't be using them
right after this:

    const AWS = require('aws-sdk');

    const credentials = new AWS.SharedIniFileCredentials({profile: 'ec2_profile'});

    // Configure the region
    AWS.config.update({credentials: credentials, region: 'us-east-1'});

    // Instantiating an EC2 object
    const ec2 = new AWS.EC2({apiVersion: '2016-11-15'});

    let instanceParams = {
        InstanceIds: [
            'i-0609b562cd4c8d9ea',
            'i-08fd5ad87f9ede25e'
        ]
    }

    let instancePromise = ec2.terminateInstances(instanceParams).promise();

    instancePromise.then(data => {
        console.log(data);
    });

Let’s run this code:



    $ node index.js

And we get the response:

    {
      TerminatingInstances: [
        {
          CurrentState: [Object],
          InstanceId: 'i-08fd5ad87f9ede25e',
          PreviousState: [Object]
        },
        {
          CurrentState: [Object],
          InstanceId: 'i-0609b562cd4c8d9ea',
          PreviousState: [Object]
        }
      ]
    }

Taking a look at the dashboard, we can see that both instances are terminated:
13. Serverless Computing
So far, one thing was common throughout all of the Cloud Computing models we've talked about:
although someone else maintained the servers, underlying infrastructure and operating systems,
you'd typically have access to them if you so desired.
For example, you can launch an empty EC2 instance and install the software you’d like or customize
it per your needs. You’re renting a virtual machine. The entire thing.
Through time, another Cloud Computing Model came to be - Function as a Service (FaaS). FaaS is
a Cloud Computing model in which you don’t rent out an entire virtual machine. Rather, you rent
out a bit of the computing power to run certain pieces of code and pay as you go.
Regardless of the provider, the flow looks pretty similar:

• Deploy your code to a FaaS provider
• Trigger that code via an HTTP request or other associated service
• The service runs your code

Some of the more popular providers include:

• AWS Lambda
• Google Cloud Functions
• Microsoft Azure Functions
• IBM/Apache’s OpenWhisk
• Oracle Cloud Fn

Functions as a Service are often used in the microservice architecture - where instead of a whole
microservice, you can spin up a simple function that’ll respond to a request, process it and forward
it to another service, or anything along those lines.
For example, uploading an image to a service can trigger a serverless function to process that image,
send a notification to the user or administrator, or update some visualization/analytics service with
the new info.
In terms of AWS - their Lambda service works wonders with many of their services. You can send
AWS SNS notifications to a Lambda function, kicking off a job. You can send queued messages using
SQS, respond to S3 events, react to real-time Kinesis data, transform data and store it into an RDS
database.
Here's an example - a user uploads an image, and the service layer of your application triggers a
Lambda Function, as the image is being saved onto a file hosting service. This Lambda Function
then triggers an SNS Topic to publish an update message to the team members or users of the
website, informing them of the addition. It also kicks off an email service, which uses an email
service provider like Gmail to send a confirmation to the user who uploaded the image:

Really, you can make any combination of these services - instead of an email service provider, you
can also use SNS, or use SQS to store several images in a queue to be batch-processed at a later date.
Instead of the service layer triggering the Lambda function, you could’ve relied on S3 events, and
have an all-AWS system.
Effectively, AWS Lambda can be used in pretty much any back-end service to replace a simple
microservice or a task. What’s really useful about it is that you just input the code and let it run.
Nothing more, nothing less.
Typically, you’d combine a service like this with other services. Specifically, AWS API Gateway is a
common combination with Lambda. API Gateway is a simple service that listens to HTTP requests
and has a built-in communication path with Lambda.
Once you hit an endpoint, it triggers a Lambda function.
14. AWS Lambda
AWS Lambda works by creating containers for the functions you create using the AWS console. The
first time it’s run, it’ll take a bit to initialize the container - but all subsequent calls will be stable,
consistent and fast.
What makes Lambda very useful is that you can easily set what events trigger the execution of
Lambda code and what happens with that result.
For example, you can make an API that, upon receiving a GET request, forwards it to Lambda,
which processes the request and sends the result to an SNS topic.
You can upload already written code to AWS Lambda or simply write it in their code editor. Let’s
start off by creating a Function using the AWS console.

Creating a Function
In the AWS console, under “Compute” products, you can find AWS Lambda:

You’ll be prompted with a ‘Create a Function’ button:



You'll be faced with three options. You can use a blueprint which will bootstrap you with some
common boilerplate code. This is anything from running an external process using the child_process
module to a script that listens to S3, triggers upon upload and retrieves the metadata of the
uploaded file:

You'll also be presented with the option to browse the serverless app repository. This differs from
the previous option, as here, you deploy functioning apps. Each of these has a GitHub link to its
source code:

These are great starting points and can help you out if you’re working a lot with AWS Lambda.
However, we’ll be authoring a function from scratch:

Feel free to choose a name for the function and the runtime for the container. We’ll be using Node,
obviously.
Finally, we've got to set an execution role for our function. You may want the function to have
access to some services, but not to others. For basic usage, feel free to select the basic Lambda
permissions:

You can also select existing roles here, in which you can be as detailed with the access as you’d like.
For now, we’ll start off with the basic role.
Clicking “Create Function” will create your function with the information you’ve provided:

Now, we can start coding the function. This can be done in three different ways - editing it inline,
uploading a ZIP file with code, or uploading from S3.

Working with the Lambda Designer


Let’s start off with inline code. Opening your function, you’ll be greeted with:

Some sample code is already populated in the inline editor. If you go ahead and click “Test”, the
following message will appear in the on-screen console:

    Response:
    {
      "statusCode": 200,
      "body": "\"Hello from Lambda!\""
    }

    Request ID:
    "e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57"

    Function logs:
    START RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57 Version: $LATEST
    END RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57
    REPORT RequestId: e5668f9b-0fa6-4029-8a8c-a0bdf6f45a57 Duration: 1.24 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 64 MB

If need be, you can set environment variables, tags, basic settings, etc. right below the code:

Above the function’s code, you’ll see the “Designer”:

Here, you can see a trigger and a destination with the Lambda function being between them. This
is where the example before ties in - say we want to trigger the function from an API, have the
function process that request and send the results of that to another service, such as SNS.
First, let’s take care of the trigger:

Many things can be set as a trigger for AWS Lambda, and the API Gateway is a common one. The
AWS API Gateway is a really useful service that can be used to bootstrap APIs in seconds. You can
build RESTful or WebSocket APIs and have them served up for another service to use.
Let’s select the API Gateway and input some info about our API:

Here, you can choose between creating a new API or selecting an existing one. Since we don't have
one already, let's create it. It'll be an HTTP API, with open security. Alternatively, for HTTP APIs,
you can create a JWT authorizer, and for a REST API, you can set an API key or use IAM authorization.
In the additional settings, we've set the name and deployment stage of the API:

Finishing this setup, your trigger is added:

The API endpoint can be found on the highlighted link. Hitting that endpoint will return:

When we hit the API endpoint, it sends an event to the service it’s wired to in the designer. In our
case, it sends an event to the Lambda function. The default implementation of the function is:

    exports.handler = async (event) => {
        // TODO implement
        const response = {
            statusCode: 200,
            body: JSON.stringify('Hello from Lambda!'),
        };
        return response;
    };

The event parameter contains the current event info. In our case, it’s the event from the HTTP
request.
This event is picked up and the function just returns the response. Since this is analogous to a “Hello
World!” message, let’s change the code in the editor to something else:

    exports.handler = async (event, context) => {
        let min = 0;
        let max = 10;
        let randomNumber = Math.floor(Math.random() * (max - min + 1)) + min;

        const json = {
            'Random Number': randomNumber,
            'Function Name': context.functionName,
            'Memory Limit (MB)': context.memoryLimitInMB
        }

        const response = {
            statusCode: 200,
            body: JSON.stringify(json)
        };
        return response;
    };

The context contains all the information about the function itself - how long it has been running,
how much memory it's consuming, among other things. This can be viewed as the runtime information.
Here, we've generated a random number between 0 and 10. We put that number in a JSON object,
alongside the function's name and memory limit, extracted from the context object.
Then, we've stringified this JSON object and put it in the body of the response that's returned by
Lambda.
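
Besides functionName and memoryLimitInMB, the context object in the Node runtime exposes a few more handy fields and methods - for example:

    exports.handler = async (event, context) => {
        // Unique ID of this invocation - useful for correlating logs
        console.log('Request ID:', context.awsRequestId);
        // Time left before Lambda times this invocation out
        console.log('Time remaining (ms):', context.getRemainingTimeInMillis());
        // The CloudWatch log group this invocation writes to
        console.log('Log group:', context.logGroupName);

        return { statusCode: 200, body: JSON.stringify('ok') };
    };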
Don’t forget to “Deploy” the code, using the “Deploy” button, which will commit the change to our
Lambda function. Then, let’s hit the endpoint again with a GET request via the browser:

Great, we’ve connected an API and returned the result of the Lambda function! It returned a random
number, the name of our function and the memory limit on our function.
Now, instead of just returning the result, let’s send it off to another service, like SNS. To do so, we’ll
add a destination for our result, in the designer:

Here, you’re faced with a few options. The Source in this sense is considered the Lambda function.
It can be an asynchronous or stream invocation. Stream invocations are for streaming/real-time
services such as Kinesis or DynamoDB. Since that’s not what we’re doing, we’ll be going with an
asynchronous invocation.
Depending on the service you’d like to invoke, your function will have to have certain roles. Here,
if we just try to use it, AWS will attempt to give our function the correct role for that service. If it
fails, you can always set the role yourself by going to “Permissions”:

And then to the link of the Role name:

Here, we can see that AWS automatically gave it the AWSLambdaSNSTopicDestinationExecutionRole,
besides the already existing AWSLambdaBasicExecutionRole.
Now, we've set this to run On Success. The event object has a Success field. By default, it's empty,
as we define what counts as a successful run and what doesn't.
We'll want to use the AWS CLI to invoke this function, sending a payload of '{"Success": true}'.
For some reason, the Test button doesn't seem to work in a lot of cases and the Success field of the
event just stays empty. This means that the "On Success" condition is never really reached with the
Test button.
On your AWS CLI, let’s invoke the Lambda function:

    $ aws lambda invoke --function-name myFirstFunction --invocation-type Event \
        --payload "{\"Success\": true}" response.json --cli-binary-format raw-in-base64-out

Since we’re using the CLI, make sure that the IAM user you’re using has the AWSLambdaFullAccess
policy attached, or at least has permission to use lambda:InvokeFunction.

In the command, it's important to set the --cli-binary-format to raw-in-base64-out, since by
default it only accepts Base64, and we're passing in JSON.
This call results in:

    {
      "StatusCode": 202
    }
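
Incidentally, the CLI isn't the only option - the same invocation can be performed from Node through the SDK's Lambda client. A hedged sketch, assuming the same function name and an IAM profile with invoke permissions:

    const AWS = require('aws-sdk');

    AWS.config.update({region: 'us-east-1'});
    const lambda = new AWS.Lambda({apiVersion: '2015-03-31'});

    let invokeParams = {
        FunctionName: 'myFirstFunction',
        // 'Event' means asynchronous invocation, like the CLI call above
        InvocationType: 'Event',
        Payload: JSON.stringify({Success: true})
    };

    lambda.invoke(invokeParams).promise()
        .then(data => console.log(data.StatusCode)) // 202 for async invocations
        .catch(err => console.error(err));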

And we can verify that we’ve gotten an SMS and email from our SNS topic, with the information
regarding the call to SNS:

Working with Lambda Editor


Alternatively, if the designer isn't your preferred approach - you can completely ditch it and do
everything through code.
For example, let’s import the AWS SDK and send an SNS message with the approach more familiar
to us from the previous chapters:

    const AWS = require('aws-sdk');
    const sns = new AWS.SNS();

    exports.handler = (event, context) => {
        let min = 0;
        let max = 10;
        let randomNumber = Math.floor(Math.random() * (max - min + 1)) + min;

        let json = {
            'Random Number': randomNumber,
            'Function Name': context.functionName,
            'Memory Limit (MB)': context.memoryLimitInMB
        }

        let snsParams = {
            Message: JSON.stringify(json),
            Subject: 'Lambda Function Information',
            TopicArn: 'arn:aws:sns:us-east-1:867901910243:myStackAbuseTopic'
        }

        sns.publish(snsParams, context.done);
    };

Here, we instantiate an AWS.SNS() client, construct a JSON object with the information we want to
send, set up the parameters for SNS and publish the message. Passing context.done as the callback
keeps the function alive until the publish completes.
NOTE: If you make this function async but return without awaiting the Promise returned from
sns.publish().promise(), your function likely won't work. The function will terminate before the
Promise can be fulfilled and the message will never be sent to the topic.
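
For reference, here's a sketch of an async variant that awaits the publish before returning - the topic ARN is the same one used above:

    const AWS = require('aws-sdk');
    const sns = new AWS.SNS();

    exports.handler = async (event, context) => {
        let snsParams = {
            Message: JSON.stringify({'Function Name': context.functionName}),
            Subject: 'Lambda Function Information',
            TopicArn: 'arn:aws:sns:us-east-1:867901910243:myStackAbuseTopic'
        };

        // Awaiting the Promise keeps the function alive until SNS confirms the publish
        await sns.publish(snsParams).promise();

        return { statusCode: 200, body: JSON.stringify('Message published') };
    };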
Let’s hit our API Gateway endpoint again:

And this, in turn, calls the code from the Lambda function, which sends us an SMS message:
Additional Resources
Thank you for making your way to the end of the book. We hope you’ve enjoyed it and found it
informative.
If you found any bugs, typos or mistakes, please feel free to let us know, and we’ll update the book
as soon as possible.
Over time we will continually update this book by fixing issues, updating information, adding new
relevant content, etc.

Have any feedback on what could be fixed/changed/added? Feel free to contact us! Our
email is [email protected].

Any sort of feedback, or a review is highly appreciated!


If you’re interested in learning more about AWS, here are some pointers:

Amazon Web Services

• Amazon SDK for NodeJS¹⁸
• Amazon JS Developer Forum¹⁹
• Explore JavaScript on AWS²⁰
• What's New in AWS?²¹
• AWS News Blog²²
• AWS Training and Certification²³
• AWS JS Developer Guide²⁴

GitHub

• Official AWS SDK GitHub²⁵
• StackAbuse GitHub for Book Examples²⁶

¹⁸https://aws.amazon.com/sdk-for-node-js/
¹⁹https://forums.aws.amazon.com/forum.jspa?forumID=148
²⁰https://aws.amazon.com/developer/language/javascript/
²¹https://aws.amazon.com/new/?whats-new-content-all.sort-by=item.additionalFields.postDateTime&whats-new-content-all.sort-order=desc
²²https://aws.amazon.com/blogs/aws/
²³https://aws.amazon.com/training/
²⁴https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/welcome.html
²⁵https://github.com/aws/aws-sdk-js
²⁶https://github.com/StackAbuse/sa-ebook-getting-started-with-aws-in-node-js-code
