
CDK Workshop

About this workshop


Attention
You can post questions about this workshop in the #cdk-workshop Slack channel.
Please submit a SIM issue if you encounter any errors. If you would like to
contribute an update, see Editing documentation and reviewing changes in the
BXDocs Writing Guide.

AWS Cloud Development Kit (CDK) is a framework for writing your infrastructure as
code. By the end of this workshop you will know how to use CDK internally at
Amazon, including how to use CDK pipelines, various constructs in TypeScript,
basic CDK commands, how to secure CDK constructs, and how to test CDK
infrastructure. You will also create a state machine that calls a Lambda, performs
simple arithmetic on a list of numbers, saves the result in a DynamoDB table, and
emails it to you.

We will use TypeScript as the CDK language of choice to create and manage our
infrastructure. Builder Tools’ Golden Path architecture recommendation is to use
the TypeScript version of the CDK, and some constructs, such as those provided by
MoteCDK, are vended only in TypeScript. You will spend more time learning
CDK/CloudFormation rather than a new language. In addition, it will be easier to
find examples in TypeScript, since the majority of CDK users choose TypeScript to
build their applications.

The Lambda function we will be integrating with is written in Java, although we
won’t be making any code changes to that package in this workshop. However, we also
offer instructions for using Python or TypeScript as the Lambda function
implementation if you’d prefer one of those.

Who is this workshop for? Amazon SDEs who want to learn about using CDK internally
at Amazon, including how to use CDK pipelines, various constructs in TypeScript,
basic CDK commands, how to secure CDK constructs, and how to test CDK
infrastructure.

Note
The section about MoteCDK is currently only approved for CDO usage and not for AWS
teams.

What use cases is this workshop good for? (1) Serverless workflows: when you need
to coordinate multiple AWS services using Step Functions. (2) Event-driven
architectures: when you need to process AWS triggers (an S3 event, SQS message,
DynamoDB stream, etc.) that make calls to other AWS services or internal Amazon
services.

What should you know or do before you start? Create or use existing Conduit,
Isengard or Burner account (with Burner account, plan to complete the entire
workshop while the account is still active).

Day 1: Build your app


What will you have at the end of this day? An application that does the following:
you provide two numbers to a Step Functions flow, a Lambda is called and does a
simple calculation on those numbers, the result is sent back to the Step Functions
flow and is saved in DynamoDB, and lastly the result is sent to your email. You
will have a CI/CD pipeline and 3 packages: CDK, Tests, and Lambda (Java).

What will you learn? You will understand the basics of CDK with TypeScript: what
CDK stacks are and how to add AWS resources to them. You will also learn and use a
number of CDK-related brazil-build commands.

Step 1: Create a Lambda


Keyboard time: 5 minutes

Idle time: 50 min

In this step we use BuilderHub Create to create the pipeline, Java, and CDK
packages. Then we pull the source code to our dev machine and build the workspace.
We will predominantly use the CDK package in all the steps, but it’s still helpful
to pull all packages locally.

Create a Burner account


In order to make everything easier to clean up, it is strongly recommended to use a
burner account (a short-lived AWS account):

Go to AWS Burner Accounts.

Provide an account name and check the first checkbox to acknowledge that usage is
not free.

The account may take 5-15 minutes to create. Note the account ID.

Create an IAM role


Note
You may have to run the following command if you get the error Could not fetch
credentials from both Isengard, Conduit, or from environment:

ada credentials update --account=YOUR_AWS_ACCOUNT_ID --provider=conduit --role=IibsAdminAccess-DO-NOT-DELETE --once

This role allows BuilderTools to create the Lambda in your account.

Run the following on your laptop (or Cloud Desktop):

kinit && mwinit -o


toolbox registry add s3://buildertoolbox-registry-hub-create-us-west-2/tools.json
toolbox install hub-create
toolbox install ada
hub-create prepare -a YOUR_AWS_ACCOUNT_ID

Create basic Java Lambda

We’ll use BuilderHub Create to set up the basic resources needed for this workshop.
Go to BuilderHub Create, choose the Basic Java Lambda source application. Click
Next and fill out the form with these parameters:

Important
Throughout this lab, you will be asked to use your Amazon username as part of a
command or resource name. Replace all instances of username with your own username,
following the capitalization of the text (e.g. username or Username).

Details Section

Dependency Management System: Select Classic Brazil. We will be upgrading the
workshop to use Peru soon, but for now, choose Brazil.

Clone Name: This can be whatever you want, but to make following along with this
workshop easier, we recommend calling it CdkWorkshopUsername

Description: Whatever you want, for example CDK Workshop Lambda

Owning Team: If you don’t already have one, we recommend creating a Team and POSIX
group called username-only just for yourself to use for personal projects.
To do this:

Go to Create a team.

For Team Name, use username-only.

For Team Description, use username-only.

For Secondary Owner, use the username of someone on your team, for instance your
onboarding buddy or your manager.

For Membership Rule, leave all options unchecked.

For Additional Members, enter your username and add yourself, and add any reason of
your choice in Add Override Reason.

Leave CTI blank.

For Email, use your @amazon email address.

Click Preview New Team Membership and create the team.

After your team has been created, follow the instructions in the step-by-step guide
to attach a POSIX group with the same name (username-only) to the team. Waiting
time for the new team and POSIX group to show up in the CreateHub dropdown can be
minutes or even hours!

POSIX group owner: The username-only POSIX group you created above. This may take a
little time to appear after creation. If you still can’t find the group after
waiting a bit and reloading the page, go back to your team and look under Members
to make sure you are included in the team membership. If not, add yourself at the
Additional Membership tab under Membership Policy and then retry.

Email: Your @amazon email address

AWS Account Section

If you have an existing AWS Account (for example a Burner account) that you want to
use, disable the Create an AWS account toggle and enter the AWS Account ID. If you
want to create a brand new AWS Account just for this lab, you can leave the toggle
enabled and select Individual for the account type.

Bindles Section

Disable Create a Bindle with the required permissions and under Select a Bindle
search for PersonalSoftwareBindle. There should be one with your username which you
can use to manage permissions for all your personal projects.

Deployment, Application Options, and Packages Sections

We recommend leaving all these settings alone, unless you have a good reason to
change them.

Once all the sections are filled out, click Next. If this is your first time using
BuilderHub Create with a personal bindle, you may get a warning like Bindle does
not grant Can Manage permissions to the service AWS Account. Just click the Grant
permissions link under the warning to add these permissions to your Bindle and then
come back and click Next again.
Confirm the settings and click Create.

After around 50 minutes, you will receive an email that your application was
created. Get a hot drink and meditate instead of staring at the screen. Or, if
you’re new to Amazon, you can check out one of the modules in our SDE Foundations
course, or one of the other learning resources in the Engineering Excellence
catalog.

Verify
Your new Lambda is a simple calculator which adds two numbers. Via your pipeline,
it has been deployed to the us-west-2 Oregon AWS Region. Let’s test it out!

First, get programmatic access to the AWS account you used by using the ADA CLI:

aws lambda --region us-west-2 invoke --function-name CdkWorkshopUsername \
  --payload '{"x":"2","y":"3"}' \
  --cli-binary-format raw-in-base64-out /dev/stdout

You should see 5.0 in the body of the response.
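As a rough sketch of what the calculator does (the real handler generated by BuilderHub Create is written in Java; handleCalc and CalcEvent here are invented names for illustration only), the logic boils down to parsing the string payload fields and returning their sum in the response body:

```typescript
// Hypothetical TypeScript sketch of the calculator Lambda's logic.
// The actual generated package is Java; names here are illustrative.
interface CalcEvent {
  x: string; // numbers arrive as strings in the payload
  y: string;
}

function handleCalc(event: CalcEvent): { statusCode: number; body: string } {
  // Parse the string inputs and add them
  const sum = parseFloat(event.x) + parseFloat(event.y);
  // Format with one decimal place, matching the "5.0" seen in the response
  return { statusCode: 200, body: sum.toFixed(1) };
}
```

Invoking this sketch with x="2" and y="3" produces a body of "5.0", matching the response above.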

If you get an error such as Unknown options: --cli-binary-format, /dev/stdout, you
may need to update your AWS CLI version. In this case, use the following command to
discover your AWS CLI version:
aws --version
If the returned version for aws-cli is 1.x.xx, see AWS CLI version 2 migration
instructions in the AWS Command Line Interface User Guide for Version 2 to upgrade
to version 2.

You can use the same command and remove the --cli-binary-format raw-in-base64-out
flag to unblock yourself for now.

Now, while this new capability may not blow your mind yet, we’ll next look at how
this Lambda can be plugged into a nearly infinite number of possibilities using AWS
services and CDK.

So, what happened so far?

The CDK package that BuilderHub Create provisioned for you has created a full CI/CD
setup (we call it a Pipeline). Open the following URL to see your pipeline:

https://pipelines.amazon.com/pipelines/CdkWorkshopUsername

This CDK pipeline has deployed your initial calculator Lambda code using
CloudFormation Stacks. You can view your code using the following link:

https://code.amazon.com/packages/CdkWorkshopUsername/trees/mainline
In the CloudFormation world, a stack is a collection of AWS resources managed
together. You can create CloudFormation stacks to define AWS resources using YAML
or JSON, or you can use CDK to define the stacks programmatically, which is what
we’ll be exploring in this workshop.
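To make “define the stacks programmatically” concrete, here is a toy sketch, not real CDK code: synthTableTemplate is an invented helper that emits a CloudFormation-style JSON fragment for a DynamoDB table, which is conceptually what CDK synthesis does for you from construct code.

```typescript
// Toy illustration only: CDK "synthesizes" TypeScript stack definitions into
// plain CloudFormation JSON. This hand-rolled function mimics that idea for a
// single DynamoDB table; real CDK code uses aws-cdk-lib constructs instead.
type Template = { Resources: Record<string, object> };

function synthTableTemplate(logicalId: string, partitionKey: string): Template {
  return {
    Resources: {
      [logicalId]: {
        Type: "AWS::DynamoDB::Table",
        Properties: {
          KeySchema: [{ AttributeName: partitionKey, KeyType: "HASH" }],
          AttributeDefinitions: [{ AttributeName: partitionKey, AttributeType: "S" }],
        },
      },
    },
  };
}
```

The payoff of the programmatic approach is that loops, conditionals, and type checking all apply to your infrastructure definitions, which plain YAML or JSON cannot offer.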

Check out your CDK package by opening the following URL:

https://code.amazon.com/packages/CdkWorkshopUsernameCDK/trees/mainline
If you expand the lib folder, you’ll see three files: app.ts, serviceStack.ts, and
monitoringStack.ts which were created for you by BuilderHub Create. The app.ts file
is the main entry point for CDK, which defines all the infrastructure the package
will create, while the other two files define CloudFormation stacks to be deployed
by the CDK. Feel free to explore this boilerplate code to familiarize yourself with
it.

Here’s a diagram of all the Stacks you have so far and the resources which exist in
each one. In the next part of this workshop, you will learn what goes into writing
a stack as you will create another stack from scratch yourself, stay tuned!

Get the initial codebase


Follow the link in the email from Builder Tools (or find your application on
BuilderHub Create at My Applications) and execute the brazil commands you see
there. This will clone the 3 packages that are part of your application to your
laptop or Cloud Desktop.

The command from BuilderHub is similar to this. Replace Username with your Amazon
username.

brazil ws create --name CdkWorkshopUsername --root ~/workplace/CdkWorkshopUsername

cd ~/workplace/CdkWorkshopUsername/

brazil ws use \
-vs CdkWorkshopUsername/development \
-p CdkWorkshopUsernameCDK \
-p CdkWorkshopUsername \
-p CdkWorkshopUsernameTests \
--platform AL2_x86_64

cd src/CdkWorkshopUsernameCDK

Build
From within the CdkWorkshopUsernameCDK directory, run the following command to
build all of the packages:

brazil-recursive-cmd --allPackages brazil-build


Note
If you run into error Directory not in workspace while running the build command,
make sure you are in the CdkWorkshopUsernameCDK directory and then run the command.

Note
If you run into ERROR: You have no preference setting for {Node {VERSION}.x. | Java
{VERSION}}, try brazil setup --{node | java}, then cycle through the versions by
repeatedly pressing Enter and typing in the home directories under the correct
node/java versions (usually /opt/homebrew/opt/node@VERSION/bin/node and
/Library/Java/JavaVirtualMachines/amazon-corretto-VERSION.jdk/Contents/Home). If
you made a mistake, use brazil prefs --delete --force --global --key
{cli.bin.node18x} or {cli.bin.java14_home} to delete the preference settings. If
you are using nvm (Node Version Manager), you can locate node versions (after
they’re installed, e.g., nvm install 14) with the command nvm which XX, replacing
XX with the version (e.g., 14, 16, 18), and use the outputted file path for brazil
setup --node inputs. Here is a helpful answer in Sage.

Note
If you run into Task: coverageReport FAILED, try installing Java 11 from Self
Service and then run brazil ws clean before attempting to install the packages
again.

Note
If you run into The security token included in the request is expired Error: Failed
to run CDK CLI despite being authenticated, try running brazil ws clean && rm
~/.aws/credentials && touch ~/.aws/credentials. For more information, see this
answer in Sage.

Note
If you run into Unsupported class file major version 62 or this error:

error: cannot access org.mockito.Mockito
import org.mockito.Mockito;
       ^
  bad class file: /Volumes/workplace/<app>/env/gradle-cache-2/org/mockito/mockito-core/5.0.0/mockito-core-5.0.0.jar(org/mockito/Mockito.class)
    class file has wrong version 55.0, should be 52.0
    Please remove or make sure it appears in the correct subdirectory of the classpath.

download Amazon Corretto 11 and follow this Sage response.

By the end you should see something like:

CDK Synth complete
Checking CloudFormation bats config. File: Packaging/CloudFormation.yml
Wrote CloudFormationPublisher bats config
BUILD SUCCEEDED

CDK has just compiled all the existing source code into CloudFormation templates.
If you want to see what it created, first go to the CdkWorkshopUsernameCDK package
directory by running cd src/CdkWorkshopUsernameCDK/, then run brazil-build cdk ls.
You should see a list of all the CloudFormation stacks ready for deployment:

BONESBootstrap-111222333-111222333444555-us-west-2
Pipeline/PDGScaffoldingStack
Pipeline/PipelineDeploymentStack
CdkWorkshopUsername-Service-alpha
CdkWorkshopUsername-Monitoring-alpha

You’ll notice the Service and Monitoring stacks we saw earlier are there, as well
as your pipeline and some bootstrap resources. Nice!

Step 2: Add Infrastructure (DynamoDB table and an SNS topic)


As you saw earlier, you already have service and monitoring stacks. In general,
it’s good practice to separate your AWS infrastructure into stacks and use them as
your deployment units. This produces smaller changes, quicker deployment times, and
less of the coupling that can become cumbersome in larger services.

In this step, we will be transforming our simple calculator Lambda into a system
which spans multiple AWS services for even greater functionality. To deploy this
cleanly without disrupting anything we already have working, we’ll create it in a
new stack that we’ll call infraStack.

Create separate infra stack


In CDK, you create new stacks as new files in your package. Create a new file
called CdkWorkshopUsernameCDK/lib/infraStack.ts, right alongside the existing
serviceStack.ts and monitoringStack.ts files and populate it with the code below.
You can use vim, nano, or whatever editor you’re comfortable with to do this. Don’t
forget to update the email address at the bottom of the file with your Amazon email
address (or else it will lead to issues while deploying later).

Tip
If you use vim, :set paste can be helpful to maintain formatting when copying
this code into your terminal. You can also use gg=G to format your document.

import { DeploymentStack, DeploymentStackProps } from "@amzn/pipelines";
import { AttributeType, Table } from "aws-cdk-lib/aws-dynamodb";
import { Subscription, SubscriptionProtocol, Topic } from "aws-cdk-lib/aws-sns";
import { Construct } from 'constructs';

export class InfraStack extends DeploymentStack {
  public readonly table: Table;
  public readonly topic: Topic;

  constructor(scope: Construct, id: string, props: DeploymentStackProps) {
    super(scope, id, props);

    // DynamoDB Setup
    this.table = new Table(this, 'Table', {
      partitionKey: { name: 'id', type: AttributeType.STRING }
    });

    // SNS Topic
    this.topic = new Topic(this, 'Topic', {
      displayName: 'VerificationTopic'
    });

    // Subscribe to Email
    const subscription = new Subscription(this, 'EmailSubscription', {
      topic: this.topic,
      protocol: SubscriptionProtocol.EMAIL,
      endpoint: '[email protected]' // replace with your email address
    });
  }
}
Initialize the infra stack
Now, just like any programming project, creating a new file and writing some code
in it doesn’t actually do anything on its own. As we mentioned before, the app.ts
file is the entry point for CDK to build all the stacks you’ve defined, so we need
to add a bit of code to that file to wire up our new infra stack. Let’s make the
following changes to lib/app.ts:

Add SoftwareType to the list of @amzn/pipelines imports at the top of the file:

import {
DeploymentPipeline,
GordianKnotScannerApprovalWorkflowStep,
Platform,
ScanProfile,
SoftwareType,
} from '@amzn/pipelines';
Import the new stack at the top of the file, just below where MonitoringStack was
imported:

// import the new stack, just below where MonitoringStack was imported
import { InfraStack } from './infraStack';
This pulls in the new code we just wrote for us to use it in our application.

Put this code block near the middle of the file (be sure to replace Username with
your own user name), before the line which initializes serviceStack (const
serviceStack = new ServiceStack...):

const deploymentProps = {
env: env,
softwareType: SoftwareType.INFRASTRUCTURE,
};
const infraStack = new InfraStack(app, `CdkWorkshopUsername-Infra-${stageName}`,
deploymentProps);
By initializing the InfraStack as a TypeScript object, compiling the code will now
result in the InfraStack being generated into your CloudFormation templates.

Find the line of code which looks like this:

deploymentGroup.addStacks(serviceStack, monitoringStack);

Replace it with:

deploymentGroup.addStacks(infraStack, serviceStack, monitoringStack);


This adds the InfraStack to our CI/CD pipeline to be constantly deployed to and
kept up to date.

Save your changes to the file.

Build, check, and diff


With any luck, this has fully wired up our new Infrastructure (in the form of a
DynamoDB table and SNS topic) to be generated and managed for us by CDK. Let’s
recompile the CDK:

brazil-build clean && brazil-build


This command should succeed if you’ve made all the code changes correctly. If you
get an error, check over your work (you can use git diff to show your changes) to
make sure you didn’t mistype anything. Ensure you are recompiling in the
CdkWorkshopUsernameCDK directory. You can check this commit to see all of the
changes that are needed at this step and compare with your own code.

Once done, we can re-run the command from before to show all the CloudFormation
Stacks available from our CDK code:

brazil-build cdk list


In the list, you should now see the new stack you’ve just wired up:

BONESBootstrap-111222333-111222333444555-us-west-2
Pipeline/PDGScaffoldingStack
Pipeline/PipelineDeploymentStack
CdkWorkshopUsername-Infra-alpha
CdkWorkshopUsername-Service-alpha
CdkWorkshopUsername-Monitoring-alpha
If you want to take it one step further, you can use CDK diffing functionality to
do a comparison of your changes against what’s currently deployed:

brazil-build cdk diff


Find CdkWorkshopUsername-Infra-alpha in the output; you should see all the new
resources this stack will create that don’t currently exist.

Deploy your changes


Let’s deploy this brand new infrastructure. Since we added this stack to the
Pipeline’s deployment group in the previous step, one way to do this would just be
to commit your changes and push them to mainline. This will cause the pipeline to
start recompiling CDK (just like we did locally, but this time via Package
Builder), and automatically deploy the results into your AWS account.

But, pushing to mainline without some prior validation that your changes work is
usually a recipe for failures, rollbacks, and reverts. Instead, we’ll deploy this
new stack first manually to verify it does what we want. To do this outside of the
pipeline, we’ll need CDK to set up a few extra resources for us by running the
following command to bootstrap our AWS account:

brazil-build cdk bootstrap


If you see an error like The security token included in the request is expired,
check out this Sage post. You can read more about what bootstrapping does for you
in the public CDK documentation. Now that we’re set up, we can use the cdk deploy
command to deploy any of the stacks our CDK code defines:

brazil-build cdk deploy CdkWorkshopUsername-Infra-alpha


If you see this error:

CdkWorkshopUsername-Infra-alpha failed: Error [ValidationError]: Stack
[CdkWorkshopUsername-Infra-alpha] cannot be deleted while TerminationProtection
is enabled.

perform the following steps:

Open the AWS console, search for CloudFormation, and change the console region to
US West (Oregon).

Find the stack named CdkWorkshopUsername-Infra-alpha, select Stack Actions ‣ Edit
Termination Protection, and disable termination protection.

Double-check your infraStack.ts file to ensure that you have replaced all Username
strings with your actual username. If you modify the file, re-run brazil-build to
rebuild it.

Verify and push


If all went well, then your new CloudFormation infra stack should be successfully
deployed and you should have gotten an email from AWS SNS to confirm a
subscription. You can go ahead and click the Confirm subscription link from within
the email. Nice work!

Now we know it’s safe to commit and push your infrastructure changes remotely. You
can also create a CR to have the changes reviewed before pushing the changes:

git add lib
git commit -am "Add infra stack"
git push
After your changes are approved and merged, you can head back to your pipeline to
watch them deploy to ensure it does so successfully. However, in this case, the
deployment should be effectively a no-op since we’ve already deployed the new stack
manually.

Step 3: Add Step Functions


This is the fun part. We are going to use Step Functions to build a complex
serverless workflow which makes use of our calculator function. We’ll then play
around with triggering that workflow from the AWS Console and observing the path
that the workflow took through the step function’s states.

Our step function will be making decisions based on the user’s input and the data
stored in DynamoDB to implement a caching mechanism for our Lambda function. After
all, math doesn’t change all that often, so we don’t need to make our Lambda do all
the incredibly hard work to add the same two numbers together over and over again.
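The caching pattern the step function implements can be sketched in plain TypeScript. This is an illustrative stand-in, not the real infrastructure: a Map plays the role of the DynamoDB table, a local function plays the Lambda, and addWithCache is an invented name. The cache key mirrors the States.Format('{}+{}', ...) expression used later in the state machine.

```typescript
// In-memory stand-in for the DynamoDB table keyed by "number1+number2"
const cache = new Map<string, number>();

function addWithCache(number1: string, number2: string): { sum: number; fromCache: boolean } {
  const key = `${number1}+${number2}`; // same shape as the DynamoDB partition key
  const stored = cache.get(key);
  if (stored !== undefined) {
    // Analogous to the passStoredValueState path: return without "invoking the Lambda"
    return { sum: stored, fromCache: true };
  }
  // Analogous to calculateState: do the actual work
  const sum = parseFloat(number1) + parseFloat(number2);
  cache.set(key, sum); // analogous to saveInDynamoDbState
  return { sum, fromCache: false };
}
```

The first call for a given pair does the computation and stores it; repeat calls with the same inputs hit the cache, which is exactly the behavior you will observe in the state machine graph later.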

The image above shows the State Machine Graph inspector which is part of the AWS
Console. We’ll use it to test and troubleshoot the serverless state machine.

Add state machine
Just as before, we’ll add this new step function as a new CloudFormation stack.
Create a new file stepFunctionStack.ts alongside your other stacks in the lib/
directory, and populate it with the following code:

import { DeploymentStack, DeploymentStackProps } from "@amzn/pipelines";
import { DynamoAttributeValue, DynamoGetItem, DynamoPutItem, LambdaInvoke, SnsPublish } from "aws-cdk-lib/aws-stepfunctions-tasks";
import { Chain, DISCARD, JsonPath, StateMachine, Map, TaskInput, Choice, Condition, Pass } from "aws-cdk-lib/aws-stepfunctions";
import { Topic } from "aws-cdk-lib/aws-sns";
import { Table } from "aws-cdk-lib/aws-dynamodb";
import { IFunction } from 'aws-cdk-lib/aws-lambda';
import { Construct } from 'constructs';

interface StepFunctionStackProps extends DeploymentStackProps {
  readonly stage: string,
  lambdaFunction: IFunction,
  snsTopic: Topic,
  table: Table,
}

export class StepFunctionStack extends DeploymentStack {
  constructor(scope: Construct, id: string, props: StepFunctionStackProps) {
    super(scope, id, props);

    const lookupState = new DynamoGetItem(this, 'lookupState', {
      table: props.table,
      key: {
        id: DynamoAttributeValue.fromString(JsonPath.stringAt("States.Format('{}+{}', $.number1, $.number2)"))
      },
      resultPath: '$.storedValue'
    });

    const calculateState = new LambdaInvoke(this, "calculateState", {
      lambdaFunction: props.lambdaFunction,
      payload: TaskInput.fromObject({
        "x.$": "$.number1",
        "y.$": "$.number2",
      }),
      resultPath: "$.calculatedValue"
    });

    const snsState = new SnsPublish(this, 'sendMessageState', {
      message: TaskInput.fromText(
        JsonPath.stringAt("States.Format('Ding! Your calculation is ready:\n\n{} + {} = {}\n\n{}\nThanks for your business, come again soon!', $.number1, $.number2, $.result.sum, $.result.message)")
      ),
      topic: props.snsTopic,
      resultPath: '$.sendMessageResult'
    });

    const saveInDynamoDbState = new DynamoPutItem(this, 'saveInDynamoDbState', {
      table: props.table,
      item: {
        id: DynamoAttributeValue.fromString(JsonPath.stringAt("States.Format('{}+{}', $.number1, $.number2)")),
        result: DynamoAttributeValue.fromString(JsonPath.stringAt('$.calculatedValue.Payload.body')),
        createdTime: DynamoAttributeValue.fromString(JsonPath.stringAt('$$.State.EnteredTime'))
      },
      resultPath: JsonPath.DISCARD,
    });

    const passStoredValueState = new Pass(this, 'passStoredValueState', {
      parameters: {
        "sum": JsonPath.stringAt("$.storedValue.Item.result.S"),
        "message": JsonPath.stringAt("States.Format('This result was pre-computed at {}', $.storedValue.Item.createdTime.S)")
      },
      resultPath: "$.result",
    });

    const passCalculatedValueState = new Pass(this, 'passCalculatedValueState', {
      parameters: {
        "sum": JsonPath.stringAt("$.calculatedValue.Payload.body"),
        "message": JsonPath.stringAt("States.Format('This result was freshly computed just for you at {}', $$.State.EnteredTime)")
      },
      resultPath: "$.result",
    });

    const checkState = new Choice(this, 'checkState')
      .when(Condition.isPresent('$.storedValue.Item'), passStoredValueState.next(snsState))
      .otherwise(calculateState);

    const map = new Map(this, "Map State", {
      itemsPath: '$.request',
      maxConcurrency: 10
    });

    const calculatorWorkflow: Chain = calculateState.next(saveInDynamoDbState).next(passCalculatedValueState).next(snsState);
    const checkAndCalculateWorkflow: Chain = lookupState.next(checkState);

    map.iterator(checkAndCalculateWorkflow);

    const stateMachine = new StateMachine(this, 'StateMachine', {
      definition: map,
      stateMachineName: 'CalculatorWorkflowStateMachine'
    });
  }
}
Save and close the file.

Just as before, we’ll need to wire up this stack to our main application to
register it for creation and deployment in our pipeline. This time, though, we’ll
be using some outputs of our previously-created infra and service stacks as
properties (i.e., inputs) for our step function stack, so that we can reference
those resources in our workflow. In app.ts, import your new stack at the top:
import { StepFunctionStack } from "./stepFunctionStack";
And then initialize your new stack like so (be sure to replace Username with your
own user name):

const stepFunctionStack = new StepFunctionStack(app, `CdkWorkshopUsername-StepFunction-${stageName}`, {
  ...deploymentProps,
  lambdaFunction: serviceStack.lambdaFunction,
  snsTopic: infraStack.topic,
  table: infraStack.table,
  stage: stageName
});
Since we’re using properties of both serviceStack and infraStack, you’ll want to
put this initialization code below both of those stacks’ initialization, otherwise
you’ll get an error like Block-scoped variable 'serviceStack' used before its
declaration while building.

By the way, those three dots in ...deploymentProps, aren’t a typo; you should
really include them in your code. That’s the JavaScript Spread syntax.
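A minimal illustration of what the spread syntax does (the property values here are invented placeholders, not your real deployment settings): it copies every property of one object into a new object literal, alongside any extra properties you add.

```typescript
// Placeholder values for illustration only
const deploymentProps = { env: "alpha", softwareType: "INFRASTRUCTURE" };

// ...deploymentProps copies env and softwareType into the new object,
// and stage is added alongside the copied properties
const stepFunctionProps = {
  ...deploymentProps,
  stage: "alpha",
};
```

If a later key repeats one of the copied names, the later value wins, which makes spread handy for "base settings plus overrides" patterns like this one.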

And, finally, just like before you’ll want to add stepFunctionStack to the
deploymentGroup stack list so that it gets updated by your pipeline:

deploymentGroup.addStacks(infraStack, serviceStack, monitoringStack, stepFunctionStack);

Build and deploy
Let’s deploy this step function to AWS:

brazil-build
brazil-build cdk deploy CdkWorkshopUsername-StepFunction-alpha --require-approval never
We’re using --require-approval never in this command to avoid CDK prompting us to
confirm changes to the IAM Role’s policy statements, which it does for us
automatically with this change to allow AWS Step Functions to invoke your Lambda,
send SNS messages, and access DynamoDB. If you’re curious what this looks like,
just omit the flag and see what CDK does when you deploy.

Note
If you get the error The security token included in the request is expired, try to
rerun:

ada credentials update --account=Account_ID --provider=conduit --role=IibsAdminAccess-DO-NOT-DELETE --once
If you encounter any issues building or deploying, re-check your code and make sure
you’ve imported your step function stack into app.ts. You can see this commit for
all the changes necessary at this step.

If you get the error [100%] fail: spawn bats ENOENT while deploying, make sure the
BATS CLI is installed on your local system with the following command: toolbox
install batscli

If you get an error during cdk deploy about Zipping this asset produced an empty
zip file..., you can go to the CdkWorkshopUsername package and run brazil-build
release to generate a zip file. Or, refer to this Sage post to use BATS to
manually transform your Lambda. Afterwards, rerun the cdk deploy command.

After your stack successfully deploys, you should get an AWS Notification -
Subscription Confirmation email asking you to confirm that you want to subscribe to
the SNS notification topic that you just created. Click Confirm subscription to
finish subscribing to the topic so that you receive notifications in the next
section! If you didn’t receive any emails, double check that your email address is
correct in infraStack.ts; if not, fix that and redeploy your stacks.

Verify
It’s time to test our Step Function! Just like our Lambda, we can invoke this
programmatically using the AWS CLI. However, a much richer experience for
visualizing, debugging, and understanding Step Functions is to invoke them using
the browser-based AWS Console.

If you’re using a Burner Account, go to
https://iad.merlon.amazon.dev/burner-accounts and click Console Access to get to
the AWS Console.

If you’re using an Isengard account, go to the Console Access page and click on
the Admin role link.

To verify the step function:

From the AWS Console, use the region dropdown in the upper right-hand corner to
select US West (Oregon) us-west-2.

Use the Search field to find and select the Step Functions service.

From the list of State Machines, click your State Machine.

Click Start execution.

Paste the array below:

{
"request":[
{
"number1":"5",
"number2":"6"
},
{
"number1":"5",
"number2":"3"
}
]
}
Then click Start execution again.

After a few seconds the State Machine should turn green. You should also see 2
emails (one for each invocation) with the output of your computations!
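Conceptually, the Map state fans out over the $.request array and runs the lookup/calculate workflow once per item, which is why you get one email per entry. A plain-TypeScript stand-in of that fan-out (runMap is an invented name; concurrency and caching are ignored here):

```typescript
interface CalcRequest {
  number1: string;
  number2: string;
}

// Stand-in for the Map state: process each item of the request array,
// producing one result (and, in the real workflow, one email) per item
function runMap(request: CalcRequest[]): number[] {
  return request.map(r => parseFloat(r.number1) + parseFloat(r.number2));
}
```

For the sample input above ({5,6} and {5,3}), this produces the sums 11 and 8, matching the two notifications you should have received.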

Note
If you didn’t receive any emails, double check that your email address is correct
in infraStack.ts. Also verify that you have confirmed your subscription to the SNS
topic. Go to the first email you received from SNS (it should be titled AWS
Notification - Subscription Confirmation) and click the confirmation link within
it. Then re-run your step function.

So, what happened here?

We just used Step Functions to automate several decision-making and integration
processes to yield a result. You’ll notice that from your invocation, all of the
steps in the step function are green, except for passStoredValueState:

This is because for this invocation of the Step Function, we had no stored value
for this computation in DynamoDB. As a result, the checkState step at the top chose
the left-most path through the state machine, which invokes the Lambda function to
compute the value, and then stores that value in DynamoDB before sending it to you
via SNS.

Hint
Try checking out the DynamoDB console to see what data got added to your Table as a
result of this execution!

If you’re seeing where this is going, this means that the state of the step
function should be different the next time we invoke it with the same value. Go
ahead, give it a try! If you start another execution and paste the same value as
before, you’ll see a different path taken through the step function:

You should also notice that this execution goes much faster than the one before. As
it turns out, retrieving an item from DynamoDB is orders of magnitude quicker (and
cheaper) than invoking a Lambda function and waiting for its completion. This
allows us to effectively use DynamoDB as a cache to avoid expensive (both in time
and in dollars) Lambda invocations.
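The caching flow described above can be sketched in plain TypeScript. This is an illustrative cache-aside sketch with hypothetical names, not code from the workshop; the state machine implements the same flow with a DynamoDB lookup and a Lambda invocation instead of a Map and an addition:

```typescript
// Illustrative cache-aside sketch: look up the result first, and only
// compute (and store) on a miss.
const cache = new Map<string, number>();

function addWithCache(number1: number, number2: number): { value: number; fromCache: boolean } {
  const key = `${number1}+${number2}`;
  const cached = cache.get(key);
  if (cached !== undefined) {
    // Cheap path: analogous to the DynamoDB lookup branch in the state machine.
    return { value: cached, fromCache: true };
  }
  // Expensive path: analogous to invoking the Lambda function.
  const value = number1 + number2;
  cache.set(key, value); // analogous to storing the result in DynamoDB
  return { value, fromCache: false };
}

console.log(addWithCache(5, 6)); // first call computes the value
console.log(addWithCache(5, 6)); // second call hits the cache
```

The second call skips the "expensive" computation entirely, which mirrors why the second execution of the state machine finishes so much faster.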

Try out some more inputs to see what happens! You can also click on each state in
the step function diagram to see the input and the output of each state, and follow
the execution along from the beginning to end. If you play around with the inputs,
you may discover a couple of shortcomings of this state machine. If you’re up for a
challenge, see if you can modify the step function code to deal with one or more of
the following issues:

Try passing in a non-number value in your input. What happens?

Try passing in input which does not have the number1 or number2 fields at all. What
happens? Could we improve this in a way that lets the customer know they’ve done
something wrong and how to correct it?

Try passing in the reverse of your input. For example if you cached the result of 5
+ 6 before, try 6 + 5. Is the result still cached? How could we modify the DynamoDB
storage and lookup to fix this?
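For the last challenge, one possible direction (an illustrative sketch, not the workshop’s official solution) is to normalize the DynamoDB key so that operand order doesn’t matter. Since addition is commutative, sorting the operands before building the key makes 5 + 6 and 6 + 5 share a cache entry:

```typescript
// Build the cache key from the sorted operands so that reversed inputs
// map to the same DynamoDB item. Sorting numerically (not lexically)
// keeps keys like "9+10" correct.
function cacheKey(number1: string, number2: string): string {
  return [number1, number2]
    .map(Number)
    .sort((a, b) => a - b)
    .join('+');
}

console.log(cacheKey('5', '6')); // "5+6"
console.log(cacheKey('6', '5')); // "5+6" -- same key, so the result is already cached
```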

Tip
When modifying or debugging a step function, you can use the AWS Console to make
small changes until it works, instead of deploying the stack for each small change.

Step functions are a fantastic entry into the possibilities of building on AWS, and
there’s much more that can be done with them than what we covered in this lab. Here
are some resources if you want to dive deeper:

Step Function CDK Reference

Step Functions Integrations

A fantastic talk from Chris Munns, Principal Developer Advocate for Serverless, on
adding business logic to step functions: https://www.youtube.com/watch?v=c797gM0f_Pc

Example to look at for Step Functions inspiration: AudibleProductTranscriptionCDK


Push changes
When you’re satisfied with your step function changes, commit and push your
changes:

git add lib
git commit -am "Add step functions stack"
git push
Great work! You’re well on your way to becoming a CDK Wizard 🧙.

Next
Day 2: Secure your app

Day 2: Secure your app


What will you have at the end of this day?

You will have enhanced your app from Day 1 to be more secure, using best practices
of Amazon InfoSec.

What will you learn?

Amazon’s culture of security, and what mechanisms we use to build secure
applications, with a focus on motecdk.

Introduction
Here at Amazon, our job isn’t just adding pairs of numbers together and emailing
ourselves the sum. When that pair of numbers is data that our customers have
trusted us with, we have to think about how we process, store, and access that data
securely in order to maintain our customers’ trust.

Our culture of Security as Job Zero has driven us to invent mechanisms for
building, monitoring, and continuously re-evaluating our systems’ security.

Mote is an internal library of secure-by-default AWS CDK constructs that you can
add as a dependency to your CDK application and sleep peacefully at night. Mote CDK
constructs extend the standard CDK constructs and can be used interchangeably with
them. We recommend that you always prefer Mote constructs over standard CDK
constructs wherever they are available.

Note
Mote is developed for CDO (i.e., non-AWS) use cases, since it complies with that
organization’s data handling standards. In the future, Mote may be extended by AWS
security to include the data handling standards of AWS, which differ slightly. If
you’re in AWS, just be aware that we’re using Mote here as an example of how to
think about and use available tooling to create secure-by-default applications.

Refer to the Mote Workshop to learn more about Mote. Mote constructs are now
distributed via AWS CodeArtifact (so there’s no need to add an extra Brazil package
dependency!); you can also read about it from Goshawk.

Add Mote as a dependency


Update the package.json file to add motecdk as a dependency of our application.

Note
The versions of the aws-cdk-lib and @amzn/motecdk must be compatible with each
other. Use the exact versions of both packages shown below or else you may get
compilation errors.

The versions used in the example were accurate as of Mar 2024.


"dependencies": {
...
"aws-cdk-lib": "2.130.0",
"@amzn/motecdk": "~27.0.0",
...
}
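A side note on the two version specifiers above: "2.130.0" pins an exact version, while "~27.0.0" is an npm tilde range that accepts patch releases only (at least 27.0.0 and below 27.1.0). The sketch below illustrates that matching rule; npm’s real resolver handles many more cases (prerelease tags, shorthand ranges), so treat this as illustration only:

```typescript
// Illustrative check of npm's tilde-range rule for "~X.Y.Z":
// the version must keep the same major.minor and have patch >= Z.
function matchesTildeRange(range: string, version: string): boolean {
  const [rMaj, rMin, rPat] = range.replace('~', '').split('.').map(Number);
  const [vMaj, vMin, vPat] = version.split('.').map(Number);
  return vMaj === rMaj && vMin === rMin && vPat >= rPat;
}

console.log(matchesTildeRange('~27.0.0', '27.0.5')); // true: patch bump allowed
console.log(matchesTildeRange('~27.0.0', '27.1.0')); // false: minor bump rejected
```

This is why pinning aws-cdk-lib exactly while allowing motecdk patch updates can still drift into incompatibility over time; re-check the pairing if your build breaks after an install.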
Install the dependencies and build:

brazil-build app install
brazil-build clean && brazil-build
Use Mote in your code
In infraStack.ts, import the following resources at the top:

import { RemovalPolicy } from "aws-cdk-lib";
import { TableEncryption, BillingMode } from "aws-cdk-lib/aws-dynamodb";
import { SecureTable } from "@amzn/motecdk/mote-dynamodb";
import { Exempt } from "@amzn/motecdk/mote-core";
import { SecureSubscription, SecureTopic } from "@amzn/motecdk/mote-sns";
Update the table and topic to use the secure versions:

export class InfraStack extends DeploymentStack {
  public readonly table: SecureTable;
  public readonly topic: SecureTopic;
Replace the existing DynamoDB and SNS constructs with the secure versions:

// DynamoDB Setup
this.table = new SecureTable(this, 'Table', {
partitionKey: { name: 'id', type: AttributeType.STRING },
encryption: Exempt(TableEncryption.AWS_MANAGED),
pointInTimeRecovery: true,
billingMode: BillingMode.PAY_PER_REQUEST,
removalPolicy: RemovalPolicy.DESTROY,
});

// SNS Topic
this.topic = new SecureTopic(this, 'Topic', {
masterKey: Exempt('KeyLess'),
displayName: 'VerificationTopic'
});

// Subscribe to Email
const subscription = new SecureSubscription(this, 'EmailSubscription', {
topic: this.topic,
protocol: Exempt(SubscriptionProtocol.EMAIL),
endpoint: '[email protected]' // change alias
});
In serviceStack.ts, we’ll also be making a handful of changes to the resources to
secure them. Look at the commit for CdkWorkshopEricnCDK for all the relevant
changes.

Build and verify


Build to verify your changes:

brazil-build
If you encounter any issues while building, double-check that your motecdk and
aws-cdk-lib versions are compatible with each other.

Just swapping out the default CDK constructs for the Mote secure constructs in your
code yields a large number of changes that configure your resources using best
practices for secure usage. Take a look at all the differences by running:

brazil-build cdk diff


When you use secure constructs from Mote instead of the regular CDK constructs,
some optional parameters become required (for example, the encryption parameter of
our DynamoDB table). This forces us to make a decision about what the value should
be instead of silently accepting the default value, or perhaps not even being aware
that different options are available. In your diff you should be able to see all of
the following changes made to your DynamoDB table:

We’ve enabled Server-Side Encryption (SSE) on the table, which protects our data-
at-rest while being stored in the database.

We’ve enabled Point In Time Recovery (PITR) on the table, which allows us to
rollback the table to any point in time, which helps protect against data loss due
to corruption or mistaken deletions.

We’ve changed our table’s capacity from Provisioned to On-Demand, which helps
protect against availability issues due to unexpected scaling.

Since none of these options are the default for the Table construct, using Mote
helps guide us to discover and address these sometimes extremely critical but
easily-overlooked settings for production systems.
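To make the "optional becomes required" idea concrete, here is a type-level sketch in plain TypeScript. These interfaces are purely illustrative; they are not the real aws-cdk-lib or Mote prop types:

```typescript
// Hypothetical illustration of how a secure-by-default wrapper turns an
// easy-to-miss optional setting into one the compiler forces you to decide on.
interface TableProps {
  partitionKey: string;
  encryption?: string; // optional: silently defaults if you forget it
}

interface SecureTableProps extends TableProps {
  encryption: string; // required: omitting it is a compile-time error
  pointInTimeRecovery: boolean;
}

// This compiles -- the decision about encryption is explicit:
const secureProps: SecureTableProps = {
  partitionKey: 'id',
  encryption: 'AWS_MANAGED',
  pointInTimeRecovery: true,
};

// const insecureProps: SecureTableProps = { partitionKey: 'id' };
// ^ would fail to compile: 'encryption' and 'pointInTimeRecovery' are missing.
console.log(secureProps.encryption);
```

The safety here is enforced at build time, which is exactly where you want to catch a forgotten production setting.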

Commit and push


Since we’ve made many small changes across almost all of our stacks, but haven’t
made any major, meaningful changes to the behavior of our code or infrastructure,
we can rely more on CDK and TypeScript’s compile-time validation than before to
trust that our changes will deploy successfully. Therefore, instead of running
brazil-build cdk deploy ... to deploy each of these stacks’ changes one by one,
we’ll simply push our changes and allow our pipeline to deploy them for us:

git add .
git commit -m "Secure stacks using mote"
git push
Once you merge your changes, have a look at your pipeline and track the progress
until it successfully deploys to alpha. Now you’ve got a secure application running
on AWS!

Next
Day 3: Create a Development Stack

Day 3: Create a Development Stack


What will you have at the end of this day? You will refactor your code in a way
that allows you to make quicker, safer changes to your infrastructure and test them
without interrupting the live application. You will be able to make code changes
that bypass your pipeline and update a CloudFormation stack that only you can
modify.

What will you learn? What a personal bootstrap stack is, how to create one, and how
to use it.

Introduction
Remember this statement we made earlier about how to deploy your code?
Pushing to mainline without some prior validation that your changes work is usually
a recipe for failures, rollbacks, and reverts. Instead, we’ll deploy this new stack
first manually to verify it does what we want.

Your reaction to that may have been “okay.. but.. deploying manually is better?” 🤨

If you were a bit skeptical about that, you’re absolutely correct. We took the step
earlier of updating the application manually from our code changes, without
receiving CR approval or merging the changes to mainline, because this is a toy
application in our personal account that nobody else depends on. But as you may
have guessed, this is not best practice for applications that are in production or
that have customers who may be impacted by a bad code change deployed manually. In
that case, a pipeline failure or even a rollback is a much better outcome for our
customers than impacting them with manually deployed, un-reviewed code.

Additionally, in this workshop, you’re the only one working on this code package.
When there are multiple contributors making changes, every engineer deploying
manually to a shared environment may result in chaos. We need a better, safer way
to test our changes out without touching the live application. A Personal Bootstrap
Stack can help us achieve this, and is what we’ll be diving into in this lab.

The big idea here is to give you a way of testing your infrastructure end-to-end
before you create a CR and have it merged and deployed via pipelines, and leaving
the pipeline — and the pipeline only — in charge of updating the production stacks.
Essentially, we’ll be cloning our existing application into a new completely
separate stage which does not conflict with or overlap with the alpha version
already deployed into our account.

Step 1: Create the application


As you already know, the app.ts file is the entry point to your CDK application.
However, what you may not yet know is that a CDK package can contain multiple
applications which have their own entry points to define different modalities.

We’ll use this functionality to create a new CDK application which has personal
development capabilities.

To start, let’s first make a copy of our existing app.ts to a new file by running
the following command from within our CdkWorkshopUsernameCDK package:

cp lib/app.ts lib/devApp.ts
Note
If you get an error like cp: cannot stat ‘app.ts’: No such file or directory, make
sure you’re in the top-level folder of CdkWorkshopUsernameCDK (rather than lib/).
Running pwd can show you where you currently are if you’re not sure.

Now, we’ll make a few modifications to our devApp.ts to replace the pipeline
deployment environment with a personal BootstrapStack deployment environment. Refer
to CdkWorkshopEricnCDK for all the changes we’ll make to this file, and below is
what our modified devApp.ts file should look like. If you copy and paste from here,
don’t forget to replace the AWS account number, and also replace the
CdkWorkshopUsername references with your user name (but leave the other ${user}
references, as they are meant to be replaced at build time).

#!/usr/bin/env node
import { App } from 'aws-cdk-lib';
import { BootstrapStack, SoftwareType } from '@amzn/pipelines';
import { ServiceStack } from './serviceStack';
import { MonitoringStack } from './monitoringStack';
import { InfraStack } from './infraStack';
import { StepFunctionStack } from './stepFunctionStack';

// Set up your CDK App
const app = new App();

const applicationAccount = 'YOUR ACCOUNT ID HERE (same one as in app.ts)';

// Using the same account ID as the main application, but this could be
// different, for instance your personal AWS account
const devAccount = applicationAccount;
const user: string = process.env.USER!;

const bootstrapStack = BootstrapStack.personalBootstrap(app, {
  account: devAccount,
  region: 'us-west-2',
  disambiguator: user,
});

// Using the bootstrap stack's deploymentEnvironment instead of the pipeline's environment
const env = bootstrapStack.deploymentEnvironment;
const stageName = 'dev'; // stage name changed from alpha to dev

const deploymentProps = {
env,
softwareType: SoftwareType.INFRASTRUCTURE,
stage: stageName,
isProd: false,
disambiguator: user,
};

// Add "-${user}" to stack names for dev stacks
const infraStack = new InfraStack(app, `CdkWorkshopUsername-Infra-${stageName}-${user}`, deploymentProps);
const serviceStack = new ServiceStack(app, `CdkWorkshopUsername-Service-${stageName}-${user}`, deploymentProps);
const monitoringStack = new MonitoringStack(app, `CdkWorkshopUsername-Monitoring-${stageName}-${user}`, {
  ...deploymentProps,
  lambdaFunction: serviceStack.lambdaFunction,
});

const stepFunctionStack = new StepFunctionStack(app, `CdkWorkshopUsername-StepFunction-${stageName}-${user}`, {
  ...deploymentProps,
  lambdaFunction: serviceStack.lambdaFunction,
  snsTopic: infraStack.topic,
  table: infraStack.table,
});
With these changes, we’ve essentially transformed this from a regular, pipeline-
driven application into a personalized app that still uses the same stack
definitions as before. In order to switch between the regular app.ts and this new
personal development app when running commands from here on, we’ll add a couple of
helper commands (see CdkWorkshopEricnCDK) to our package.json:

"scripts": {
  ...(existing scripts here)...
  "cdk:dev": "brazil-build cdk --require-approval never -a 'npx ts-node lib/devApp.ts'",
  "bootstrap:dev": "npm run cdk:dev deploy BONESBootstrap-*",
  "deploy:dev": "npm run cdk:dev deploy *-${USER}"
},
Note
When we use a build system (such as NPM, which we’re using here, or Gradle, which
your Java Lambda is using) in the Brazil ecosystem, running brazil-build (either
alone or with additional arguments) actually invokes a build target in the
underlying build system, which you can define and customize as we are doing here.
To run a custom script with the NPM build system, we need to run it with
brazil-build app script-name.

For more information about build targets in Brazil, see Build Targets.

As you can see, the cdk:dev build target overrides the application to be used from
the regular lib/app.ts to our new lib/devApp.ts. This means that we should be able
to run the following command to see our new application’s output:

brazil-build app cdk:dev ls


Note
If you get:

error TS2322: Type 'string | undefined' is not assignable to type 'string'.
  Type 'undefined' is not assignable to type 'string'.
const user: string = process.env.USER;

Try running: kinit && mwinit

We should get:

BONESBootstrap-username-11122223334-us-west-2
CdkWorkshopUsername-Infra-dev-username
CdkWorkshopUsername-Service-dev-username
CdkWorkshopUsername-Monitoring-dev-username
CdkWorkshopUsername-StepFunction-dev-username
Sweet! It looks like we’ve successfully changed our application code to create new
stacks under the dev stage that are exclusive to ourselves, as indicated by the -
dev-username suffix.

If you get stuck or have issues building, see the CdkWorkshopEricnCDK example to
see all the changes you should have made in this section.
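As an aside on the TS2322 note above: process.env values are typed string | undefined, which is why assigning one directly to a string fails to compile. The workshop code sidesteps this with the non-null assertion (process.env.USER!); a more defensive alternative (purely a sketch, using a hypothetical variable name) is an explicit runtime check:

```typescript
// Defensive alternative to `process.env.USER!`: fail fast with a clear
// message if the variable is missing instead of asserting it exists.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Environment variable ${name} is not set`);
  }
  return value;
}

process.env.DEMO_USER = 'jdoe'; // simulated value for this example
const user: string = requireEnv('DEMO_USER');
console.log(user);
```

With this approach a missing variable produces an actionable error at startup rather than an undefined value leaking into stack names.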

Step 2: Bootstrap and deploy


As before, we’ll need to bootstrap our new deployment environment prior to
deploying manual changes by running:

brazil-build app bootstrap:dev


Note
If you get an error while executing this step like:

Bucket cannot have ACLs set with ObjectOwnership's BucketOwnerEnforced setting
(Service: Amazon S3; Status Code: 400; Error Code:
InvalidBucketAclWithObjectOwnership; Request ID

then please upgrade your @amzn/pipelines dependency in package.json to ^4.0.50 to
comply with the new S3 default security settings (thanks to prjaga@ and longfeiw@
for this solution).

Let’s now try to deploy these dev stacks by running:

brazil-build app deploy:dev


After a while, we should get a failure similar to:

CREATE_FAILED | AWS::Lambda::Function | CdkWorkshopUsernameABCDEFG123
CdkWorkshopUsername already exists in stack
arn:aws:cloudformation:us-west-2:588073197317:stack/CdkWorkshopUsername-Service-alpha/702cbd30-227d-11ed-afd5-0a4e9132cc99
Hmm, what happened here? Before panicking, let’s have a look at the error message
and see if we can sort out what went wrong here. The error message we got is saying
a resource of type AWS::Lambda::Function with the name CdkWorkshopUsername already
exists in our alpha stack from earlier.

So why would that be the case? Wasn’t the whole point of this lab to create a
completely separate stack for ourselves which didn’t overlap at all with the
pipeline’s resources?

As it turns out, some AWS resources enforce uniqueness of resource names and do not
allow duplicates to exist in the same account at the same time. Usually this is the
case when the name of the resource is one of its key identifying characteristics,
rather than, say, a randomly-generated ID. Some examples of this are:

Amazon S3 - S3 Bucket resources (CDK: Bucket, Cfn: AWS::S3::Bucket) are globally
unique. Only one AWS account can have a single bucket with a given name.

AWS IAM - IAM User and Role resources (CDK: User / Role, Cfn: AWS::IAM::User /
AWS::IAM::Role) are account-unique. Every AWS account can have a User or Role with
the same name, but can only ever have one of these resources across all regions the
AWS account uses.

AWS Lambda - Lambda Function resources (CDK: Function, Cfn: AWS::Lambda::Function)
are account-and-region unique. Every AWS account can have a Function with the same
name, but cannot have two Functions with the same name within the same AWS Region.
Duplicate Function names are allowed in different regions (for instance, creating a
Function named MyFunction in both us-east-1 / IAD and us-west-2 / PDX is allowed).

Since Lambda Functions are one of the resources whose names must be unique within
an account and region, our current strategy of hardcoding the function name in the
CDK code won’t work when we want to create a copy of this function in a different
stack. For other resources where we didn’t hardcode a name (either because the
resource doesn’t support a name at all, or because we let CDK fill in the name with
a randomly-generated value), we don’t need to do anything here.
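The disambiguation pattern used in the next step boils down to a one-line naming helper. This is an illustrative sketch (the workshop inlines the expression rather than extracting a helper function), with hypothetical names:

```typescript
// Append an optional disambiguator (e.g. your username) to a base resource
// name; with no disambiguator, the name is unchanged for pipeline stacks.
function resourceName(base: string, disambiguator?: string): string {
  return `${base}${disambiguator ? `-${disambiguator}` : ''}`;
}

console.log(resourceName('CdkWorkshopAlias'));         // pipeline stacks keep the plain name
console.log(resourceName('CdkWorkshopAlias', 'jdoe')); // dev stacks get a unique suffix
```

Because the suffix is derived from the deployer's username, every engineer's dev stack gets resource names that cannot collide with the pipeline's or with each other's.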

Step 3: Make the resources unique


We’ll accomplish this unique naming using a disambiguator, which is just a fancy
way of saying a unique string value.

Here is the commit for all files we are changing in this section.

Here are the important changes in that commit:

We pass our username as the unique string (disambiguator) to all of our service
stacks’ deployment properties from our dev stacks (devApp.ts).

We are making our Lambda name unique: we add the readonly disambiguator?: string;
parameter to the ServiceStackProps interface, then use this parameter in the name
of our Lambda (serviceStack.ts):

functionName: `CdkWorkshopalias${props.disambiguator ? `-${props.disambiguator}` : ''}`
Similarly, we are making our state machine unique (stepFunctionStack.ts).

Similarly, we are making our CloudWatch dashboard names unique
(monitoringStack.ts).

Note
Be careful to replace the single quotes ( ' ) with backticks ( ` ) for any names
using the disambiguator variable. JavaScript does not perform string interpolation
(see https://stackoverflow.com/a/44886742) unless the string is quoted with
backticks, so if you don’t use them you’ll end up literally defining a function
named CdkWorkshopalias${props.disambiguator... :/
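The pitfall from the note is easy to see in two lines (illustrative values, not workshop code):

```typescript
// Single quotes do NOT interpolate, so the placeholder ends up literally
// in the resource name; backticks (template literals) interpolate.
const disambiguator = 'jdoe';

const broken = 'CdkWorkshopAlias-${disambiguator}';  // literal "${disambiguator}" in the name
const correct = `CdkWorkshopAlias-${disambiguator}`; // interpolates to "...-jdoe"

console.log(broken);
console.log(correct);
```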

Step 4: Re-deploy
Once you’ve made all the disambiguator changes to generate unique resource names in
the dev stack, build your changes and make sure they succeed:

brazil-build clean && brazil-build


If you run into a compilation issue, be sure to check the example commit and make
sure you made all the required changes.

Once everything’s working, let’s get back to what we were doing before and
deploying this new stack:

brazil-build app deploy:dev


You will now receive the following error:

CdkWorkshopUsername-Service-dev-username failed: Error [ValidationError]: Stack
[CdkWorkshopUsername-Service-dev-username] cannot be deleted while
TerminationProtection is enabled
Huh?? I thought we were supposed to have fixed all the issues. And anyways, wasn’t
I trying to create a stack? Why would it be trying to delete this stack?

Well, yeah, we have fixed the issues with regard to resource naming collisions, but
unfortunately we’ve hit another sharp edge of deploying CloudFormation. When you
try to deploy a CloudFormation stack for the first time (like how we tried to
deploy our personal service stack earlier) and something goes wrong, CloudFormation
will roll back the changes by undoing everything it tried to do, at which point it
goes into a ROLLBACK_COMPLETE state. The only thing remaining at this point is the
empty stack itself, which sticks around so that you can inspect the event log to
see what happened and fix it (what we just did).

However, once you’ve figured out and fixed the problem, it’s not possible to update
this same CloudFormation stack again. You instead need to delete and re-create it
from scratch. This is what CDK is trying to do for you. However, Termination
Protection is a safety feature of CloudFormation which is enabled on your
CloudFormation stacks by default. You can disable the termination protection by
running the following commands:

aws --region us-west-2 cloudformation update-termination-protection \
  --no-enable-termination-protection --stack-name CdkWorkshopUsername-Service-dev-username

aws --region us-west-2 cloudformation update-termination-protection \
  --no-enable-termination-protection --stack-name CdkWorkshopUsername-Infra-dev-username

aws --region us-west-2 cloudformation update-termination-protection \
  --no-enable-termination-protection --stack-name CdkWorkshopUsername-Monitoring-dev-username
This frees CDK up to delete all our failed development stacks so that we can try
again, now that we’ve fixed the issue. This is exactly the type of destructive,
ad-hoc action we set our personal stack up to allow, without touching the main
application.

Once that’s done, let’s re-deploy and we should expect success:

brazil-build app deploy:dev


If you don’t see success for some reason, you might have missed a disambiguation.
Have a look at the example commit which shows all the changes for this section
together. Once you’ve found and fixed your issue, you already know what to do next!

Step 5: Verify
If all went well, you can log in to the AWS Console and verify that you have four
CloudFormation stacks with -dev-username in their names that exactly mirror the
-alpha stacks. These make up your personal development environment, which you can
change, test with, tear down, and re-create without any fear of impacting others.

Step 6: Deploy to the stack


Want to try this out? Let’s make a small change to our application and then test it
in our dev stack before pushing it to mainline and deploying to alpha. We will
modify the name of a CloudWatch dashboard.

Find the following line in your monitoringStack.ts file:

markdown: `# Summary dashboard`,


and change it to:

markdown: `# CDK Workshop Calculator Summary Dashboard`,


Now, let’s deploy our changes to our dev stack:

brazil-build app deploy:dev


After this is complete, verify that your CloudWatch dashboard name was changed:
Open the AWS Console and search for CloudWatch in us-west-2 (Oregon). Click
Dashboards and you should see two Summary dashboards - one named
CdkWorkshopUsername-Summary, which is our live dashboard from our alpha
application, and one named CdkWorkshopUsername-Summary-dev-username, which is our
personal development version. Compare the personal development dashboard to the
main dashboard and you should see the change that you made locally.

Note about CloudWatch


We haven’t talked much about the monitoring stack yet in this workshop, but this
stack contains CloudWatch resources, including alarms and dashboards that you can
use to monitor the health of your live application. There’s so much to learn about
best practices for monitoring at Amazon, if you’re interested in learning more
about this often-overlooked subject we hope you’ll check out this excellent
Principals of Amazon talk from dyanacek@ (Senior Principal Engineer in AWS Lambda)
on the subject.

Note about brazil-build app deploy:dev:


This command also deploys the other dev stacks, even though we haven’t made changes
to them. You might have noticed that deploying the Service stack seems to take
quite a while every time we do it, particularly when updating the
AWS::Lambda::Alias resource. That’s because a Lambda Alias is a resource that
controls customer traffic to your service, shifting customers from one version of
your code to another over a configurable period of time, so updating it is actually
more like a deployment than an update and takes a lot longer.

To make things faster, we can add another custom script, deploy-only:dev:

"deploy-only:dev": "brazil-build cdk --require-approval never -a 'npx ts-node lib/devApp.ts' deploy --exclusively"
Then run this command instead of the one above. The --exclusively flag disables
CDK’s default behavior of also deploying the stacks that the target stack depends
on, so only the specified stack is deployed; that’s useful in situations like this,
where we know the dependencies don’t need to be updated first:

brazil-build app deploy-only:dev CdkWorkshopUsername-Monitoring-dev-username


Step 7: Commit and push
You are doing great! You successfully tested a real change by deploying it to a
personal dev stack without impacting our actual application. Now you’re ready to
push this change through your pipeline:

git add .
git commit -am "Add personal development app"
git push
After your code has moved through the pipeline, everything should be green.

Step 8: Recap
Our goal in this section was to set up a duplicate service under our personal
namespace that doesn’t interfere with the pipeline and its alpha stacks. However,
we ran into some issues that ended up teaching us a bit about how CDK works under
the hood, some of its limitations, and how to resolve them. Let’s take a trip down
memory lane:

We introduced a new application which we can invoke with custom scripts in
package.json, which translate to build targets in Brazil. This helps us keep the
logic for deploying to a personal bootstrap stack separate from the logic for
deploying to the pipeline, and avoids complicating the codebase.

We resolved resource name conflicts by adding a disambiguator, which happens to be
our user name in this case.

We learned how CloudFormation behaves when a new stack rolls back due to an issue.

We learned about CloudFormation Termination Protection, a safety feature which is
enabled by default for stacks created with CDK, and how to disable it when needed.

This is the set of problems we encountered during this lab and the solutions for
them we decided to use. These are not the only ways of solving these problems. For
instance, instead of disambiguating resources, we could simply deploy our dev app
to a different AWS account than our main app by changing the account ID in that
file (in fact, this is usually best practice for a real application). Or, we could
have changed the AWS region to which the dev stack deploys, since most of the
resources we needed to change are unique by account and region. In the real world,
you will have to weigh these options and the downsides/benefits of each to
determine what’s right for your use case.

Next
Day 4: Test your app

Day 4: Test your app


What will you have at the end of this day? You will add fine-grained tests to your
app.
What will you learn? How to use the Jest Framework to test your infrastructure
using fine-grained tests.

Introduction
In this part you will learn about how to test CDK constructs as part of your build.
In Day 3, you learned about Personal Bootstrap stack, which is also related to
testing. The difference here is that with Personal Bootstrap stacks you get to test
your CDK application end to end, but in this section we are focusing on testing
constructs and stacks in build time. It might help to think of the distinction as
difference between unit tests and integration/end-to-end tests. We will be demoing
Fine-grained assertions tests using a popular testing frameworks called Jest . If
you want to get more context and learn more, see Testing constructs in the AWS
Cloud Development Kit Developer Guide.

Step 1: Add fine-grained assertions tests


Fine-grained assertions test specific aspects of the generated AWS CloudFormation
template, such as "the resource with ID Table must have a property named
BillingMode with the value PAY_PER_REQUEST".

To start out writing tests, let’s create these files within the /test directory
(where our existing .ts test file is):

touch test/infra.test.ts
touch test/pipeline.test.ts
Here’s an example test written for the resources generated in our infra stack:

import { Template } from 'aws-cdk-lib/assertions';
import { DeploymentStack } from '@amzn/pipelines';
import { app } from '../lib/app';

// Assertion tests
test('create expected Infra Resources', () => {
  const infraStack = app.node.findChild('CdkWorkshopUsername-Infra-alpha') as DeploymentStack;
  const template = Template.fromStack(infraStack);
  template.hasResourceProperties('AWS::DynamoDB::Table', {
    BillingMode: 'PAY_PER_REQUEST',
  });

  // TODO can we test other resources' properties? Try on your own:
  // TODO Assert that the AWS::SNS::Topic resource has a DisplayName property with value "VerificationTopic"
  // TODO Assert that the AWS::SNS::Subscription resource has an Endpoint property with value "<alias>@amazon.com"
});
Try adding more assertion tests against resources in the infra stack on your own.

We can also add specific assertions against the CDK pipeline. These ensure that the
stages are created exactly where needed, with the expected names, and so on.

import {
  DeploymentGroupCfnSubTarget,
  DeploymentGroupTarget,
  DeploymentPipeline,
  Stage as PipelineStage,
} from '@amzn/pipelines';
import { app, applicationAccount } from '../lib/app';

const testStages = (pipelineStage: PipelineStage): void => {
  const stageAccountMap = new Map<string, Array<string>>();
  stageAccountMap.set('alpha', [
    `arn:aws:cloudformation:us-west-2:${applicationAccount}:stack/CdkWorkshopUsername-Infra-alpha`,
    `arn:aws:cloudformation:us-west-2:${applicationAccount}:stack/CdkWorkshopUsername-Service-alpha`,
    `arn:aws:cloudformation:us-west-2:${applicationAccount}:stack/CdkWorkshopUsername-StepFunction-alpha`,
    `arn:aws:cloudformation:us-west-2:${applicationAccount}:stack/CdkWorkshopUsername-Monitoring-alpha`,
  ]);

  stageAccountMap.set('Pipeline', [
    `arn:aws:cloudformation:us-east-1:${applicationAccount}:stack/Pipeline-CdkWorkshopUsername`,
  ]);

  pipelineStage?.targets.forEach((target) => {
    const dgGrpTarget = target as DeploymentGroupTarget;
    const actualPipelineStageArns = dgGrpTarget.subTargets.map((subTarget) => {
      const dgCfnSubTarget: DeploymentGroupCfnSubTarget = subTarget as DeploymentGroupCfnSubTarget;
      return dgCfnSubTarget.stack.toStackArn;
    });

    expect(actualPipelineStageArns.sort()).toEqual(stageAccountMap.get(pipelineStage.name)!.sort());
  });
};

test('create expected stages', () => {
  const pipeline = app.node.findChild('Pipeline') as DeploymentPipeline;
  const versionSetStage = pipeline.versionSetStage;
  const packagingStage = pipeline.getStage('Packaging');
  const pipelineStage = pipeline.getStage('Pipeline');
  const alphaStg = pipeline.getStage('alpha');

  // Ensure each stage is defined
  expect(versionSetStage).toBeDefined();
  expect(packagingStage).toBeDefined();
  expect(pipelineStage).toBeDefined();
  expect(alphaStg).toBeDefined();

  // Ensure Approval Workflows are attached to correct stages
  expect(alphaStg?.getApprovalWorkflow('Approval Workflow')).toBeDefined();

  // Ensure Pipeline is in correct order
  expect(versionSetStage.nextStage).toEqual(packagingStage);
  expect(packagingStage!.nextStage).toEqual(pipelineStage);
  expect(pipelineStage!.nextStage).toEqual(alphaStg);

  testStages(pipelineStage!);
  testStages(alphaStg!);
});
You will also need to expose the app and applicationAccount variables for testing
from your app.ts file by exporting them:

// Set up your CDK App
export const app = new App();

export const applicationAccount = 'YOUR_AWS_ACCOUNT_ID';


Take a look at the example commit for all the code changes in this section.

Step 2: Verify
Run the tests with brazil-build:

brazil-build
The build should succeed and you should see something like:

> jest

PASS test/pipeline.test.ts (7.256 s)
PASS test/serviceStack.test.ts (7.417 s)
PASS test/infra.test.ts (7.653 s)

Test Suites: 3 passed, 3 total
Tests:       3 passed, 3 total
Snapshots:   0 total
Time:        9.951 s
If all looks good, feel free to commit and push your changes.

Congratulations!🤘

Not only have you written infrastructure-as-code using CDK, you have also tested
your infrastructure’s correctness even before it deploys. Writing and committing
these tests will ensure the quality and correctness of your code is maintained even
as changes are made to it over time.


Conclusion
Cleanup
It is important to clean up the artifacts you created while working through the
workshop. This helps to keep the tools and workflow tidy. You will need to clean up
the:

Burner account

Pipeline

Version set and group

Packages

You will be using BuilderHub Create to clean up your application.

Go to the My Applications page.

Find your CDKWorkshop application, named CDKWorkshopUsername.

Click Start Cleanup in the right column; you will be redirected to the cleanup
page. On that page, follow the onscreen instructions for cleaning up the application
resources. Start by granting permissions, and once that is done, click the Start
Cleanup button. Wrap up by cleaning up the burner account.

Join the Slack channel

If you want to stay up to date with the technology landscape and help others, join
the #ce Slack channel.

What’s next
With the help of CDK, the power of infrastructure as code is at your fingertips and
the possibilities are endless! If you want to learn more about using CDK internally
at Amazon, here are some packages we recommend checking out, along with example CDK
pipelines that show how teams use these packages to build applications that run in
production.

Noteworthy construct libraries

BONESConstructs – learn about various ways you can enhance your pipeline
(e.g. add more stages, set up rollback monitors, add automated time window
blockers, etc.)

PipelinesConstructs – the package README is excellent documentation on pipelines

CDK constructs

MoteCDK – secure-by-default AWS CDK constructs. Please use them whenever possible.

HCMoteConstructs – use this on top of MoteCDK. Use mote-constructs-cloudauth to
automate creation of CloudAuth resources and use mote-constructs-internal to help
establish PrivateLinks.

MonitoringCDKConstructs – has higher-level monitoring-related constructs compared
to vanilla CDK and will make your life easier when setting up dashboards and
ticketing across all your alarms.

SuperStarProvisionerCDKConstructs – a game changer when it comes to establishing
connectivity from your service to a dependency. Currently supported scenarios
include NAWS to NAWS and NAWS to MAWS. If your use case is MAWS to NAWS, use
SecurePrivatelinkEgress from mote-constructs-internal. Follow the Superstar CDK
Guide to get started. Also check out our Client Connectivity Guide, which provides
more step-by-step instructions on establishing connectivity from your NAWS
service to another service in MAWS or NAWS.

Example CDK pipelines to look at

AudibleAdCuePointGeneration – a Lambda application that establishes connectivity
with Sable and other internal MAWS services. Check out its CDK package for examples
of the personal bootstrap stacks we talked about earlier in this workshop, using
MoteCDK, MonitoringCDKConstructs, and SuperStarProvisionerCDKConstructs, as well as
snapshot testing.

AudibleAdDeliveryService – an ECS Fargate Coral service whose clients are in MAWS
(i.e. you can find an example of establishing MAWS to NAWS connectivity). Check out
its CDK package for creating a multi-region, multi-stage CDK pipeline, setting up
OneBox, load and canary testing with Hydra as part of pipeline approval workflows,
and snapshot and fine-grained assertion testing.
