CIS Amazon Web Services Foundations Benchmark
v4.0.1 - 12-11-2024
For information on referencing and/or citing CIS Benchmarks in 3rd party documentation
(including using portions of Benchmark Recommendations) please contact CIS Legal
([email protected]) and request guidance on copyright usage.
NOTE: It is NEVER acceptable to host a CIS Benchmark in ANY format (PDF, etc.)
on a 3rd party (non-CIS owned) site.
These tools make the hardening process much more scalable for large numbers of
systems and applications.
NOTE: Some tooling focuses only on the Benchmark Recommendations that can
be fully automated (skipping ones marked Manual). It is important that ALL
Recommendations (Automated and Manual) be addressed since all are
important for properly securing systems and are typically in scope for
audits.
Key Stakeholders
Cybersecurity is a collaborative effort, and cross-functional cooperation is imperative
within an organization to discuss, test, and deploy Benchmarks in an effective and
efficient way. The Benchmarks are developed to be best practice configuration
guidelines applicable to a wide range of use cases. In some organizations, exceptions
to specific Recommendations will be needed, and this team should work to prioritize the
problematic Recommendations based on several factors like risk, time, cost, and labor.
These exceptions should be properly categorized and documented for auditing
purposes.
• Use the most recent version of a Benchmark: This is true for all Benchmarks,
but especially true for cloud technologies. Cloud technologies change frequently,
and an older version of a Benchmark may contain audit and remediation methods
that are no longer valid.
Exceptions
The guidance items in the Benchmarks are called recommendations and not
requirements, and exceptions to some of them are expected and acceptable. The
Benchmarks strive to be a secure baseline, or starting point, for a specific technology,
with known issues identified during Benchmark development documented in the
Impact section of each Recommendation. In addition, organizational requirements,
system-specific requirements, or local site policy may require changes, or an exception
to a Recommendation or group of Recommendations (e.g., a Benchmark may recommend
that a web server not be installed on the system, but if a system's primary purpose is to
function as a web server, there should be a documented exception to this
Recommendation for that specific server).
It is the responsibility of the organization to determine its overall security policy, and
which settings are applicable to its unique needs based on the overall risk profile for
the organization.
NOTE: As previously stated, the PDF versions of the CIS Benchmarks™ are
available for free, non-commercial use on the CIS Website. All other formats
of the CIS Benchmarks™ (MS Word, Excel, and Build Kits) are available for
CIS SecureSuite® members.
Intended Audience
This document is intended for system and application administrators, security
specialists, auditors, help desk, platform deployment, and/or DevOps personnel who
plan to develop, deploy, assess, or secure solutions in Amazon Web Services.
Title
Concise description for the recommendation's intended configuration.
Assessment Status
An assessment status is included for every recommendation. The assessment status
indicates whether the given recommendation can be automated or requires manual
steps to implement. Both statuses are equally important and are determined and
supported as defined below:
Automated
Represents recommendations for which assessment of a technical control can be fully
automated and validated to a pass/fail state. Recommendations will include the
necessary information to implement automation.
Manual
Represents recommendations for which assessment of a technical control cannot be
fully automated and requires all or some manual steps to validate that the configured
state is set as expected. The expected state can vary depending on the environment.
Profile
A collection of recommendations for securing a technology or a supporting platform.
Most benchmarks include at least a Level 1 and Level 2 Profile. Level 2 extends Level 1
recommendations and is not a standalone profile. The Profile Definitions section in the
benchmark provides the definitions as they pertain to the recommendations included for
the technology.
Description
Detailed information pertaining to the setting with which the recommendation is
concerned. In some cases, the description will include the recommended value.
Rationale Statement
Detailed reasoning for the recommendation to provide the user a clear and concise
understanding of the importance of the recommendation.
Audit Procedure
Systematic instructions for determining if the target system complies with the
recommendation.
Remediation Procedure
Systematic instructions for applying recommendations to the target system to bring it
into compliance according to the recommendation.
Default Value
The default value for the given setting in this recommendation, if known. If not known,
it is listed as either not configured or not defined.
References
Additional documentation relative to the recommendation.
Additional Information
Supplementary information that does not correspond to any other field but may be
useful to the user.
• Level 1
• Level 2
This profile extends the "Level 1" profile. Items in this profile exhibit one or more
of the following characteristics:
o are intended for environments or use cases where security is more critical
than manageability and usability
o act as a defense-in-depth measure
o may impact the utility or performance of the technology
o may include additional licensing, cost, or the addition of third-party software
Contributor
Amol Pathak
Rob Witoff
John Martinez
Darwin Sanoy
Ionut Dragoi
John Robel
Mike Wicks
Aditi Sahasrabudhe
Parag Patil
Pradeep R B
Jeremy Phillips
Maril Vernon
Paul Campbell
Ankit Rao
Steve Laino
Lawrence Sica
Nick Gibbon
Lewis Hardy
Logan McMillan
Darren Joyce
Bhushan Bhat
Ian McRee
Jason Kao
Cody Bruno
Lawrence Grim
SnowWolf Wagner
Gareth Boyes
Chantel Duckworth
Austin Songer
Zan Liffick
Editor
Iben Rodriguez
Gregory Carpenter
Zan Liffick
Rachel Rice
• Level 1
Description:
Ensure contact email and telephone details for AWS accounts are current and map to
more than one individual in your organization.
An AWS account supports a number of contact details, and AWS will use these to
contact the account owner if activity judged to be in breach of the Acceptable Use Policy
or indicative of a likely security compromise is observed by the AWS Abuse team.
Contact details should not be for a single individual, as circumstances may arise where
that individual is unavailable. Email contact details should point to a mail alias which
forwards email to multiple individuals within the organization; where feasible, phone
contact details should point to a PABX hunt group or other call-forwarding system.
Rationale:
If an AWS account is observed to be behaving in a prohibited or suspicious manner,
AWS will attempt to contact the account owner by email and phone using the contact
details listed. If this is unsuccessful and the account behavior needs urgent mitigation,
proactive measures may be taken, including throttling of traffic between the account
exhibiting suspicious behavior and the AWS API endpoints and the Internet. This will
result in impaired service to and from the account in question, so it is in both the
customers' and AWS's best interests that prompt contact can be established. This is
best achieved by setting AWS account contact details to point to resources which have
multiple individuals as recipients, such as email aliases and PABX hunt groups.
Audit:
This activity can only be performed via the AWS Console, with a user who has
permission to read and write Billing information (aws-portal:*Billing).
1. Sign in to the AWS Management Console and open the Billing and Cost
Management console at https://console.aws.amazon.com/billing/home#/.
2. On the navigation bar, choose your account name, and then choose Account.
3. On the Account Settings page, review and verify the current details.
4. Under Contact Information, review and verify the current details.
Remediation:
This activity can only be performed via the AWS Console, with a user who has
permission to read and write Billing information (aws-portal:*Billing).
References:
1. https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/manage-account-payment.html#contact-info
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
AWS provides customers with the option of specifying the contact information for the
account's security team. It is recommended that this information be provided.
Rationale:
Specifying security-specific contact information will help ensure that security advisories
sent by AWS reach the team in your organization that is best equipped to respond to
them.
Audit:
Perform the following to determine if security contact information is present:
From Console:
1. Click on your account name at the top right corner of the console
2. From the drop-down menu Click My Account
3. Scroll down to the Alternate Contacts section
4. Ensure contact information is specified in the Security section
Remediation:
Perform the following to establish security contact information:
From Console:
1. Click on your account name at the top right corner of the console.
2. From the drop-down menu Click My Account
3. Scroll down to the Alternate Contacts section
4. Enter contact information in the Security section
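The security contact can also be verified and set from the command line. A minimal sketch using the AWS CLI v2 Account API (assumes credentials with account:GetAlternateContact and account:PutAlternateContact permissions; the e-mail, name, phone, and title values are placeholders):
# Verify that a security contact is registered for the account
aws account get-alternate-contact --alternate-contact-type SECURITY
# Register or update the security contact (placeholder values)
aws account put-alternate-contact --alternate-contact-type SECURITY --email-address [email protected] --name "Security Team" --phone-number "+1-555-0100" --title "Security Contact"
If no alternate contact is configured, the get call fails with a ResourceNotFoundException, which should be treated as a finding for this recommendation.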
References:
1. CCE-79200-2
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
The AWS support portal allows account owners to establish security questions that can
be used to authenticate individuals calling AWS customer service for support. It is
recommended that security questions be established.
Rationale:
When creating a new AWS account, a default super user is automatically created. This
account is referred to as the 'root user' or 'root' account. It is recommended that the use
of this account be limited and highly controlled. During events in which the 'root'
password is no longer accessible or the MFA token associated with 'root' is
lost/destroyed it is possible, through authentication using secret questions and
associated answers, to recover 'root' user login access.
Audit:
From Console:
Remediation:
From Console:
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
The 'root' user account is the most privileged user in an AWS account. AWS Access
Keys provide programmatic access to a given AWS account. It is recommended that all
access keys associated with the 'root' user account be deleted.
Rationale:
Deleting access keys associated with the 'root' user account limits vectors by which the
account can be compromised. Additionally, deleting the 'root' access keys encourages
the creation and use of role based accounts that are least privileged.
Audit:
Perform the following to determine if the 'root' user account has access keys:
From Console:
1. Sign in to the AWS Management Console as 'root' and open the IAM console at
https://console.aws.amazon.com/iam/.
2. Click on <root_account> at the top right and select My Security Credentials
from the drop down list.
3. On the pop out screen Click on Continue to Security Credentials.
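The presence of 'root' access keys can also be checked from the command line. A minimal sketch (assumes the AWS CLI is configured with credentials permitted to call iam:GetAccountSummary and iam:GenerateCredentialReport):
# A non-zero AccountAccessKeysPresent value indicates the 'root' user has access keys
aws iam get-account-summary | grep "AccountAccessKeysPresent"
# Alternatively, generate the credential report and inspect the access_key_1_active and access_key_2_active fields of the <root_account> row
aws iam generate-credential-report
aws iam get-credential-report --query 'Content' --output text | base64 --decode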
Note: While a key can be made inactive, this inactive key will still show up in the CLI
command from the audit procedure, and may lead to the root user being falsely flagged
as being non-compliant.
References:
1. http://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
2. http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html
3. http://docs.aws.amazon.com/IAM/latest/APIReference/API_GetAccountSummary.html
4. CCE-78910-7
5. https://aws.amazon.com/blogs/security/an-easier-way-to-determine-the-presence-of-aws-account-access-keys/
Additional Information:
• The IAM user account 'root' for us-gov cloud regions is not enabled by default.
However, on request, AWS support can enable 'root' access only through access
keys (CLI, API methods) for us-gov cloud regions.
• Implement regular checks and alerts for any creation of new root access keys to
promptly address any unauthorized or accidental creation.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
The 'root' user account is the most privileged user in an AWS account. Multi-factor
Authentication (MFA) adds an extra layer of protection on top of a username and
password. With MFA enabled, when a user signs in to an AWS website, they will be
prompted for their username and password as well as for an authentication code from
their AWS MFA device.
Note: When virtual MFA is used for 'root' accounts, it is recommended that the device
used is NOT a personal device, but rather a dedicated mobile device (tablet or phone)
that is kept charged and secured, independent of any individual personal devices ("non-
personal virtual MFA"). This lessens the risks of losing access to the MFA due to device
loss, device trade-in, or if the individual owning the device is no longer employed at the
company.
Rationale:
Enabling MFA provides increased security for console access as it requires the
authenticating principal to possess a device that emits a time-sensitive key and have
knowledge of a credential.
Audit:
Perform the following to determine if the 'root' user account has MFA setup:
From Console:
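Whether MFA is enabled for the 'root' user can also be determined from the command line. A minimal sketch (assumes the AWS CLI is configured with credentials permitted to call iam:GetAccountSummary):
# A value of 1 for AccountMFAEnabled indicates MFA is enabled for the 'root' user
aws iam get-account-summary | grep "AccountMFAEnabled"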
Remediation:
Note: To manage MFA devices for the 'root' AWS account, you must use your 'root'
account credentials to sign in to AWS. You cannot manage MFA devices for the 'root'
account using other credentials.
Perform the following to establish MFA for the 'root' user account:
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. Choose Dashboard , and under Security Status , expand Activate MFA on
your root account.
3. Choose Activate MFA
4. In the wizard, choose A virtual MFA device and then choose Next Step .
5. IAM generates and displays configuration information for the virtual MFA device,
including a QR code graphic. The graphic is a representation of the 'secret
configuration key' that is available for manual entry on devices that do not
support QR codes.
6. Open your virtual MFA application. (For a list of apps that you can use for hosting
virtual MFA devices, see Virtual MFA Applications.) If the virtual MFA application
supports multiple accounts (multiple virtual MFA devices), choose the option to
create a new account (a new virtual MFA device).
7. Determine whether the MFA app supports QR codes, and then do one of the
following:
o Use the app to scan the QR code. For example, you might choose the
camera icon or choose an option similar to Scan code, and then use the
device's camera to scan the code.
o In the Manage MFA Device wizard, choose Show secret key for manual
configuration, and then type the secret configuration key into your MFA
application.
When you are finished, the virtual MFA device starts generating one-time passwords.
In the Manage MFA Device wizard, in the Authentication Code 1 box, type the one-time
password that currently appears in the virtual MFA device. Wait up to 30 seconds for
the device to generate a new one-time password. Then type the second one-time
password into the Authentication Code 2 box. Choose Assign Virtual MFA.
References:
1. CCE-78911-5
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html#id_root-user_manage_mfa
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html#enable-virt-mfa-for-root
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 2
Description:
The 'root' user account is the most privileged user in an AWS account. MFA adds an
extra layer of protection on top of a user name and password. With MFA enabled, when
a user signs in to an AWS website, they will be prompted for their user name and
password as well as for an authentication code from their AWS MFA device. For Level
2, it is recommended that the 'root' user account be protected with a hardware MFA.
Rationale:
A hardware MFA has a smaller attack surface than a virtual MFA. For example, a
hardware MFA does not suffer the attack surface introduced by the mobile smartphone
on which a virtual MFA resides.
Note: Using hardware MFA for numerous AWS accounts may create a logistical device
management issue. If this is the case, consider implementing this Level 2
recommendation selectively for the highest security AWS accounts, while applying the
Level 1 recommendation to the remaining accounts.
Audit:
Perform the following to determine if the 'root' user account has a hardware MFA setup:
1. Run the following command to determine if the 'root' account has MFA setup:
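A minimal sketch of such a check (assumes the AWS CLI is configured with credentials permitted to call iam:GetAccountSummary and iam:ListVirtualMFADevices):
# Confirm MFA is enabled for the 'root' user (AccountMFAEnabled should be 1)
aws iam get-account-summary | grep "AccountMFAEnabled"
# List assigned virtual MFA devices; if account MFA is enabled but no device with a serial number ending in root-account-mfa-device is returned, the 'root' MFA is a hardware device
aws iam list-virtual-mfa-devices --assignment-status Assigned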
Remediation:
Note: To manage MFA devices for the AWS 'root' user account, you must use your
'root' account credentials to sign in to AWS. You cannot manage MFA devices for the
'root' account using other credentials.
Perform the following to establish a hardware MFA for the 'root' user account:
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. Choose Dashboard, and under Security Status, expand Activate MFA on
your root account.
3. Choose Activate MFA.
4. In the wizard, choose A hardware MFA device and then choose Next Step.
5. In the Serial Number box, enter the serial number that is found on the back of
the MFA device.
6. In the Authentication Code 1 box, enter the six-digit number displayed by the
MFA device. You might need to press the button on the front of the device to
display the number.
7. Wait 30 seconds while the device refreshes the code, and then enter the next
six-digit number into the Authentication Code 2 box. You might need to press
the button on the front of the device again to display the second number.
8. Choose Next Step. The MFA device is now associated with the AWS account.
The next time you use your AWS account credentials to sign in, you must type a
code from the hardware MFA device.
References:
1. CCE-78911-5
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_physical.html#enable-hw-mfa-for-root
Additional Information:
IAM User account 'root' for us-gov cloud regions does not have console access. This
control is not applicable for us-gov cloud regions.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
With the creation of an AWS account, a 'root user' is created that cannot be disabled or
deleted. That user has unrestricted access to and control over all resources in the AWS
account. It is highly recommended that the use of this account be avoided for everyday
tasks.
Rationale:
The 'root user' has unrestricted access to and control over all account resources. Use of
it is inconsistent with the principles of least privilege and separation of duties, and can
lead to unnecessary harm due to error or account compromise.
Audit:
From Console:
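Recent 'root' user activity can also be reviewed from the command line via the credential report. A minimal sketch (assumes the AWS CLI is configured with credentials permitted to call iam:GenerateCredentialReport and iam:GetCredentialReport):
# Generate and download the credential report, then inspect the <root_account> row
aws iam generate-credential-report
aws iam get-credential-report --query 'Content' --output text | base64 --decode
# In the <root_account> row, check password_last_used, access_key_1_last_used_date, and access_key_2_last_used_date for signs of everyday use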
Remember, anyone who has 'root' user credentials for your AWS account has
unrestricted access to and control of all the resources in your account, including billing
information.
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_root-user.html
3. https://docs.aws.amazon.com/general/latest/gr/aws_tasks-that-require-root.html
Additional Information:
The 'root' user for us-gov cloud regions is not enabled by default. However, on request
to AWS support, they can enable the 'root' user and grant access only through access-
keys (CLI, API methods) for us-gov cloud region. If the 'root' user for us-gov cloud
regions is enabled, this recommendation is applicable.
Monitoring usage of the 'root' user can be accomplished by implementing
recommendation 3.3 Ensure a log metric filter and alarm exist for usage of the 'root'
user.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
Password policies are, in part, used to enforce password complexity requirements. IAM
password policies can be used to ensure passwords are at least a given length. It is
recommended that the password policy require a minimum password length of 14.
Rationale:
Setting a password complexity policy increases account resiliency against brute force
login attempts.
Impact:
Enforcing a minimum password length of 14 characters enhances security by making
passwords more resistant to brute force attacks. However, it may require users to
create longer and potentially more complex passwords, which could impact user
convenience.
Audit:
Perform the following to ensure the password policy is configured as prescribed:
From Console:
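The password policy can also be checked and set from the command line. A minimal sketch (assumes the AWS CLI is configured with iam:GetAccountPasswordPolicy and iam:UpdateAccountPasswordPolicy permissions):
# Verify MinimumPasswordLength is 14 or greater
aws iam get-account-password-policy
# Set the minimum password length to 14
aws iam update-account-password-policy --minimum-password-length 14
Note that update-account-password-policy replaces the entire policy, so any other required options should be included in the same command.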
References:
1. CCE-78907-3
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy
Additional Information:
Ensure the password policy also includes requirements for password complexity, such
as the inclusion of uppercase letters, lowercase letters, numbers, and special
characters:
aws iam update-account-password-policy --require-uppercase-characters --require-lowercase-characters --require-numbers --require-symbols
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
IAM password policies can prevent the reuse of a given password by the same user. It
is recommended that the password policy prevent the reuse of passwords.
Rationale:
Preventing password reuse increases account resiliency against brute force login
attempts.
Audit:
Perform the following to ensure the password policy is configured as prescribed:
From Console:
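This setting can also be checked and set from the command line. A minimal sketch (assumes the AWS CLI is configured with the relevant IAM permissions; 24 is the number of previous passwords commonly prescribed to remember):
# Verify PasswordReusePrevention is present in the policy output
aws iam get-account-password-policy
# Prevent reuse of the previous 24 passwords
aws iam update-account-password-policy --password-reuse-prevention 24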
References:
1. CCE-78908-1
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_account-policy.html
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#configure-strong-password-policy
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
Multi-Factor Authentication (MFA) adds an extra layer of authentication assurance
beyond traditional credentials. With MFA enabled, when a user signs in to the AWS
Console, they will be prompted for their user name and password as well as for an
authentication code from their physical or virtual MFA token. It is recommended that
MFA be enabled for all accounts that have a console password.
Rationale:
Enabling MFA provides increased security for console access as it requires the
authenticating principal to possess a device that displays a time-sensitive key and have
knowledge of a credential.
Impact:
AWS will soon end support for SMS multi-factor authentication (MFA). New customers
are not allowed to use this feature. We recommend that existing customers switch to an
alternative method of MFA.
Audit:
Perform the following to determine if a MFA device is enabled for all IAM users having a
console password:
From Command Line:
1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users
along with their password and MFA status:
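A minimal sketch of such a command, using the IAM credential report (assumes the AWS CLI with iam:GenerateCredentialReport and iam:GetCredentialReport permissions; fields 1, 4, and 8 of the report are user, password_enabled, and mfa_active):
aws iam generate-credential-report
aws iam get-credential-report --query 'Content' --output text | base64 --decode | cut -d, -f1,4,8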
2. The output of this command will produce a table similar to the following:
user,password_enabled,mfa_active
elise,false,false
brandon,true,true
rakesh,false,false
helene,false,false
paras,true,true
anitha,false,false
Remediation:
Perform the following to enable MFA:
From Console:
1. Sign in to the AWS Management Console and open the IAM console at
'https://console.aws.amazon.com/iam/'
2. In the left pane, select Users.
3. In the User Name list, choose the name of the intended MFA user.
4. Choose the Security Credentials tab, and then choose Manage MFA Device.
5. In the Manage MFA Device wizard, choose Virtual MFA device, and then
choose Continue.
IAM generates and displays configuration information for the virtual MFA device,
including a QR code graphic. The graphic is a representation of the 'secret configuration
key' that is available for manual entry on devices that do not support QR codes.
6. Open your virtual MFA application. (For a list of apps that you can use for hosting
virtual MFA devices, see Virtual MFA Applications at
https://aws.amazon.com/iam/details/mfa/#Virtual_MFA_Applications). If the
virtual MFA application supports multiple accounts (multiple virtual MFA devices),
choose the option to create a new account (a new virtual MFA device).
7. Determine whether the MFA app supports QR codes, and then do one of the
following:
When you are finished, the virtual MFA device starts generating one-time passwords.
8. In the Manage MFA Device wizard, in the MFA Code 1 box, type the one-time
password that currently appears in the virtual MFA device. Wait up to 30
seconds for the device to generate a new one-time password. Then type the
second one-time password into the MFA Code 2 box.
9. Click Assign MFA.
References:
1. https://tools.ietf.org/html/rfc6238
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa.html
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#enable-mfa-for-privileged-users
4. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_enable_virtual.html
5. CCE-78901-6
6. https://blogs.aws.amazon.com/security/post/Tx2SJJYE082KBUK/How-to-Delegate-Management-of-Multi-Factor-Authentication-to-AWS-IAM-Users
Additional Information:
Forced IAM User Self-Service Remediation
Amazon has published a pattern that requires users to set up MFA through self-service
before they gain access to their complete set of permissions. This pattern can be used
for new AWS accounts. It can also be applied to existing accounts; it is recommended
that users receive instructions and a grace period to complete MFA enrollment before
active enforcement on existing AWS accounts.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
AWS console defaults to no check boxes selected when creating a new IAM user.
When creating the IAM User credentials you have to determine what type of access
they require.
Programmatic access: The IAM user might need to make API calls, use the AWS CLI,
or use the Tools for Windows PowerShell. In that case, create an access key (access
key ID and a secret access key) for that user.
AWS Management Console access: If the user needs to access the AWS Management
Console, create a password for the user.
Rationale:
Requiring users to take additional steps to obtain programmatic access after their
profile has been created provides a stronger indication of intent: [a] that access keys
are necessary for their work, and [b] that once an access key is established on an
account, the key may be in use somewhere in the organization.
Note: Even if it is known the user will need access keys, require them to create the keys
themselves or put in a support ticket to have them created as a separate step from user
creation.
Audit:
Perform the following steps to determine if unused access keys were created upon user
creation:
From Console:
• Keys that were created at the same time as the user profile and do not have a
last used date should be deleted. Refer to the remediation below.
From Command Line:
1. Run the following command (OSX/Linux/UNIX) to generate a list of all IAM users
along with their access keys utilization:
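A minimal sketch of such a command, again using the IAM credential report (assumes the AWS CLI; fields 1, 4, 9, 11, 14, and 16 correspond to user, password_enabled, access_key_1_active, access_key_1_last_used_date, access_key_2_active, and access_key_2_last_used_date):
aws iam generate-credential-report
aws iam get-credential-report --query 'Content' --output text | base64 --decode | cut -d, -f1,4,9,11,14,16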
2. The output of this command will produce a table similar to the following:
user,password_enabled,access_key_1_active,access_key_1_last_used_date,access_
key_2_active,access_key_2_last_used_date
elise,false,true,2015-04-16T15:14:00+00:00,false,N/A
brandon,true,true,N/A,false,N/A
rakesh,false,false,N/A,false,N/A
helene,false,true,2015-11-18T17:47:00+00:00,false,N/A
paras,true,true,2016-08-28T12:04:00+00:00,true,2016-03-04T10:11:00+00:00
anitha,true,true,2016-06-08T11:43:00+00:00,true,N/A
Remediation:
Perform the following to delete access keys that do not pass the audit:
From Console:
• Click on the X (Delete) for keys that were created at the same time as the user
profile but have not been used.
7. As an IAM User
• Click on the X (Delete) for keys that were created at the same time as the user
profile but have not been used.
References:
1. https://docs.aws.amazon.com/cli/latest/reference/iam/delete-access-key.html
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_users_create.html
Additional Information:
Credential report does not appear to contain "Key Creation Date"
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
AWS IAM users can access AWS resources using different types of credentials, such
as passwords or access keys. It is recommended that all credentials that have been
unused for 45 days or more be deactivated or removed.
Rationale:
Disabling or removing unnecessary credentials will reduce the window of opportunity for
credentials associated with a compromised or abandoned account to be used.
Audit:
Perform the following to determine if unused credentials exist:
From Console:
9. Check and ensure that Access key age is less than 45 days and that Access
key last used does not say None
If the user hasn't signed into the Console in the last 45 days or Access keys are over 45
days old refer to the remediation.
From Command Line:
Download Credential Report:
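A minimal sketch of downloading the credential report (assumes the AWS CLI is configured with iam:GenerateCredentialReport and iam:GetCredentialReport permissions):
aws iam generate-credential-report
aws iam get-credential-report --query 'Content' --output text | base64 --decode
# For each user, verify that password_last_used, access_key_1_last_used_date, and access_key_2_last_used_date are within the last 45 days for any enabled credential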
Remediation:
From Console:
Perform the following to manage Unused Password (IAM user console access)
7. Select any access keys that are over 45 days old and that have not been used,
and then deactivate or delete them.
References:
1. CCE-78900-8
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#remove-credentials
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html
4. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_passwords_admin-change-user.html
5. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
Additional Information:
<root_account> is excluded in the audit since the root account should not be used for
day-to-day business and would likely be unused for more than 45 days.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
Access keys are long-term credentials for an IAM user or the AWS account 'root' user.
You can use access keys to sign programmatic requests to the AWS CLI or AWS API
(directly or using the AWS SDK).
Rationale:
One of the best ways to protect your account is to not allow users to have multiple
access keys.
Audit:
From Console:
• Repeat steps 3-5 for each IAM user in your AWS account.
From Command Line:
1. Run the list-users command to list all IAM users within your account:
2. Run list-access-keys command using the IAM user name list to return the
current status of each access key associated with the selected IAM user:
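A minimal sketch of these two commands (assumes the AWS CLI; <user_name> is a placeholder taken from the list-users output):
# List all IAM user names in the account
aws iam list-users --query "Users[*].UserName" --output text
# List the access keys and their Status for one user
aws iam list-access-keys --user-name <user_name>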
3. Check the Status property value for each key returned to determine each key's
current state. If the Status property value for more than one IAM access key is
set to Active, the user access configuration does not adhere to this
recommendation; refer to the remediation below.
• Repeat steps 2 and 3 for each IAM user in your AWS account.
Remediation:
From Command Line:
1. Using the IAM user and access key information provided in the Audit CLI,
choose one access key that is less than 90 days old. This should be the only
active key used by this IAM user to access AWS resources programmatically.
Test your application(s) to make sure that the chosen access key is working.
2. Run the update-access-key command below using the IAM user name and the
non-operational access key IDs to deactivate the unnecessary key(s). Refer to
the Audit section to identify the unnecessary access key ID for the selected IAM
user
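A minimal sketch of the deactivation command (assumes the AWS CLI; the user name and access key ID are placeholders identified in the Audit section):
# Deactivate the unnecessary access key
aws iam update-access-key --user-name <user_name> --access-key-id <access_key_id> --status Inactive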
3. To confirm that the selected access key pair has been successfully deactivated
run the list-access-keys audit command again for that IAM User:
• The command output should expose the metadata for each access key
associated with the IAM user. If the non-operational key pair(s) Status is set to
Inactive, the key has been successfully deactivated and the IAM user access
configuration adheres now to this recommendation.
4. Repeat steps 1-3 for each IAM user in your AWS account.
References:
1. https://docs.aws.amazon.com/general/latest/gr/aws-access-keys-best-practices.html
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
Access keys consist of an access key ID and secret access key, which are used to sign
programmatic requests that you make to AWS. AWS users need their own access keys
to make programmatic calls to AWS from the AWS Command Line Interface (AWS CLI),
Tools for Windows PowerShell, the AWS SDKs, or direct HTTP calls using the APIs for
individual AWS services. It is recommended that all access keys be rotated regularly.
Rationale:
Rotating access keys will reduce the window of opportunity for an access key that is
associated with a compromised or terminated account to be used.
Access keys should be rotated to ensure that data cannot be accessed with an old key
which might have been lost, cracked, or stolen.
Audit:
Perform the following to determine if access keys are rotated as prescribed:
From Console:
Remediation:
From Command Line:
1. While the first access key is still active, create a second access key, which is
active by default (see the command sketch after step 6).
2. Update all applications and tools to use the new access key.
3. Determine whether the first access key is still in use (see the command sketch
after step 6).
4. One approach is to wait several days and then check the old access key for any
use before proceeding.
Even if step 3 indicates no use of the old key, it is recommended that you do not
immediately delete the first access key. Instead, change the state of the first access key
to Inactive using this command:
aws iam update-access-key --user-name <user_name> --access-key-id <access_key_id> --status Inactive
5. Use only the new access key to confirm that your applications are working. Any
applications and tools that still use the original access key will stop working at
this point because they no longer have access to AWS resources. If you find
such an application or tool, you can switch its state back to Active to reenable the
first access key. Then return to step 2 and update this application to use the new
key.
6. After you wait some period of time to ensure that all applications and tools have
been updated, you can delete the first access key with this command:
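A minimal sketch of the commands referenced in steps 1, 3, and 6 (assumes the AWS CLI; user name and key ID values are placeholders):
# Step 1: create a second (new) access key for the user
aws iam create-access-key --user-name <user_name>
# Step 3: check when and by which service the old access key was last used
aws iam get-access-key-last-used --access-key-id <old_access_key_id>
# Step 6: delete the old access key once nothing depends on it
aws iam delete-access-key --user-name <user_name> --access-key-id <old_access_key_id>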
References:
1. CCE-78902-4
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#rotate-credentials
3. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_finding-unused.html
4. https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html
5. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
IAM users are granted access to services, functions, and data through IAM policies.
There are four ways to define policies for a user: 1) Edit the user policy directly, also
known as an inline or user policy; 2) attach a policy directly to a user; 3) add the user to
an IAM group that has an attached policy; 4) add the user to an IAM group that has an
inline policy.
Only the third implementation is recommended.
Rationale:
Assigning IAM policies solely through groups unifies permissions management into a
single, flexible layer that is consistent with organizational functional roles. By unifying
permissions management, the likelihood of excessive permissions is reduced.
Audit:
Perform the following to determine if an inline policy is set or a policy is directly attached
to users:
2. For each user returned, run the following command to determine if any policies
are attached to them:
3. If any policies are returned, the user has an inline policy or direct policy
attachment.
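A minimal sketch of the audit commands (assumes the AWS CLI; <user_name> is a placeholder iterated over the list-users output):
# List all IAM user names
aws iam list-users --query "Users[*].UserName" --output text
# Managed policies attached directly to the user (should return an empty list)
aws iam list-attached-user-policies --user-name <user_name>
# Inline policies embedded in the user (should return an empty list)
aws iam list-user-policies --user-name <user_name>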
Remediation:
Perform the following to create an IAM group and assign a policy to it:
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. In the navigation pane, click Groups.
3. Select the group to add a user to.
4. Click Add Users To Group.
5. Select the users to be added to the group.
6. Click Add Users.
Perform the following to remove a direct association between a user and policy:
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. In the left navigation pane, click on Users.
3. For each user:
o Select the user
o Click on the Permissions tab
o Expand Permissions policies
o Click X for each policy; then click Detach or Remove (depending on policy
type)
References:
1. http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
2. http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html
3. CCE-78912-3
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
IAM policies are the means by which privileges are granted to users, groups, or roles. It
is recommended and considered standard security advice to grant least privilege—that
is, granting only the permissions required to perform a task. Determine what users need
to do, and then craft policies for them that allow the users to perform only those tasks,
instead of granting full administrative privileges.
Rationale:
It's more secure to start with a minimum set of permissions and grant additional
permissions as necessary, rather than starting with permissions that are too lenient and
then attempting to tighten them later.
Providing full administrative privileges instead of restricting access to the minimum set
of permissions required for the user exposes resources to potentially unwanted actions.
IAM policies that contain a statement with "Effect": "Allow" and "Action": "*"
over "Resource": "*" should be removed.
Audit:
Perform the following to determine existing policies:
From Command Line:
2. For each policy returned, run the following command to determine if any policy is
allowing full administrative privileges on the account:
3. In the output, the policy should not contain any Statement block with "Effect":
"Allow" and Action set to "*" and Resource set to "*".
Remediation:
From Console:
1. Sign in to the AWS Management Console and open the IAM console at
https://console.aws.amazon.com/iam/.
2. In the navigation pane, click Policies and then search for the policy name found
in the audit step.
3. Select the policy that needs to be deleted.
4. In the policy action menu, select Detach.
5. Select all Users, Groups, Roles that have this policy attached.
6. Click Detach Policy.
7. Select the newly detached policy and select Delete.
From Command Line:
1. List all IAM users, groups, and roles that the specified managed policy is
attached to.
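A minimal sketch of the corresponding commands (assumes the AWS CLI; the ARN and entity names are placeholders):
# Show every user, group, and role the policy is attached to
aws iam list-entities-for-policy --policy-arn <policy_arn>
# Detach the policy from each entity returned, for example:
aws iam detach-user-policy --user-name <user_name> --policy-arn <policy_arn>
aws iam detach-group-policy --group-name <group_name> --policy-arn <policy_arn>
aws iam detach-role-policy --role-name <role_name> --policy-arn <policy_arn>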
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html
3. CCE-78912-3
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
AWS provides a support center that can be used for incident notification and response,
as well as technical support and customer services. Create an IAM Role, with the
appropriate policy assigned, to allow authorized users to manage incidents with AWS
Support.
Rationale:
By implementing least privilege for access control, an IAM Role will require an
appropriate IAM Policy to allow Support Center Access in order to manage Incidents
with AWS Support.
Impact:
All AWS Support plans include an unlimited number of account and billing support
cases, with no long-term contracts. Support billing calculations are performed on a per-
account basis for all plans. Enterprise Support plan customers have the option to
include multiple enabled accounts in an aggregated monthly billing calculation. Monthly
charges for the Business and Enterprise support plans are based on each month's AWS
usage charges, subject to a monthly minimum, billed in advance.
When assigning rights, keep in mind that other policies may grant access to Support as
well. This may include AdministratorAccess and other policies including customer
managed policies. Utilizing the AWS managed 'AWSSupportAccess' role is one simple
way of ensuring that this permission is properly granted.
To better support the principle of separation of duties, it would be best to only attach this
role where necessary.
Audit:
From Command Line:
1. List IAM policies, filter for the 'AWSSupportAccess' managed policy, and note the
"Arn" element value:
3. In the output, ensure that PolicyRoles does not return empty (for example,
"PolicyRoles": [ ] indicates that no role is attached).
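A minimal sketch of the audit commands (assumes the AWS CLI; AWSSupportAccess is an AWS-managed policy with a fixed ARN):
# Confirm the managed policy exists and note its ARN
aws iam list-policies --query "Policies[?PolicyName == 'AWSSupportAccess']"
# Verify at least one role is attached to the policy (PolicyRoles should not be empty)
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSSupportAccess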
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "<iam_user>"
},
"Action": "sts:AssumeRole"
}
]
}
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_managed-vs-inline.html
2. https://aws.amazon.com/premiumsupport/pricing/
Additional Information:
The AWSSupportAccess policy is a global AWS resource. It has the same ARN,
arn:aws:iam::aws:policy/AWSSupportAccess, in every account.
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 2
Description:
AWS access from within AWS instances can be done by either encoding AWS keys into
AWS API calls or by assigning the instance to a role which has an appropriate
permissions policy for the required access. "AWS Access" means accessing the APIs of
AWS in order to access AWS resources or manage AWS account resources.
Rationale:
AWS IAM roles reduce the risks associated with sharing and rotating credentials that
can be used outside of AWS itself. Compromised credentials can be used from outside
the AWS account to which they provide access. In contrast, to leverage role
permissions, an attacker would need to gain and maintain access to a specific instance
to use the privileges associated with it.
Additionally, if credentials are encoded into compiled applications or other hard-to-
change mechanisms, they are even less likely to be properly rotated due to the risks of
service disruption. As time passes, credentials that cannot be rotated are more likely to
be known by an increasing number of individuals who no longer work for the
organization that owns the credentials.
Audit:
First, check if the instance has any API secrets stored using Secret Scanning. Currently,
AWS does not have a solution for this. You can use open-source tools like TruffleHog to
scan for secrets in the EC2 instance. If a secret is found, then assign the role to the
instance.
From Console:
1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at
https://console.aws.amazon.com/ec2/.
2. In the left navigation panel, choose Instances.
3. Select the EC2 instance you want to examine.
4. Select Actions.
5. Select View details.
6. Select Security in the lower panel.
• If the value for Instance profile arn is an instance profile ARN, then an instance
profile (that contains an IAM role) is attached.
• If the value for IAM Role is blank, no role is attached.
• If the value for IAM Role contains a role, a role is attached.
From Command Line:
1. Run the describe-instances command to list all EC2 instance IDs in the
selected AWS region:
2. Run the describe-instances command again for each EC2 instance using the
IamInstanceProfile identifier in the query filter to check if an IAM role is
attached:
3. If an IAM role is attached, the command output will show the IAM instance profile
ARN and ID.
4. Repeat steps 2 and 3 for each EC2 instance in your AWS account.
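A minimal sketch of these audit commands (assumes the AWS CLI; region and instance ID values are placeholders):
# List all EC2 instance IDs in the region
aws ec2 describe-instances --region <region> --query "Reservations[*].Instances[*].InstanceId" --output text
# Check whether an instance profile (and therefore an IAM role) is attached to an instance
aws ec2 describe-instances --instance-ids <instance_id> --query "Reservations[*].Instances[*].IamInstanceProfile" --output json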
Remediation:
From Console:
1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at
https://console.aws.amazon.com/ec2/.
2. In the left navigation panel, choose Instances.
3. Select the EC2 instance you want to modify.
4. Click Actions.
5. Click Security.
6. Click Modify IAM role.
7. Click Create new IAM role if a new IAM role is required.
8. Select the IAM role you want to attach to your instance in the IAM role
dropdown.
9. Click Update IAM role.
10. Repeat steps 3 to 9 for each EC2 instance in your AWS account that requires an
IAM role to be attached.
From Command Line:
1. Run the describe-instances command to list all EC2 instance IDs in the
selected AWS region:
3. Run the describe-instances command again for the recently modified EC2
instance. The command output should return the instance profile ARN and ID:
4. Repeat steps 2 and 3 for each EC2 instance in your AWS account that requires
an IAM role to be attached.
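A minimal sketch of the remediation from the command line (assumes the AWS CLI; the instance profile must already exist and contain the desired IAM role):
# Attach an instance profile (containing the IAM role) to the instance
aws ec2 associate-iam-instance-profile --instance-id <instance_id> --iam-instance-profile Name=<instance_profile_name>
# Verify the association
aws ec2 describe-instances --instance-ids <instance_id> --query "Reservations[*].Instances[*].IamInstanceProfile"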
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
2. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
To enable HTTPS connections to your website or application in AWS, you need an
SSL/TLS server certificate. You can use AWS Certificate Manager (ACM) or IAM to
store and deploy server certificates. Use IAM as a certificate manager only when you
must support HTTPS connections in a region that is not supported by ACM. IAM
securely encrypts your private keys and stores the encrypted version in IAM SSL
certificate storage. IAM supports deploying server certificates in all regions, but you
must obtain your certificate from an external provider for use with AWS. You cannot
upload an ACM certificate to IAM. Additionally, you cannot manage your certificates
from the IAM Console.
Rationale:
Removing expired SSL/TLS certificates eliminates the risk that an invalid certificate will
be deployed accidentally to a resource such as AWS Elastic Load Balancer (ELB),
which can damage the credibility of the application/website behind the ELB. As a best
practice, it is recommended to delete expired certificates.
Impact:
Deleting the certificate could have implications for your application if you are using an
expired server certificate with Elastic Load Balancing, CloudFront, etc. You must make
configurations in the respective services to ensure there is no interruption in application
functionality.
Audit:
From Console:
Getting the certificate expiration information via the AWS Management Console is not
currently supported. To request information about the SSL/TLS certificates stored in
IAM through the AWS API, use the Command Line Interface (CLI).
From Command Line:
Run the list-server-certificates command to list all the IAM-stored server
certificates:
aws iam list-server-certificates
The command output should return an array that contains all the SSL/TLS certificates
currently stored in IAM and their metadata (name, ID, expiration date, etc):
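Expired certificates identified in that output can also be removed from the command line. A minimal sketch (assumes the AWS CLI; the certificate name is a placeholder taken from the list-server-certificates output):
# Show the name and expiration date of each IAM-stored server certificate
aws iam list-server-certificates --query "ServerCertificateMetadataList[*].[ServerCertificateName,Expiration]" --output text
# Delete a certificate that is past its expiration date
aws iam delete-server-certificate --server-certificate-name <certificate_name>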
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_server-certs.html
2. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/iam/delete-server-certificate.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
Enable the IAM Access Analyzer for IAM policies regarding all resources in each active
AWS region.
IAM Access Analyzer is a technology introduced at AWS re:Invent 2019. After the
Analyzer is enabled in IAM, scan results are displayed on the console showing the
accessible resources. Scans show resources that other accounts and federated users
can access, such as KMS keys and IAM roles. The results allow you to determine
whether an unintended user is permitted, making it easier for administrators to monitor
least privilege access. Access Analyzer analyzes only the policies that are applied to
resources in the same AWS Region.
Rationale:
AWS IAM Access Analyzer helps you identify the resources in your organization and
accounts, such as Amazon S3 buckets or IAM roles, that are shared with external
entities. This allows you to identify unintended access to your resources and data.
Access Analyzer identifies resources that are shared with external principals by using
logic-based reasoning to analyze the resource-based policies in your AWS
environment. IAM Access Analyzer continuously monitors all policies for S3 buckets,
IAM roles, KMS (Key Management Service) keys, AWS Lambda functions, and Amazon
SQS (Simple Queue Service) queues.
Audit:
From Console:
If an Access Analyzer is not listed for each region or the status is not set to active refer
to the remediation procedure below.
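The analyzer status can also be verified, and an analyzer created, from the command line. A minimal sketch (assumes AWS CLI v2 with Access Analyzer permissions; repeat per active region, and the analyzer name is a placeholder):
# List analyzers in the current region and confirm at least one has status ACTIVE
aws accessanalyzer list-analyzers --query "analyzers[*].[name,status]" --output text
# Create an account-level analyzer if none exists
aws accessanalyzer create-analyzer --analyzer-name <analyzer_name> --type ACCOUNT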
Remediation:
From Console:
Perform the following to enable IAM Access Analyzer for IAM policies:
References:
1. https://docs.aws.amazon.com/IAM/latest/UserGuide/what-is-access-analyzer.html
2. https://docs.aws.amazon.com/IAM/latest/UserGuide/access-analyzer-getting-started.html
3. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/accessanalyzer/get-analyzer.html
4. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/accessanalyzer/create-analyzer.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 2
Description:
In multi-account environments, IAM user centralization facilitates greater user control.
User access beyond the initial account is then provided via role assumption.
Centralization of users can be accomplished through federation with an external identity
provider or through the use of AWS Organizations.
Rationale:
Centralizing IAM user management to a single identity store reduces complexity and
thus the likelihood of access management errors.
Audit:
For multi-account AWS environments with an external identity provider:
1. Determine the master account for identity federation or IAM user management
2. Login to that account through the AWS Management Console
3. Click Services
4. Click IAM
5. Click Identity providers
6. Verify the configuration
For multi-account AWS environments with an external identity provider, as well as for
those implementing AWS Organizations without an external identity provider:
1. Determine all accounts that should not have local users present
2. Log into the AWS Management Console
3. Switch role into each identified account
4. Click Services
5. Click IAM
6. Click Users
7. Confirm that no IAM users representing individuals are present
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 1
Description:
AWS CloudShell is a convenient way of running CLI commands against AWS services;
a managed IAM policy ('AWSCloudShellFullAccess') provides full access to CloudShell,
which allows file upload and download capability between a user's local system and the
CloudShell environment. Within the CloudShell environment, a user has sudo
permissions and can access the internet. Therefore, it is feasible to install file transfer
software, for example, and move data from CloudShell to external internet servers.
Rationale:
Access to this policy should be restricted, as it presents a potential channel for data
exfiltration by malicious cloud admins who are given full permissions to the service.
AWS documentation describes how to create a more restrictive IAM policy that denies
file transfer permissions.
Audit:
From Command Line:
1. List IAM policies, filter for the 'AWSCloudShellFullAccess' managed policy, and
note the "Arn" element value:
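A minimal sketch of the audit commands (assumes the AWS CLI; AWSCloudShellFullAccess is an AWS-managed policy with a fixed ARN):
# Confirm the managed policy exists and note its ARN
aws iam list-policies --query "Policies[?PolicyName == 'AWSCloudShellFullAccess']"
# Verify that no users, groups, or roles have the policy attached (all lists should be empty)
aws iam list-entities-for-policy --policy-arn arn:aws:iam::aws:policy/AWSCloudShellFullAccess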
References:
1. https://docs.aws.amazon.com/cloudshell/latest/userguide/sec-auth-with-identities.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
2 Storage
This section contains recommendations for configuring AWS Storage.
This section contains recommendations for configuring AWS Simple Storage Service
(S3) Buckets
• Level 2
Description:
At the Amazon S3 bucket level, you can configure permissions through a bucket policy,
making the objects accessible only through HTTPS.
Rationale:
By default, Amazon S3 allows both HTTP and HTTPS requests. To ensure that access
to Amazon S3 objects is only permitted through HTTPS, you must explicitly deny HTTP
requests. Bucket policies that allow HTTPS requests without explicitly denying HTTP
requests will not comply with this recommendation.
Audit:
To allow access over HTTPS, you can use a bucket policy with the effect allow and a
condition that checks for the key "aws:SecureTransport": "true". This means that
HTTPS requests are allowed, but it does not deny HTTP requests. To explicitly deny
HTTP access, ensure that there is also a bucket policy with the effect deny that contains
the key "aws:SecureTransport": "false". You may also require a minimum TLS version
by setting a policy that denies requests using the NumericLessThan condition operator
on the key "s3:TlsVersion" (for example, denying any s3:TlsVersion lower than 1.2).
From Console:
1. Login to the AWS Management Console and open the Amazon S3 console using
https://console.aws.amazon.com/s3/.
2. Select the check box next to the Bucket.
3. Click on 'Permissions', then click on Bucket Policy.
4. Ensure that a policy is listed that matches either:
{
"Sid": <optional>,
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<bucket_name>/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
or a policy that denies TLS versions below the required minimum, as shown in the
remediation section below.
From Command Line:
1. List all buckets in the account:
aws s3 ls
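A minimal sketch of checking each bucket's policy for the required deny condition (assumes the AWS CLI; <bucket_name> is a placeholder iterated over the bucket list):
# Retrieve the bucket policy and look for the aws:SecureTransport / s3:TlsVersion conditions
aws s3api get-bucket-policy --bucket <bucket_name> --query Policy --output text | grep -E "SecureTransport|TlsVersion"
Buckets with no policy, or whose policy lacks an explicit Deny on insecure transport, do not meet this recommendation.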
Remediation:
From Console:
{
"Sid": <optional>,
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<bucket_name>/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
or
{
"Sid": "<optional>",
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": [
"arn:aws:s3:::<bucket_name>",
"arn:aws:s3:::<bucket_name>/*"
],
"Condition": {
"NumericLessThan": {
"s3:TlsVersion": "1.2"
}
}
}
6. Save
7. Repeat for all the buckets in your AWS account that contain sensitive data.
From Console, using the AWS Policy Generator:
• Effect = Deny
5. Generate Policy.
6. Copy the text and add it to the Bucket Policy.
{
"Sid": <optional>,
"Effect": "Deny",
"Principal": "*",
"Action": "s3:*",
"Resource": "arn:aws:s3:::<bucket_name>/*",
"Condition": {
"Bool": {
"aws:SecureTransport": "false"
}
}
}
or the TLS-version-based policy shown above.
Default Value:
Both HTTP and HTTPS requests are allowed.
References:
1. https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-policy-for-config-rule/
2. https://aws.amazon.com/blogs/security/how-to-use-bucket-policies-and-apply-defense-in-depth-to-help-secure-your-amazon-s3-data/
3. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/s3api/get-bucket-policy.html
CIS Controls:
Controls Version | Control | IG 1 | IG 2 | IG 3
• Level 2
Description:
Once MFA Delete is enabled on your sensitive and classified S3 bucket, it requires the
user to provide two forms of authentication.
Rationale:
Adding MFA delete to an S3 bucket requires additional authentication when you change
the version state of your bucket or delete an object version, adding another layer of
security in the event your security credentials are compromised or unauthorized access
is granted.
Impact:
Enabling MFA delete on an S3 bucket could require additional administrator oversight.
Enabling MFA delete may impact other services that automate the creation and/or
deletion of S3 buckets.
Audit:
Perform the steps below to confirm that MFA delete is configured on an S3 bucket:
From Console:
• You cannot enable MFA Delete using the AWS Management Console; you must
use the AWS CLI or API.
• You must use your 'root' account to enable MFA Delete on S3 buckets.
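As an illustration of the CLI-based approach these notes describe, commands of the following form can be used (the bucket name, 'root' MFA device ARN, and MFA code are placeholders):
# Audit: the output should include "MFADelete": "Enabled"
aws s3api get-bucket-versioning --bucket <bucket_name>
# Remediation: enable versioning with MFA Delete using 'root' credentials
aws s3api put-bucket-versioning --bucket <bucket_name> --versioning-configuration Status=Enabled,MFADelete=Enabled --mfa "<mfa_device_arn> <mfa_code>"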
References:
1. https://docs.aws.amazon.com/AmazonS3/latest/dev/Versioning.html#MultiFactor
AuthenticationDelete
2. https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMFADelete.html
3. https://aws.amazon.com/blogs/security/securing-access-to-aws-using-mfa-part-3/
4. https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_mfa_lost-or-
broken.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 82
• Level 2
Description:
Amazon S3 buckets can contain sensitive data that, for security purposes, should be
discovered, monitored, classified, and protected. Macie, along with other third-party
tools, can automatically provide an inventory of Amazon S3 buckets.
Rationale:
Using a cloud service or third-party software to continuously monitor and automate the
process of data discovery and classification for S3 buckets through machine learning
and pattern matching is a strong defense in protecting that information.
Amazon Macie is a fully managed data security and privacy service that uses machine
learning and pattern matching to discover and protect your sensitive data in AWS.
Impact:
There is a cost associated with using Amazon Macie, and there is typically a cost
associated with third-party tools that perform similar processes and provide protection.
Audit:
Perform the following steps to determine if Macie is running:
From Console:
When you log into the Macie console, if you are not taken to the summary page and do
not have a job set up and running, then refer to the remediation procedure below.
If you are using a third-party tool to manage and protect your S3 data, you meet this
recommendation.
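A quick status check of the following form can also be used, assuming Amazon Macie is administered through the macie2 API in the target region:
# The output should be ENABLED if Macie is running
aws macie2 get-macie-session --query status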
Remediation:
Perform the steps below to enable and configure Amazon Macie:
From Console:
1. In the left pane, click S3 buckets. Macie displays a list of all the S3 buckets for
your account.
2. Check the box for each bucket that you want Macie to analyze as part of the job.
3. Click Create job.
4. Click Quick create.
5. For the Name and Description step, enter a name and, optionally, a description
of the job.
6. Click Next.
7. For the Review and create step, click Submit.
If you are using a third-party tool to manage and protect your S3 data, follow the vendor
documentation for implementing and configuring that tool.
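Where enabling Macie from the command line is preferred, a command of the following form can be used in each region to be monitored:
aws macie2 enable-macie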
References:
1. https://aws.amazon.com/macie/getting-started/
2. https://docs.aws.amazon.com/workspaces/latest/adminguide/data-protection.html
3. https://docs.aws.amazon.com/macie/latest/user/data-classification.html
Page 84
Controls
Control IG 1 IG 2 IG 3
Version
Page 85
• Level 1
Description:
Amazon S3 provides Block public access (bucket settings) and Block public
access (account settings) to help you manage public access to Amazon S3
resources. By default, S3 buckets and objects are created with public access disabled.
However, an IAM principal with sufficient S3 permissions can enable public access at
the bucket and/or object level. While enabled, Block public access (bucket
settings) prevents an individual bucket and its contained objects from becoming
publicly accessible. Similarly, Block public access (account settings) prevents
all buckets and their contained objects from becoming publicly accessible across the
entire account.
Rationale:
Amazon S3 Block public access (bucket settings) prevents the accidental or
malicious public exposure of data contained within the respective bucket(s).
Amazon S3 Block public access (account settings) prevents the accidental or
malicious public exposure of data contained within all buckets of the respective AWS
account.
Whether to block public access to all or some buckets is an organizational decision that
should be based on data sensitivity, least privilege, and use case.
Impact:
When you apply Block Public Access settings to an account, the settings apply to all
AWS regions globally. The settings may not take effect in all regions immediately or
simultaneously, but they will eventually propagate to all regions.
Audit:
If utilizing Block Public Access (bucket settings)
From Console:
1. Login to the AWS Management Console and open the Amazon S3 console using
https://console.aws.amazon.com/s3/.
2. Select the check box next to a bucket.
3. Click on 'Edit public access settings'.
4. Ensure that the block public access settings are configured appropriately for this
bucket.
5. Repeat for all the buckets in your AWS account.
From Command Line:
1. List all buckets with:
aws s3 ls
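2. For each bucket, a check of the following form can be used to review the bucket-level Block Public Access configuration (the bucket name is a placeholder):
aws s3api get-public-access-block --bucket <bucket_name>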
If utilizing Block Public Access (account settings)
From Console:
1. Login to the AWS Management Console and open the Amazon S3 console using
https://console.aws.amazon.com/s3/.
2. Choose Block public access (account settings).
3. Ensure that the block public access settings are configured appropriately for your
AWS account.
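A command-line check of the following form can be used for the account-level settings (the account ID is a placeholder):
aws s3control get-public-access-block --account-id <account_id>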
Remediation:
If utilizing Block Public Access (bucket settings)
From Console:
1. Login to the AWS Management Console and open the Amazon S3 console using
https://console.aws.amazon.com/s3/.
2. Select the check box next to a bucket.
3. Click 'Edit public access settings'.
4. Click 'Block all public access'
5. Repeat for all the buckets in your AWS account that contain sensitive data.
From Command Line:
1. List all buckets with:
aws s3 ls
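2. For each bucket containing sensitive data, a command of the following form can be used to block all public access (the bucket name is a placeholder):
aws s3api put-public-access-block --bucket <bucket_name> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true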
If utilizing Block Public Access (account settings)
From Console:
1. Login to the AWS Management Console and open the Amazon S3 console using
https://console.aws.amazon.com/s3/.
2. Click Block Public Access (account settings).
3. Click Edit to change the block public access settings for all the buckets in your
AWS account.
4. Update the settings and click Save. For details about each setting, pause on the
i icons.
5. When you're asked for confirmation, enter confirm. Then click Confirm to save
your changes.
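From the command line, the account-level settings can be applied with a command of the following form (the account ID is a placeholder):
aws s3control put-public-access-block --account-id <account_id> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true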
References:
1. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/block-public-access-
account.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 89
Page 90
• Level 1
Description:
Amazon RDS encrypted DB instances use the industry-standard AES-256 encryption
algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances.
After your data is encrypted, Amazon RDS handles the authentication of access and the
decryption of your data transparently, with minimal impact on performance.
Rationale:
Databases are likely to hold sensitive and critical data; therefore, it is highly
recommended to implement encryption to protect your data from unauthorized access
or disclosure. With RDS encryption enabled, the data stored on the instance's
underlying storage, the automated backups, read replicas, and snapshots are all
encrypted.
Audit:
From Console:
1. Login to the AWS Management Console and open the RDS dashboard at
https://console.aws.amazon.com/rds/.
2. In the navigation pane, under RDS dashboard, click Databases.
3. Select the RDS instance that you want to examine.
4. Click Instance Name to see details, then select the Configuration tab.
5. Under Configuration Details, in the Storage pane, search for the Encryption
Enabled status.
6. If the current status is set to Disabled, encryption is not enabled for the selected
RDS database instance.
7. Repeat steps 2 to 6 to verify the encryption status of other RDS instances in the
same region.
8. Change the region from the top of the navigation bar, and repeat the audit steps
for other regions.
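From the command line, a check of the following form can be used per region (the region is a placeholder); StorageEncrypted should be true for every instance:
aws rds describe-db-instances --region <region> --query 'DBInstances[*].[DBInstanceIdentifier,StorageEncrypted]'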
Remediation:
From Console:
1. Login to the AWS Management Console and open the RDS dashboard at
https://console.aws.amazon.com/rds/.
2. In the left navigation panel, click on Databases.
3. Select the Database instance that needs to be encrypted.
4. Click the Actions button placed at the top right and select Take Snapshot.
5. On the Take Snapshot page, enter the name of the database for which you want
to take a snapshot in the Snapshot Name field and click on Take Snapshot.
6. Select the newly created snapshot, click the Action button placed at the top
right, and select Copy snapshot from the Action menu.
7. On the Make Copy of DB Snapshot page, perform the following:
• In the New DB Snapshot Identifier field, enter a name for the new snapshot.
• Check Copy Tags. The new snapshot must have the same tags as the source
snapshot.
• Select Yes from the Enable Encryption dropdown list to enable encryption.
You can choose to use the AWS default encryption key or a custom key from the
Master Key dropdown list.
From Command Line:
3. Now run the list-aliases command to list the KMS key aliases available in a
specified region. The command output should return each key alias
currently available. For our RDS encryption activation process, locate the ID
of the AWS default KMS key:
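For example, a command of the following form can be used; note the TargetKeyId behind the alias/aws/rds entry (the region is a placeholder):
aws kms list-aliases --region <region>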
4. Run the copy-db-snapshot command using the default KMS key ID for the RDS
instances returned earlier to create an encrypted copy of the database instance
snapshot. The command output will return the encrypted instance snapshot
configuration:
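For example, a command of the following form can be used (the snapshot identifiers, region, and key ID are placeholders):
aws rds copy-db-snapshot --region <region> --source-db-snapshot-identifier <snapshot_name> --target-db-snapshot-identifier <snapshot_name>-encrypted --copy-tags --kms-key-id <kms_key_id>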
References:
1. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryptio
n.html
2. https://aws.amazon.com/blogs/database/selecting-the-right-encryption-options-
for-amazon-rds-and-amazon-aurora-database-
engines/#:~:text=With%20RDS%2Dencrypted%20resources%2C%20data,transp
arent%20to%20your%20database%20engine.
3. https://aws.amazon.com/rds/features/security/
Page 94
Controls
Control IG 1 IG 2 IG 3
Version
Page 95
• Level 1
Description:
Ensure that RDS database instances have the Auto Minor Version Upgrade flag
enabled to automatically receive minor engine upgrades during the specified
maintenance window. This way, RDS instances can obtain new features, bug fixes, and
security patches for their database engines.
Rationale:
AWS RDS will occasionally deprecate minor engine versions and provide new ones for
upgrades. When only the trailing portion of the engine version number changes (for
example, from 8.0.32 to 8.0.33), the new version is considered a minor version. With the
Auto Minor Version Upgrade feature enabled, version upgrades will occur automatically
during the specified maintenance window, allowing your RDS instances to receive new
features, bug fixes, and security patches for their database engines.
Audit:
From Console:
1. Log in to the AWS management console and navigate to the RDS dashboard at
https://console.aws.amazon.com/rds/.
2. In the left navigation panel, click Databases.
3. Select the RDS instance that you want to examine.
4. Click on the Maintenance and backups panel.
5. Under the Maintenance section, search for the Auto Minor Version Upgrade
status.
• If the current status is Disabled, it means that the feature is not enabled, and the
minor engine upgrades released will not be applied to the selected RDS
instance.
From Command Line:
4. The command output should return the current status of the feature. If the current
status is set to true, the feature is enabled and the minor engine upgrades will
be applied to the selected RDS instance.
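The check these steps describe can take the following form (the region and instance identifier are placeholders):
aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade'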
Remediation:
From Console:
1. Log in to the AWS management console and navigate to the RDS dashboard at
https://console.aws.amazon.com/rds/.
2. In the left navigation panel, click Databases.
3. Select the RDS instance that you want to update.
4. Click on the Modify button located at the top right side.
5. On the Modify DB Instance: <instance identifier> page, In the
Maintenance section, select Auto minor version upgrade and click the Yes
radio button.
6. At the bottom of the page, click Continue, and check Apply Immediately to
apply the changes immediately, or select Apply during the next scheduled
maintenance window to avoid any downtime.
7. Review the changes and click Modify DB Instance. The instance status should
change from available to modifying and back to available. Once the feature is
enabled, the Auto Minor Version Upgrade status should change to Yes.
From Command Line:
4. The command output should reveal the new configuration metadata for the RDS
instance, including the AutoMinorVersionUpgrade parameter value.
5. Run the describe-db-instances command to check if the Auto Minor Version
Upgrade feature has been successfully enabled:
6. The command output should return the feature's current status set to true,
indicating that the feature is enabled, and that the minor engine upgrades will be
applied to the selected RDS instance.
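The commands these steps describe can take the following form (the region and instance identifier are placeholders):
# Enable Auto Minor Version Upgrade (add --apply-immediately to avoid waiting for the maintenance window)
aws rds modify-db-instance --region <region> --db-instance-identifier <db_instance_identifier> --auto-minor-version-upgrade
# Verify the new setting
aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].AutoMinorVersionUpgrade'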
References:
1. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_RDS_Mana
ging.html
2. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDB
Instance.Upgrading.html
3. https://aws.amazon.com/rds/faqs/
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 98
• Level 1
Description:
Ensure and verify that the RDS database instances provisioned in your AWS account
restrict unauthorized access in order to minimize security risks. To restrict access to any
RDS database instance, you must disable the Publicly Accessible flag for the database
and update the VPC security group associated with the instance.
Rationale:
Ensure that no public-facing RDS database instances are provisioned in your AWS
account, and restrict unauthorized access in order to minimize security risks. When the
RDS instance allows unrestricted access (0.0.0.0/0), anyone and anything on the
Internet can establish a connection to your database, which can increase the
opportunity for malicious activities such as brute-force attacks, SQL injection,
or DoS/DDoS attacks.
Audit:
From Console:
1. Log in to the AWS management console and navigate to the RDS dashboard at
https://console.aws.amazon.com/rds/.
2. Under the navigation panel, on the RDS dashboard, click Databases.
3. Select the RDS instance that you want to examine.
4. Click Instance Name from the dashboard, under Connectivity and Security.
5. In the Security section, check if the Publicly Accessible flag status is set to Yes.
6. Follow the steps below to check database subnet access:
From Command Line:
4. Check the Publicly Accessible parameter status. If the Publicly Accessible flag is
set to Yes, then the selected RDS database instance is publicly accessible and
insecure. Follow the steps mentioned below to check database subnet access.
5. Run the describe-db-instances command again using the RDS database
instance identifier that you want to check, along with the appropriate filtering to
describe the VPC subnet(s) associated with the selected instance:
• The command output should list the subnets available in the selected database
subnet group.
• If the command returns the route table associated with the database instance
subnet ID, check the values of the GatewayId and DestinationCidrBlock
attributes returned in the output. If the route table contains any entries with the
GatewayId value set to igw-xxxxxxxx and the DestinationCidrBlock value
set to 0.0.0.0/0, the selected RDS database instance was provisioned within a
public subnet.
• Or, if the command returns empty results, the route table is implicitly associated
with the subnet; therefore, the audit process continues with the next step.
• The command output should show the VPC ID in the selected database subnet
group.
• The command output returns the VPC main route table implicitly associated with
the database instance subnet ID. Check the values of the GatewayId and
DestinationCidrBlock attributes returned in the output. If the route table
contains any entries with the GatewayId value set to igw-xxxxxxxx and the
DestinationCidrBlock value set to 0.0.0.0/0, the selected RDS database
instance was provisioned inside a public subnet; therefore, it is not running within
a logically isolated environment and does not adhere to AWS security best
practices.
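The commands referenced in the steps above can take the following form (the region, instance identifier, and subnet ID are placeholders):
# Check whether the instance is publicly accessible
aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].PubliclyAccessible'
# List the subnets in the instance's DB subnet group
aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].DBSubnetGroup.Subnets[*].SubnetIdentifier'
# Inspect the route table associated with a subnet for a 0.0.0.0/0 route to an igw-xxxxxxxx gateway
aws ec2 describe-route-tables --region <region> --filters Name=association.subnet-id,Values=<subnet_id> --query 'RouteTables[*].Routes'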
Remediation:
From Console:
• Select the Connectivity and security tab, and click the VPC attribute value
inside the Networking section.
• Select the Details tab from the VPC dashboard's bottom panel and click the
Route table configuration attribute value.
• On the Route table details page, select the Routes tab from the dashboard's
bottom panel and click Edit routes.
• On the Edit routes page, update or remove the route whose Target is set to igw-
xxxxx, then click Save routes.
• Select Apply during the next scheduled maintenance window to apply the
changes automatically during the next scheduled maintenance window.
• Select Apply immediately to apply the changes right away. With this option,
any pending modifications will be asynchronously applied as soon as possible,
regardless of the maintenance window setting for this RDS database instance.
Note that any changes available in the pending modifications queue are also
applied. If any of the pending modifications require downtime, choosing this
option can cause unexpected downtime for the application.
8. Repeat steps 3-7 for each RDS instance in the current region.
9. Change the AWS region from the navigation bar to repeat the process for other
regions.
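From the command line, public accessibility can be disabled with a command of the following form (the region and instance identifier are placeholders):
aws rds modify-db-instance --region <region> --db-instance-identifier <db_instance_identifier> --no-publicly-accessible --apply-immediately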
References:
1. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.htm
l
2. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario2.html
3. https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_VPC.Worki
ngWithRDSInstanceinaVPC.html
4. https://aws.amazon.com/rds/faqs/
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 103
Page 104
• Level 1
Description:
Amazon RDS offers Multi-AZ deployments that provide enhanced availability and
durability for your databases, using synchronous replication to replicate data to a
standby instance in a different Availability Zone (AZ). In the event of an infrastructure
failure, Amazon RDS automatically fails over to the standby to minimize downtime and
ensure business continuity.
Rationale:
Database availability is crucial for maintaining service uptime, particularly for
applications that are critical to the business. Implementing Multi-AZ deployments with
Amazon RDS ensures that your databases are protected against unplanned outages
due to hardware failures, network issues, or other disruptions. This configuration
enhances both the availability and durability of your database, making it a highly
recommended practice for production environments.
Impact:
Multi-AZ deployments may increase costs due to the additional resources required to
maintain a standby instance; however, the benefits of increased availability and reduced
risk of downtime outweigh these costs for critical applications.
Audit:
From Console:
1. Login to the AWS Management Console and open the RDS dashboard at AWS
RDS Console.
2. In the navigation pane, under Databases, select the RDS instance you want to
examine.
3. Click the Instance Name to see details, then navigate to the Configuration
tab.
4. Under the Availability & Durability section, check the Multi-AZ status.
o If Multi-AZ deployment is enabled, it will display Yes.
o If it is disabled, the status will display No.
5. Repeat steps 2-4 to verify the Multi-AZ status of other RDS instances in the
same region.
6. Change the region from the top of the navigation bar and repeat the audit for
other regions.
From Command Line:
1. Run the following command to list all RDS instances in the selected AWS region:
2. Run the following command using the instance identifier returned earlier to check
the Multi-AZ status:
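These commands can take the following form (the region and instance identifier are placeholders); MultiAZ should be true:
aws rds describe-db-instances --region <region> --query 'DBInstances[*].DBInstanceIdentifier'
aws rds describe-db-instances --region <region> --db-instance-identifier <db_instance_identifier> --query 'DBInstances[*].MultiAZ'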
Remediation:
From Console:
1. Login to the AWS Management Console and open the RDS dashboard at AWS
RDS Console.
2. In the left navigation pane, click on Databases.
3. Select the database instance that needs Multi-AZ deployment to be enabled.
4. Click the Modify button at the top right.
5. Scroll down to the Availability & Durability section.
6. Under Multi-AZ deployment, select Yes to enable.
7. Review the changes and click Continue.
8. On the Review page, choose Apply immediately to make the change without
waiting for the next maintenance window, or Apply during the next
scheduled maintenance window.
9. Click Modify DB Instance to apply the changes.
From Command Line:
1. Run the following command to modify the RDS instance and enable Multi-AZ:
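For example (the region and instance identifier are placeholders; add --apply-immediately to avoid waiting for the maintenance window):
aws rds modify-db-instance --region <region> --db-instance-identifier <db_instance_identifier> --multi-az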
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 107
Page 108
• Level 1
Description:
EFS data should be encrypted at rest using AWS KMS (Key Management Service).
Rationale:
Data should be encrypted at rest to reduce the risk of a data breach via direct access to
the storage device.
Audit:
From Console:
1. Login to the AWS Management Console and Navigate to the Elastic File System
(EFS) dashboard.
2. Select File Systems from the left navigation panel.
3. Each item on the list has a visible Encrypted field that displays data at rest
encryption status.
4. Validate that this field reads Encrypted for all EFS file systems in all AWS
regions.
From CLI:
1. Run the describe-file-systems command using custom query filters to list the
identifiers of all AWS EFS file systems currently available within the selected
region:
2. The command output should return a table with the requested file system IDs.
3. Run the describe-file-systems command using the ID of the file system that
you want to examine as file-system-id and the necessary query filters:
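These commands can take the following form (the region and file system ID are placeholders); Encrypted should be true:
aws efs describe-file-systems --region <region> --query 'FileSystems[*].FileSystemId' --output table
aws efs describe-file-systems --region <region> --file-system-id <file_system_id> --query 'FileSystems[*].Encrypted'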
Remediation:
It is important to note that EFS file system data-at-rest encryption must be turned
on when creating the file system. If an EFS file system has been created without
data-at-rest encryption enabled, then you must create another EFS file system
with the correct configuration and transfer the data.
Steps to create an EFS file system with data encrypted at rest:
From Console:
1. Login to the AWS Management Console and Navigate to the Elastic File
System (EFS) dashboard.
2. Select File Systems from the left navigation panel.
3. Click the Create File System button from the dashboard top menu to start the
file system setup process.
4. On the Configure file system access configuration page, perform the
following actions:
6. Review the file system configuration details on the review and create page
and then click Create File System to create your new AWS EFS file system.
7. Copy the data from the old unencrypted EFS file system onto the newly created
encrypted file system.
From CLI:
5. The command output should return the new file system configuration metadata.
6. Run the create-mount-target command using the EFS file system ID returned
from step 4 as the identifier and the ID of the Availability Zone (AZ) that will
represent the mount target:
7. The command output should return the new mount target metadata.
8. Now you can mount your file system from an EC2 instance.
9. Copy the data from the old unencrypted EFS file system to the newly created
encrypted file system.
10. Remove the unencrypted file system as soon as your data migration to the newly
created encrypted file system is completed:
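The commands referenced in the steps above can take the following form (the region, creation token, subnet ID, and file system IDs are placeholders):
# Create a new EFS file system with encryption at rest enabled
aws efs create-file-system --region <region> --creation-token <unique_token> --encrypted
# Create a mount target for the new file system in a subnet of the chosen Availability Zone
aws efs create-mount-target --region <region> --file-system-id <file_system_id> --subnet-id <subnet_id>
# Delete the old, unencrypted file system once the data migration is complete
aws efs delete-file-system --region <region> --file-system-id <old_file_system_id>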
Default Value:
EFS file system data is encrypted at rest by default when creating a file system through
the Console. However, encryption at rest is not enabled by default when creating a new
file system using the AWS CLI, API, or SDKs.
References:
1. https://docs.aws.amazon.com/efs/latest/ug/encryption-at-rest.html
2. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/efs/index.ht
ml#efs
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 112
Page 113
• Level 1
Description:
AWS CloudTrail is a web service that records AWS API calls for your account and
delivers log files to you. The recorded information includes the identity of the API caller,
the time of the API call, the source IP address of the API caller, the request parameters,
and the response elements returned by the AWS service. CloudTrail provides a history
of AWS API calls for an account, including API calls made via the Management
Console, SDKs, command line tools, and higher-level AWS services (such as
CloudFormation).
Rationale:
The AWS API call history produced by CloudTrail enables security analysis, resource
change tracking, and compliance auditing. Additionally,
• ensuring that a multi-region trail exists will help detect unexpected activity
occurring in otherwise unused regions
• ensuring that a multi-region trail exists will ensure that Global Service
Logging is enabled for a trail by default to capture recordings of events
generated on AWS global services
• for a multi-region trail, ensuring that management events are configured for all
types of Read/Writes ensures the recording of management operations that are
performed on all resources in an AWS account
Impact:
S3 lifecycle features can be used to manage the accumulation and management of logs
over time. See the following AWS resource for more information on these features:
1. https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lifecycle-mgmt.html
Audit:
Perform the following to determine if CloudTrail is enabled for all regions:
From Console:
1. Sign in to the AWS Management Console and open the CloudTrail console at
https://console.aws.amazon.com/cloudtrail
2. Click on Trails in the left navigation pane
From Command Line:
4. Ensure there is at least one fieldSelector for a trail that equals Management:
• This should NOT output any results for Field: "readOnly". If either true or false
is returned, one of the checkboxes (read or write) is not selected.
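From the command line, checks of the following form can be used (the trail name is a placeholder):
# IsMultiRegionTrail should be true for at least one trail
aws cloudtrail describe-trails --query 'trailList[*].[Name,IsMultiRegionTrail]'
# The trail should be actively logging
aws cloudtrail get-trail-status --name <trail_name> --query IsLogging
# Review the event selectors for management events and Read/Write settings
aws cloudtrail get-event-selectors --trail-name <trail_name>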
Remediation:
Perform the following to enable global (Multi-region) CloudTrail logging:
From Console:
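From the command line, a multi-region trail can be created and started with commands of the following form (the trail and bucket names are placeholders):
aws cloudtrail create-trail --name <trail_name> --s3-bucket-name <s3_bucket_for_logs> --is-multi-region-trail
aws cloudtrail start-logging --name <trail_name>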
Default Value:
Not Enabled
References:
1. CCE-78913-1
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-
concepts.html#cloudtrail-concepts-management-events
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-
management-and-data-events-with-
cloudtrail.html?icmpid=docs_cloudtrail_console#logging-management-events
4. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-supported-
services.html#cloud-trail-supported-services-data-events
Page 116
Controls
Control IG 1 IG 2 IG 3
Version
Page 117
• Level 2
Description:
CloudTrail log file validation creates a digitally signed digest file containing a hash of
each log that CloudTrail writes to S3. These digest files can be used to determine
whether a log file was changed, deleted, or remained unchanged after CloudTrail
delivered the log. It is recommended that file validation be enabled for all CloudTrails.
Rationale:
Enabling log file validation will provide additional integrity checks for CloudTrail logs.
Audit:
Perform the following on each trail to determine if log file validation is enabled:
From Console:
1. Sign in to the AWS Management Console and open the CloudTrail console at
https://console.aws.amazon.com/cloudtrail.
2. Click on Trails in the left navigation pane.
3. For every trail:
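From the command line, a check of the following form can be used; LogFileValidationEnabled should be true for every trail:
aws cloudtrail describe-trails --query 'trailList[*].[Name,LogFileValidationEnabled]'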
Remediation:
Perform the following to enable log file validation on a given trail:
From Console:
1. Sign in to the AWS Management Console and open the CloudTrail console at
https://console.aws.amazon.com/cloudtrail.
2. Click on Trails in the left navigation pane.
3. Click on the target trail.
4. Within the General details section, click edit.
5. Under Advanced settings, check the enable box under Log file
validation.
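From the command line, log file validation can be enabled on an existing trail with a command of the following form (the trail name is a placeholder):
aws cloudtrail update-trail --name <trail_name> --enable-log-file-validation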
Default Value:
Not Enabled
References:
1. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudtrail-log-file-
validation-enabling.html
2. CCE-78914-9
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 119
• Level 2
Description:
AWS Config is a web service that performs configuration management of supported
AWS resources within your account and delivers log files to you. The recorded
information includes the configuration items (AWS resources), relationships between
configuration items (AWS resources), and any configuration changes between
resources. It is recommended that AWS Config be enabled in all regions.
Rationale:
The AWS configuration item history captured by AWS Config enables security analysis,
resource change tracking, and compliance auditing.
Impact:
Enabling AWS Config in all regions provides comprehensive visibility into resource
configurations, enhancing security and compliance monitoring. However, this may incur
additional costs and require proper configuration management.
Audit:
Process to evaluate AWS Config configuration per region:
From Console:
1. Sign in to the AWS Management Console and open the AWS Config console at
https://console.aws.amazon.com/config/.
2. On the top right of the console select the target region.
3. If a Config Recorder is enabled in this region, you should navigate to the Settings
page from the navigation menu on the left-hand side. If a Config Recorder is not
yet enabled in this region, proceed to the remediation steps.
4. Ensure "Record all resources supported in this region" is checked.
5. Ensure "Include global resources (e.g., AWS IAM resources)" is checked, unless
it is enabled in another region (this is only required in one region).
6. Ensure the correct S3 bucket has been defined.
7. Ensure the correct SNS topic has been defined.
8. Repeat steps 2 to 7 for each region.
From Command Line:
1. Run this command to show all AWS Config Recorders and their properties:
2. Evaluate the output to ensure that all recorders have a recordingGroup object
which includes "allSupported": true. Additionally, ensure that at least one
recorder has "includeGlobalResourceTypes": true.
3. Run this command to show the status for all AWS Config Recorders:
4. In the output, find recorders with name key matching the recorders that were
evaluated in step 2. Ensure that they include "recording": true and
"lastStatus": "SUCCESS".
Remediation:
To implement AWS Config configuration:
From Console:
1. Select the region you want to focus on in the top right of the console.
2. Click Services.
3. Click Config.
4. If a Config Recorder is enabled in this region, navigate to the Settings page from
the navigation menu on the left-hand side. If a Config Recorder is not yet enabled
in this region, select "Get Started".
5. Select "Record all resources supported in this region".
From Command Line:
1. Ensure there is an appropriate S3 bucket, SNS topic, and IAM role per the AWS
Config Service prerequisites.
2. Run this command to create a new configuration recorder:
3. Create a delivery channel configuration file locally which specifies the channel
attributes, populated from the prerequisites set up previously:
{
"name": "<delivery-channel-name>",
"s3BucketName": "<bucket-name>",
"snsTopicARN": "arn:aws:sns:<region>:<account-id>:<sns-topic>",
"configSnapshotDeliveryProperties": {
"deliveryFrequency": "Twelve_Hours"
}
}
4. Run this command to create a new delivery channel, referencing the json
configuration file made in the previous step:
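The commands referenced in steps 2 and 4 can take the following form (the recorder name, IAM role, and file name are placeholders); the recorder should also be started once the delivery channel exists:
aws configservice put-configuration-recorder --configuration-recorder name=<recorder_name>,roleARN=arn:aws:iam::<account-id>:role/<config_role> --recording-group allSupported=true,includeGlobalResourceTypes=true
aws configservice put-delivery-channel --delivery-channel file://<delivery-channel-file>.json
aws configservice start-configuration-recorder --configuration-recorder-name <recorder_name>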
References:
1. CCE-78917-2
2. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configservi
ce/describe-configuration-recorder-status.html
3. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configservi
ce/describe-configuration-recorders.html
Page 122
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 123
• Level 1
Description:
Server access logging generates a log that contains access records for each request
made to your S3 bucket. An access log record contains details about the request, such
as the request type, the resources the request worked with, and the time and
date the request was processed. It is recommended that server access logging be
enabled on the CloudTrail S3 bucket.
Rationale:
By enabling server access logging on target S3 buckets, it is possible to capture all
events that may affect objects within any target bucket. Configuring the logs to be
placed in a separate bucket allows access to log information that can be useful in
security and incident response workflows.
Audit:
Perform the following to ensure that the CloudTrail S3 bucket has access logging
enabled:
From Console:
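From the command line, a check of the following form can be used (the bucket name is a placeholder); empty output indicates that server access logging is disabled:
aws s3api get-bucket-logging --bucket <cloudtrail_bucket_name>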
Remediation:
Perform the following to enable server access logging:
From Console:
2. Copy and add the target bucket name at <bucket-name>, the prefix for the log
file at <log-file-prefix>, and optionally add an email address in the following
template, then save it as <file-name>.json:
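The template and command this step refers to can take the following form (the bucket names, prefix, and file name are placeholders):
{
"LoggingEnabled": {
"TargetBucket": "<bucket-name>",
"TargetPrefix": "<log-file-prefix>"
}
}
aws s3api put-bucket-logging --bucket <cloudtrail_bucket_name> --bucket-logging-status file://<file-name>.json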
Default Value:
Logging is disabled.
References:
1. CCE-78918-0
2. https://docs.aws.amazon.com/AmazonS3/latest/dev/ServerLogs.html
3. https://docs.aws.amazon.com/AmazonS3/latest/userguide/enable-server-access-
logging.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 126
Page 127
• Level 2
Description:
AWS CloudTrail is a web service that records AWS API calls for an account and makes
those logs available to users and resources in accordance with IAM policies. AWS Key
Management Service (KMS) is a managed service that helps create and control the
encryption keys used to encrypt account data, and uses Hardware Security Modules
(HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to
leverage server side encryption (SSE) and KMS customer-created master keys (CMK)
to further protect CloudTrail logs. It is recommended that CloudTrail be configured to
use SSE-KMS.
Rationale:
Configuring CloudTrail to use SSE-KMS provides additional confidentiality controls on
log data, as a given user must have S3 read permission on the corresponding log
bucket and must be granted decrypt permission by the CMK policy.
Impact:
Customer-created keys incur an additional cost. See
https://aws.amazon.com/kms/pricing/ for more information.
Audit:
Perform the following to determine if CloudTrail is configured to use SSE-KMS:
From Console:
1. Sign in to the AWS Management Console and open the CloudTrail console at
https://console.aws.amazon.com/cloudtrail.
2. In the left navigation pane, choose Trails.
3. Select a trail.
4. In the General details section, select Edit to edit the trail configuration.
5. Ensure the box at Log file SSE-KMS encryption is checked and that a valid
AWS KMS alias of a KMS key is entered in the respective text box.
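From the command line, a check of the following form can be used; each trail should return a KmsKeyId value:
aws cloudtrail describe-trails --query 'trailList[*].[Name,KmsKeyId]'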
Remediation:
Perform the following to configure CloudTrail to use SSE-KMS:
From Console:
1. Sign in to the AWS Management Console and open the CloudTrail console at
https://console.aws.amazon.com/cloudtrail.
2. In the left navigation pane, choose Trails.
3. Click on a trail.
4. Under the S3 section, click the edit button (pencil icon).
5. Click Advanced.
6. Select an existing CMK from the KMS key Id drop-down menu.
•
Note: Ensure the CMK is located in the same region as the S3 bucket.
•
Note: You will need to apply a KMS key policy on the selected CMK in order for
CloudTrail, as a service, to encrypt and decrypt log files using the CMK provided.
View the AWS documentation for editing the selected CMK Key policy.
7. Click Save.
8. You will see a notification message stating that you need to have decryption
permissions on the specified KMS key to decrypt log files.
9. Click Yes.
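From the command line, SSE-KMS can be configured on an existing trail with a command of the following form (the trail name and key identifier are placeholders):
aws cloudtrail update-trail --name <trail_name> --kms-key-id <kms_key_arn_or_alias>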
References:
1. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/encrypting-
cloudtrail-log-files-with-aws-kms.html
2. https://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html
3. CCE-78919-8
4. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/cloudtrail/u
pdate-trail.html
Additional Information:
Three statements that need to be added to the CMK policy:
1. Enable CloudTrail to describe CMK properties:
{
"Sid": "Allow CloudTrail access",
"Effect": "Allow",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
},
"Action": "kms:DescribeKey",
"Resource": "*"
}
2. Granting encrypt permissions:
{
"Sid": "Allow CloudTrail to encrypt logs",
"Effect": "Allow",
"Principal": {
"Service": "cloudtrail.amazonaws.com"
},
"Action": "kms:GenerateDataKey*",
"Resource": "*",
"Condition": {
"StringLike": {
"kms:EncryptionContext:aws:cloudtrail:arn": [
"arn:aws:cloudtrail:*:aws-account-id:trail/*"
]
}
}
}
3. Granting decrypt permissions:
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 131
• Level 2
Description:
AWS Key Management Service (KMS) allows customers to rotate the backing key,
which is key material stored within the KMS that is tied to the key ID of the customer-
created customer master key (CMK). The backing key is used to perform cryptographic
operations such as encryption and decryption. Automated key rotation currently retains
all prior backing keys so that decryption of encrypted data can occur transparently. It is
recommended that CMK key rotation be enabled for symmetric keys. Key rotation
cannot be enabled for any asymmetric CMK.
Rationale:
Rotating encryption keys helps reduce the potential impact of a compromised key, as
data encrypted with a new key cannot be accessed with a previous key that may have
been exposed. Keys should be rotated every year or upon an event that could result in
the compromise of that key.
Impact:
Creation, management, and storage of CMKs may require additional time from an
administrator.
Audit:
From Console:
1. Sign in to the AWS Management Console and open the KMS console at:
https://console.aws.amazon.com/kms.
2. In the left navigation pane, click Customer-managed keys.
3. Select a customer-managed CMK where Key spec = SYMMETRIC_DEFAULT.
4. Select the Key rotation tab.
5. Ensure the Automatically rotate this KMS key every year box is
checked.
6. Repeat steps 3–5 for all customer-managed CMKs where Key spec =
SYMMETRIC_DEFAULT.
From Command Line:
1. Run the following command to get a list of all keys and their associated KeyIds:
2. For each key, note the KeyId and run the following command:
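These commands can take the following form (the key ID is a placeholder); KeyRotationEnabled should be true for each symmetric customer-managed key:
aws kms list-keys
aws kms get-key-rotation-status --key-id <kms_key_id>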
Remediation:
From Console:
1. Sign in to the AWS Management Console and open the KMS console at:
https://console.aws.amazon.com/kms.
2. In the left navigation pane, click Customer-managed keys.
3. Select a key with Key spec = SYMMETRIC_DEFAULT that does not have
automatic rotation enabled.
4. Select the Key rotation tab.
5. Check the Automatically rotate this KMS key every year box.
6. Click Save.
7. Repeat steps 3–6 for all customer-managed CMKs that do not have automatic
rotation enabled.
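From the command line, rotation can be enabled with a command of the following form (the key ID is a placeholder):
aws kms enable-key-rotation --key-id <kms_key_id>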
References:
1. https://aws.amazon.com/kms/pricing/
2. https://csrc.nist.gov/publications/detail/sp/800-57-part-1/rev-5/final
3. CCE-78920-6
Page 133
Controls
Control IG 1 IG 2 IG 3
Version
Page 134
• Level 2
Description:
VPC Flow Logs is a feature that enables you to capture information about the IP traffic
going to and from network interfaces in your VPC. After you've created a flow log, you
can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that
VPC Flow Logs be enabled for packet "Rejects" for VPCs.
Rationale:
VPC Flow Logs provide visibility into network traffic that traverses the VPC and can be
used to detect anomalous traffic or gain insights during security workflows.
Impact:
By default, CloudWatch Logs will store logs indefinitely unless a specific retention
period is defined for the log group. When choosing the number of days to retain, keep in
mind that the average time it takes for an organization to realize they have been
breached is 210 days (at the time of this writing). Since additional time is required to
research a breach, a minimum retention policy of 365 days allows for detection and
investigation. You may also wish to archive the logs to a cheaper storage service rather
than simply deleting them. See the following AWS resource to manage CloudWatch
Logs retention periods:
1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/Settin
gLogRetention.html
Audit:
Perform the following to determine if VPC Flow logs are enabled:
From Console:
From Command Line:
2. The command output returns the VpcId of VPCs available in the selected region.
3. Run the describe-flow-logs command (OSX/Linux/UNIX) using the VPC ID to
determine if the selected virtual network has the Flow Logs feature enabled:
• If there are no Flow Logs created for the selected VPC, the command output will
return an empty list [].
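The commands referenced in these steps can take the following form (the region and VPC ID are placeholders):
aws ec2 describe-vpcs --region <region> --query 'Vpcs[*].VpcId'
aws ec2 describe-flow-logs --region <region> --filter Name=resource-id,Values=<vpc_id>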
Remediation:
Perform the following to enable VPC Flow Logs:
From Console:
Note: Setting the filter to "Reject" will dramatically reduce the accumulation of logging
data for this recommendation and provide sufficient information for the purposes of
breach detection, research, and remediation. However, during periods of least privilege
security group engineering, setting the filter to "All" can be very helpful in discovering
existing traffic flows required for the proper operation of an already running
environment.
From Command Line:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action":[
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:DescribeLogGroups",
"logs:DescribeLogStreams",
"logs:PutLogEvents",
"logs:GetLogEvents",
"logs:FilterLogEvents"
],
"Resource": "*"
}
]
}
6. Run the describe-vpcs command to get a list of VPCs in the selected region:
• The command output should return a list of VPCs in the selected region.
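A flow log capturing rejected traffic can then be created with a command of the following form (the VPC ID, log group, and IAM role are placeholders):
aws ec2 create-flow-logs --resource-type VPC --resource-ids <vpc_id> --traffic-type REJECT --log-group-name <log_group_name> --deliver-logs-permission-arn arn:aws:iam::<account-id>:role/<flow_logs_role>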
References:
1. CCE-79202-8
2. https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/flow-logs.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 138
Page 139
• Level 2
Description:
S3 object-level API operations, such as GetObject, DeleteObject, and PutObject, are
referred to as data events. By default, CloudTrail trails do not log data events, so it is
recommended to enable object-level logging for S3 buckets.
Rationale:
Enabling object-level logging will help you meet data compliance requirements within
your organization, perform comprehensive security analyses, monitor specific patterns
of user behavior in your AWS account, or take immediate actions on any object-level
API activity within your S3 buckets using Amazon CloudWatch Events.
Impact:
Enabling logging for these object-level events may significantly increase the number of
events logged and may incur additional costs.
Audit:
From Console:
Data Events: S3
Log selector template
Log all events
6. Repeat steps 2-5 to verify that each trail has multi-region enabled and is
configured to log data events. If a trail does not have multi-region enabled and
data event logging configured, refer to the remediation steps.
"TrailARN": "arn:aws:cloudtrail:<region>:<account#>:trail/<trail-name>",
"Name": "<trail-name>",
"HomeRegion": "<region>"
6. The command output should be an array that includes the S3 bucket defined for
data event logging:
"Type": "AWS::S3::Object",
"Values": [
"arn:aws:s3"
Remediation:
From Command Line:
1. To enable object-level data events logging for S3 buckets within your AWS
account, run the put-event-selectors command using the name of the trail
that you want to reconfigure as identifier:
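This command can take the following form (the trail name is a placeholder); the ReadWriteType value shown here is WriteOnly and should be adjusted to match the event types this recommendation targets:
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[{"ReadWriteType": "WriteOnly", "IncludeManagementEvents": true, "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}]}]'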
References:
1. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-
events.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 143
• Level 2
Description:
S3 object-level API operations, such as GetObject, DeleteObject, and PutObject, are
referred to as data events. By default, CloudTrail trails do not log data events, so it is
recommended to enable object-level logging for S3 buckets.
Rationale:
Enabling object-level logging will help you meet data compliance requirements within
your organization, perform comprehensive security analyses, monitor specific patterns
of user behavior in your AWS account, or take immediate actions on any object-level
API activity within your S3 buckets using Amazon CloudWatch Events.
Impact:
Enabling logging for these object-level events may significantly increase the number of
events logged and may incur additional costs.
Audit:
From Console:
Data Events: S3
Log selector template
Log all events
6. Repeat steps 2-5 to verify that each trail has multi-region enabled and is
configured to log data events. If a trail does not have multi-region enabled and
data event logging configured, refer to the remediation steps.
4. The command output should be an array that includes the S3 bucket defined for
data event logging.
5. If the get-event-selectors command returns an empty array, data events are
not included in the trail's logging configuration; therefore, object-level API
operations performed on S3 buckets within your AWS account are not being
recorded.
6. Repeat steps 1-5 to verify the configuration of each trail.
7. Change the AWS region by updating the --region command parameter, and
perform the audit process for other regions.
Remediation:
From Command Line:
1. To enable object-level data events logging for S3 buckets within your AWS
account, run the put-event-selectors command using the name of the trail
that you want to reconfigure as identifier:
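This command can take the following form (the trail name is a placeholder); the ReadWriteType value shown here is ReadOnly and should be adjusted to match the event types this recommendation targets:
aws cloudtrail put-event-selectors --trail-name <trail_name> --event-selectors '[{"ReadWriteType": "ReadOnly", "IncludeManagementEvents": true, "DataResources": [{"Type": "AWS::S3::Object", "Values": ["arn:aws:s3"]}]}]'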
References:
1. https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-
events.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 146
Page 147
Page 148
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for unauthorized API
calls.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring unauthorized API calls will help reduce the time it takes to detect malicious
activity and can alert you to potential security incidents.
Impact:
This alert may be triggered by normal read-only console activities that attempt to
opportunistically gather optional information but gracefully fail if they lack the necessary
permissions.
If an excessive number of alerts are generated, then an organization may wish to
consider adding read access to the limited IAM user permissions solely to reduce the
number of alerts.
In some cases, doing this may allow users to actually view some areas of the system;
any additional access granted should be reviewed for alignment with the original limited
IAM user intent.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
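The commands referenced in the audit steps above can take the following form (names and ARNs are placeholders):
# Identify the CloudWatch Logs group of the active multi-region trail (step 1)
aws cloudtrail describe-trails --query 'trailList[?IsMultiRegionTrail==`true`].[Name,CloudWatchLogsLogGroupArn]'
# List the metric filters on that log group and review their filterPattern values (step 3)
aws logs describe-metric-filters --log-group-name <trail-log-group-name>
# Find the alarm attached to the metric and note its AlarmActions (step 6)
aws cloudwatch describe-alarms-for-metric --metric-name <metric_name> --namespace <metric_namespace>
# Confirm at least one confirmed subscription on the SNS topic (step 7)
aws sns list-subscriptions-by-topic --topic-arn <sns_topic_arn>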
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for
unauthorized API calls and uses the <trail-log-group-name> taken from audit
step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
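The commands in this procedure can take the following form (names, ARNs, and the endpoint are placeholders; the filter pattern shown is the commonly used unauthorized API calls pattern and should match the pattern prescribed in the audit above):
# Step 1: metric filter on the trail's log group
aws logs put-metric-filter --log-group-name <trail-log-group-name> --filter-name unauthorized_api_calls_metric --filter-pattern '{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }' --metric-transformations metricName=unauthorized_api_calls_metric,metricNamespace=CISBenchmark,metricValue=1
# Step 2: SNS topic for notifications
aws sns create-topic --name <sns_topic_name>
# Step 3: subscribe an endpoint to the topic
aws sns subscribe --topic-arn <sns_topic_arn> --protocol email --notification-endpoint <email_address>
# Step 4: alarm tied to the metric filter and topic
aws cloudwatch put-metric-alarm --alarm-name unauthorized_api_calls_alarm --metric-name unauthorized_api_calls_metric --namespace CISBenchmark --statistic Sum --period 300 --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold --evaluation-periods 1 --alarm-actions <sns_topic_arn>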
References:
1. https://aws.amazon.com/sns/
2. CCE-79186-3
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-
log-files-from-multiple-regions.html
4. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-
for-cloudtrail.html
5. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 152
Page 153
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for console logins that
are not protected by multi-factor authentication (MFA).
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring for single-factor console logins will increase visibility into accounts that are
not protected by MFA. These type of accounts are more susceptible to compromise and
unauthorized access.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
Or, to reduce false positives in case Single Sign-On (SSO) is used in the
organization:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for AWS
Management Console sign-ins without MFA and uses the <trail-log-group-
name> taken from audit step 1.
Or, to reduce false positives in case Single Sign-On (SSO) is used in the
organization:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
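The step 1 command can take the following form (the SNS topic, subscription, and alarm commands follow the same pattern as in the previous recommendation; the filter pattern shown is the commonly used no-MFA console sign-in pattern and should match the pattern prescribed above):
aws logs put-metric-filter --log-group-name <trail-log-group-name> --filter-name no_mfa_console_signin_metric --filter-pattern '{ ($.eventName = "ConsoleLogin") && ($.additionalEventData.MFAUsed != "Yes") }' --metric-transformations metricName=no_mfa_console_signin_metric,metricNamespace=CISBenchmark,metricValue=1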
References:
1. https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/viewin
g_metrics_with_cloudwatch.html
2. CCE-79187-1
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-
log-files-from-multiple-regions.html
4. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-
for-cloudtrail.html
5. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 157
Page 158
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for 'root' login attempts
to detect unauthorized use or attempts to use the root account.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring 'root' account logins will provide visibility into the use of a fully privileged
account and the opportunity to reduce its usage.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for 'root'
account usage and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
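The step 1 command can take the following form (the remaining commands follow the same pattern as in the unauthorized API calls recommendation; the filter pattern shown is the commonly used 'root' usage pattern and should match the pattern prescribed above):
aws logs put-metric-filter --log-group-name <trail-log-group-name> --filter-name root_usage_metric --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }' --metric-transformations metricName=root_usage_metric,metricNamespace=CISBenchmark,metricValue=1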
References:
1. CCE-79188-9
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-
log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-
for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
Page 162
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for changes made to
Identity and Access Management (IAM) policies.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to IAM policies will help ensure authentication and authorization
controls remain intact.
Impact:
Monitoring these changes may result in a number of "false positives," especially in
larger environments. This alert may require more tuning than others to eliminate some
of those erroneous notifications.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
"filterPattern":
"{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.e
ventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=
PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)
||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eve
ntName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventNa
me=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=Deta
chUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGrou
pPolicy)}"
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for IAM
policy changes and the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
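Note: The step 1 command is not reproduced in this extract. An illustrative sketch using the filter pattern listed in the audit section (the filter and metric names are placeholders) would be:
aws logs put-metric-filter --log-group-name "<trail-log-group-name>" \
  --filter-name iam-policy-changes \
  --filter-pattern '{($.eventName=DeleteGroupPolicy)||($.eventName=DeleteRolePolicy)||($.eventName=DeleteUserPolicy)||($.eventName=PutGroupPolicy)||($.eventName=PutRolePolicy)||($.eventName=PutUserPolicy)||($.eventName=CreatePolicy)||($.eventName=DeletePolicy)||($.eventName=CreatePolicyVersion)||($.eventName=DeletePolicyVersion)||($.eventName=AttachRolePolicy)||($.eventName=DetachRolePolicy)||($.eventName=AttachUserPolicy)||($.eventName=DetachUserPolicy)||($.eventName=AttachGroupPolicy)||($.eventName=DetachGroupPolicy)}' \
  --metric-transformations metricName=IAMPolicyChanges,metricNamespace=CISBenchmark,metricValue=1
The SNS topic, subscription, and alarm are then created as in the 'root' account usage sketch earlier in this section.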
References:
1. CCE-79189-7
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be used to detect changes to
CloudTrail's configurations.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to CloudTrail's configuration will help ensure sustained visibility into
the activities performed in the AWS account.
Impact:
Ensuring that changes to CloudTrail configurations are monitored enhances security by
maintaining the integrity of logging mechanisms. Automated monitoring can provide
real-time alerts; however, it may require additional setup and resources to configure and
manage these alerts effectively. These steps can be performed manually within a
company's existing SIEM platform in cases where CloudTrail logs are monitored outside
of the AWS monitoring tools in CloudWatch.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
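Note: The prescribed filter pattern is not reproduced in this extract. A pattern commonly used for CloudTrail configuration changes is shown below as an illustration only (verify against the Benchmark); the metric filter, SNS topic, subscription, and alarm are then created as in the 'root' account usage sketch earlier in this section:
{ ($.eventName = CreateTrail) || ($.eventName = UpdateTrail) || ($.eventName = DeleteTrail) || ($.eventName = StartLogging) || ($.eventName = StopLogging) }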
References:
1. CCE-79190-5
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for failed console
authentication attempts.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring failed console logins may decrease the lead time to detect an attempt to
brute-force a credential, which may provide an indicator, such as the source IP address,
that can be used in other event correlations.
Impact:
Monitoring for these failures may generate a large number of alerts, especially in larger
environments.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
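Note: As an illustration only (verify against the Benchmark), a filter pattern commonly used for failed console authentication attempts is:
{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }
The remaining metric filter, topic, subscription, and alarm commands follow the same structure as the 'root' account usage sketch earlier in this section.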
References:
1. CCE-79191-3
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for customer-created
CMKs that have changed state to disabled or are scheduled for deletion.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Data encrypted with disabled or deleted keys will no longer be accessible. Changes in
the state of a CMK should be monitored to ensure that the change is intentional.
Impact:
Creation, storage, and management of CMKs may require additional labor compared to the use of AWS-managed keys.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for CMKs
that have been disabled or scheduled for deletion and uses the <trail-log-
group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
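Note: A filter pattern commonly used for CMKs that are disabled or scheduled for deletion, shown as an illustration only (verify against the Benchmark):
{ ($.eventSource = kms.amazonaws.com) && (($.eventName = DisableKey) || ($.eventName = ScheduleKeyDeletion)) }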
References:
1. CCE-79192-1
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for changes to S3 bucket
policies.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to S3 bucket policies may reduce the time it takes to detect and
correct permissive policies on sensitive S3 buckets.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for changes
to S3 bucket policies and uses the <trail-log-group-name> taken from audit
step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
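Note: Shown as an illustration only (verify against the Benchmark), a commonly used filter pattern for S3 bucket policy changes is:
{ ($.eventSource = s3.amazonaws.com) && (($.eventName = PutBucketAcl) || ($.eventName = PutBucketPolicy) || ($.eventName = PutBucketCors) || ($.eventName = PutBucketLifecycle) || ($.eventName = PutBucketReplication) || ($.eventName = DeleteBucketPolicy) || ($.eventName = DeleteBucketCors) || ($.eventName = DeleteBucketLifecycle) || ($.eventName = DeleteBucketReplication)) }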
References:
1. CCE-79193-9
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for detecting changes to
AWS Config's configurations.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to the AWS Config configuration will help ensure sustained visibility
of the configuration items within the AWS account.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for AWS Config configuration changes and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
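Note: A commonly used filter pattern for AWS Config configuration changes, shown for illustration only (verify against the Benchmark):
{ ($.eventSource = config.amazonaws.com) && (($.eventName = StopConfigurationRecorder) || ($.eventName = DeleteDeliveryChannel) || ($.eventName = PutDeliveryChannel) || ($.eventName = PutConfigurationRecorder)) }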
References:
1. CCE-79194-7
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
Security groups are stateful packet filters that control ingress and egress traffic within a
VPC.
It is recommended that a metric filter and alarm be established to detect changes to
security groups.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to security groups will help ensure that resources and services are
not unintentionally exposed.
Impact:
This may require additional 'tuning' to eliminate false positives and filter out expected
activity so that anomalies are easier to detect.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
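Note: For illustration only (verify against the Benchmark), a commonly used filter pattern for security group changes is:
{ ($.eventName = AuthorizeSecurityGroupIngress) || ($.eventName = AuthorizeSecurityGroupEgress) || ($.eventName = RevokeSecurityGroupIngress) || ($.eventName = RevokeSecurityGroupEgress) || ($.eventName = CreateSecurityGroup) || ($.eventName = DeleteSecurityGroup) }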
References:
1. CCE-79195-4
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
NACLs are used as a stateless packet filter to control ingress and egress traffic for
subnets within a VPC. It is recommended that a metric filter and alarm be established
for any changes made to NACLs.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to NACLs will help ensure that AWS resources and services are not
unintentionally exposed.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for NACL
changes and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
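Note: A commonly used filter pattern for NACL changes, provided for illustration only (verify against the Benchmark):
{ ($.eventName = CreateNetworkAcl) || ($.eventName = CreateNetworkAclEntry) || ($.eventName = DeleteNetworkAcl) || ($.eventName = DeleteNetworkAclEntry) || ($.eventName = ReplaceNetworkAclEntry) || ($.eventName = ReplaceNetworkAclAssociation) }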
References:
1. CCE-79196-2
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
Network gateways are required to send and receive traffic to a destination outside of a
VPC. It is recommended that a metric filter and alarm be established for changes to
network gateways.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to network gateways will help ensure that all ingress/egress traffic
traverses the VPC border via a controlled path.
Impact:
Monitoring changes to network gateways helps detect unauthorized modifications that
could compromise network security. Implementing automated monitoring and alerts can
improve incident response times, but it may require additional configuration and
maintenance efforts.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for network gateway changes and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
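Note: For illustration only (verify against the Benchmark), a commonly used filter pattern for network gateway changes is:
{ ($.eventName = CreateCustomerGateway) || ($.eventName = DeleteCustomerGateway) || ($.eventName = AttachInternetGateway) || ($.eventName = CreateInternetGateway) || ($.eventName = DeleteInternetGateway) || ($.eventName = DetachInternetGateway) }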
References:
1. CCE-79197-0
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
Routing tables are used to route network traffic between subnets and to network
gateways.
It is recommended that a metric filter and alarm be established for changes to route
tables.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring changes to route tables will help ensure that all VPC traffic flows through the
expected path and prevent any accidental or intentional modifications that may lead to
uncontrolled network traffic. An alarm should be triggered every time an AWS API call is
performed to create, replace, delete, or disassociate a route table.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up
the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for route
table changes and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
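Note: A commonly used filter pattern for route table changes, provided for illustration only (verify against the Benchmark):
{ ($.eventSource = ec2.amazonaws.com) && (($.eventName = CreateRoute) || ($.eventName = CreateRouteTable) || ($.eventName = ReplaceRoute) || ($.eventName = ReplaceRouteTableAssociation) || ($.eventName = DeleteRouteTable) || ($.eventName = DeleteRoute) || ($.eventName = DisassociateRouteTable)) }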
References:
1. CCE-79198-8
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is possible to have more than one VPC within an account; in addition, a peering connection can be created between two VPCs, enabling network traffic to route between them.
It is recommended that a metric filter and alarm be established for changes made to
VPCs.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
VPCs in AWS are logically isolated virtual networks that can be used to launch AWS
resources. Monitoring changes to VPC configurations will help ensure that VPC traffic
flow is not negatively impacted. Changes to VPCs can affect network accessibility from
the public internet and additionally impact VPC traffic flow to and from the resources
launched in the VPC.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for VPC
changes and uses the <trail-log-group-name> taken from audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
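Note: For illustration only (verify against the Benchmark), a commonly used filter pattern for VPC changes is:
{ ($.eventName = CreateVpc) || ($.eventName = DeleteVpc) || ($.eventName = ModifyVpcAttribute) || ($.eventName = AcceptVpcPeeringConnection) || ($.eventName = CreateVpcPeeringConnection) || ($.eventName = DeleteVpcPeeringConnection) || ($.eventName = RejectVpcPeeringConnection) || ($.eventName = AttachClassicLinkVpc) || ($.eventName = DetachClassicLinkVpc) || ($.eventName = DisableVpcClassicLink) || ($.eventName = EnableVpcClassicLink) }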
References:
1. CCE-79199-6
2. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/receive-cloudtrail-log-files-from-multiple-regions.html
3. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
4. https://docs.aws.amazon.com/sns/latest/dg/SubscribeTopic.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to
CloudWatch Logs or an external Security Information and Event Management (SIEM)
environment, and establishing corresponding metric filters and alarms.
It is recommended that a metric filter and alarm be established for changes made to
AWS Organizations in the master AWS account.
Rationale:
CloudWatch is an AWS native service that allows you to observe and monitor resources
and applications. CloudTrail logs can also be sent to an external Security Information
and Event Management (SIEM) environment for monitoring and alerting.
Monitoring AWS Organizations changes can help you prevent unwanted, accidental, or
intentional modifications that may lead to unauthorized access or other security
breaches. This monitoring technique helps ensure that any unexpected changes made
within your AWS Organizations can be investigated and that any unwanted changes
can be rolled back.
Audit:
If you are using CloudTrail trails and CloudWatch, perform the following to ensure that
there is at least one active multi-region CloudTrail trail with the prescribed metric filters
and alarms configured:
1. Identify the log group name that is configured for use with the active multi-region
CloudTrail trail:
3. Ensure the output from the above command contains the following:
6. Note the AlarmActions value; this will provide the SNS topic ARN value.
7. Ensure there is at least one active subscriber to the SNS topic:
• At least one subscription should have "SubscriptionArn" with a valid AWS ARN.
o Example of valid "SubscriptionArn": arn:aws:sns:<region>:<account-
id>:<sns-topic-name>:<subscription-id>
Remediation:
If you are using CloudTrail trails and CloudWatch, perform the following steps to set up the metric filter, alarm, SNS topic, and subscription:
1. Create a metric filter based on the provided filter pattern that checks for AWS
Organizations changes and uses the <trail-log-group-name> taken from
audit step 1:
Note: You can choose your own metricName and metricNamespace strings.
Using the same metricNamespace for all Foundations Benchmark metrics will
group them together.
Note: You can execute this command once and then reuse the same topic for all
monitoring alarms.
Note: Capture the TopicArn that is displayed when creating the SNS topic in
step 2.
Note: You can execute this command once and then reuse the same
subscription for all monitoring alarms.
4. Create an alarm that is associated with the CloudWatch Logs metric filter created
in step 1 and the SNS topic created in step 2:
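Note: A commonly used filter pattern for AWS Organizations changes, provided for illustration only (verify against the Benchmark):
{ ($.eventSource = organizations.amazonaws.com) && (($.eventName = AcceptHandshake) || ($.eventName = AttachPolicy) || ($.eventName = CreateAccount) || ($.eventName = CreateOrganizationalUnit) || ($.eventName = CreatePolicy) || ($.eventName = DeclineHandshake) || ($.eventName = DeleteOrganization) || ($.eventName = DeleteOrganizationalUnit) || ($.eventName = DeletePolicy) || ($.eventName = DetachPolicy) || ($.eventName = DisablePolicyType) || ($.eventName = EnablePolicyType) || ($.eventName = InviteAccountToOrganization) || ($.eventName = LeaveOrganization) || ($.eventName = MoveAccount) || ($.eventName = RemoveAccountFromOrganization) || ($.eventName = UpdatePolicy) || ($.eventName = UpdateOrganizationalUnit)) }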
References:
1. https://docs.aws.amazon.com/awscloudtrail/latest/userguide/cloudwatch-alarms-for-cloudtrail.html
2. https://docs.aws.amazon.com/organizations/latest/userguide/orgs_security_incident-response.html
Additional Information:
Configuring a log metric filter and alarm on a multi-region (global) CloudTrail trail:
• ensures that activities from all regions (both used and unused) are monitored
• ensures that activities on all supported global services are monitored
• ensures that all management events across all regions are monitored
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Security Hub collects security data from various AWS accounts, services, and
supported third-party partner products, helping you analyze your security trends and
identify the highest-priority security issues. When you enable Security Hub, it begins to
consume, aggregate, organize, and prioritize findings from the AWS services that you
have enabled, such as Amazon GuardDuty, Amazon Inspector, and Amazon Macie.
You can also enable integrations with AWS partner security products.
Rationale:
AWS Security Hub provides you with a comprehensive view of your security state in
AWS and helps you check your environment against security industry standards and
best practices, enabling you to quickly assess the security posture across your AWS
accounts.
Impact:
It is recommended that AWS Security Hub be enabled in all regions. AWS Security Hub
requires that AWS Config be enabled.
Audit:
Follow this process to evaluate AWS Security Hub configuration per region:
From Console:
1. Sign in to the AWS Management Console and open the AWS Security Hub
console at https://console.aws.amazon.com/securityhub/.
2. On the top right of the console, select the target Region.
3. If the Security Hub > Summary page is displayed, then Security Hub is set up for
the selected region.
4. If presented with "Setup Security Hub" or "Get Started With Security Hub," refer
to the remediation steps.
5. Repeat steps 2 to 4 for each region.
Remediation:
To grant the permissions required to enable Security Hub, attach the Security Hub
managed policy AWSSecurityHubFullAccess to an IAM user, group, or role.
Enabling Security Hub:
From Console:
1. Use the credentials of the IAM identity to sign in to the Security Hub console.
2. When you open the Security Hub console for the first time, choose Go to
Security Hub.
3. The Security standards section on the welcome page lists supported security
standards. Check the box for a standard to enable it.
4. Choose Enable Security Hub.
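Note: An illustrative AWS CLI sketch for enabling Security Hub in a given region (repeat for each region; the --enable-default-standards flag is optional and region values are placeholders):
aws securityhub enable-security-hub --enable-default-standards --region <region>
# Verify that Security Hub is enabled in the region
aws securityhub describe-hub --region <region>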
References:
1. https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-get-started.html
2. https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-enable.html#securityhub-enable-api
3. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/securityhub/enable-security-hub.html
Controls
Control IG 1 IG 2 IG 3
Version
5 Networking
This section contains recommendations for AWS networking configuration.
This section contains recommendations for configuring AWS Elastic Compute Cloud (EC2).
• Level 1
Description:
Elastic Compute Cloud (EC2) supports encryption at rest when using the Elastic Block
Store (EBS) service. While disabled by default, forcing encryption at EBS volume
creation is supported.
Rationale:
Encrypting data at rest reduces the likelihood of unintentional exposure and can nullify
the impact of disclosure if the encryption remains unbroken.
Impact:
Losing access to or removing the KMS key used by the EBS volumes will result in the
inability to access the volumes.
Audit:
From Console:
1. Login to the AWS Management Console and open the Amazon EC2 console
using https://console.aws.amazon.com/ec2/.
2. Under Account attributes, click EBS encryption.
3. Verify Always encrypt new EBS volumes displays Enabled.
4. Repeat for each region in use.
Remediation:
From Console:
1. Login to the AWS Management Console and open the Amazon EC2 console
using https://console.aws.amazon.com/ec2/.
2. Under Account attributes, click EBS encryption.
3. Click Manage.
4. Check the Enable box.
5. Click Update EBS encryption.
6. Repeat for each region in which EBS volume encryption is not enabled by
default.
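Note: An illustrative AWS CLI sketch for enabling and verifying default EBS encryption in a region (repeat per region; the region value is a placeholder):
# Turn on encryption by default for new EBS volumes in the current region
aws ec2 enable-ebs-encryption-by-default --region <region>
# Confirm the setting; the command should return "EbsEncryptionByDefault": true
aws ec2 get-ebs-encryption-by-default --region <region>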
References:
1. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSEncryption.html
2. https://aws.amazon.com/blogs/aws/new-opt-in-to-default-encryption-for-new-ebs-volumes/
Additional Information:
Default EBS volume encryption only applies to newly created EBS volumes; existing
EBS volumes are not converted automatically.
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Common Internet File System (CIFS) is a network file-sharing protocol that allows
systems to share files over a network. However, unrestricted CIFS access can expose
your data to unauthorized users, leading to potential security risks. It is important to
restrict CIFS access to only trusted networks and users to prevent unauthorized access
and data breaches.
Rationale:
Allowing unrestricted CIFS access can lead to significant security vulnerabilities, as it
may allow unauthorized users to access sensitive files and data. By restricting CIFS
access to known and trusted networks, you can minimize the risk of unauthorized
access and protect sensitive data from exposure to potential attackers. Implementing
proper network access controls and permissions is essential for maintaining the security
and integrity of your file-sharing systems.
Impact:
Restricting CIFS access may require additional configuration and management effort.
However, the benefits of enhanced security and reduced risk of unauthorized access to
sensitive data far outweigh the potential challenges.
Audit:
From Command Line:
1. Run the following command to list all security groups and identify those
associated with CIFS:
2. Check for any inbound rules that allow unrestricted access on port 445 using the
following command:
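Note: The commands themselves are not reproduced in this extract. An illustrative AWS CLI sketch (group IDs and regions are placeholders):
# List security groups that have an inbound rule on TCP port 445 (CIFS/SMB)
aws ec2 describe-security-groups --filters Name=ip-permission.from-port,Values=445 Name=ip-permission.protocol,Values=tcp --query "SecurityGroups[*].{ID:GroupId,Name:GroupName}"
# Inspect a group's inbound rules for 0.0.0.0/0 or ::/0 on port 445
aws ec2 describe-security-groups --group-ids <group-id> --query "SecurityGroups[*].IpPermissions"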
Remediation:
From Command Line:
1. Run the following command to remove or modify the unrestricted rule for CIFS
access:
3. Repeat the remediation for other security groups and regions as necessary.
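Note: An illustrative sketch of removing an unrestricted CIFS rule (values are placeholders); alternatively, the rule can be modified to reference a trusted CIDR range instead of 0.0.0.0/0:
aws ec2 revoke-security-group-ingress --group-id <group-id> --protocol tcp --port 445 --cidr 0.0.0.0/0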
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
The Network Access Control List (NACL) function provides stateless filtering of ingress
and egress network traffic to AWS resources. It is recommended that no NACL allows
unrestricted ingress access to remote server administration ports, such as SSH on port
22 and RDP on port 3389, using either the TCP (6), UDP (17), or ALL (-1) protocols.
Rationale:
Public access to remote server administration ports, such as 22 (when used for SSH,
not SFTP) and 3389, increases the attack surface of resources and unnecessarily
raises the risk of resource compromise.
Audit:
From Console:
Perform the following steps to determine if the account is configured as prescribed:
Note: A port value of ALL or a port range such as 0-3389 includes port 22, 3389, and
potentially other remote server administration ports.
Remediation:
From Console:
Perform the following steps to remediate a network ACL:
References:
1. https://docs.aws.amazon.com/vpc/latest/userguide/vpc-network-acls.html
2. https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Security.html#VPC_Security_Comparison
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Security groups provide stateful filtering of ingress and egress network traffic to AWS
resources. It is recommended that no security group allows unrestricted ingress access
to remote server administration ports, such as SSH on port 22 and RDP on port 3389,
using either the TCP (6), UDP (17), or ALL (-1) protocols.
Rationale:
Public access to remote server administration ports, such as 22 (when used for SSH,
not SFTP) and 3389, increases the attack surface of resources and unnecessarily
raises the risk of resource compromise.
Impact:
When updating an existing environment, ensure that administrators have access to
remote server administration ports through another mechanism before removing access
by deleting the 0.0.0.0/0 inbound rule.
Audit:
Perform the following to determine if the account is configured as prescribed:
Note: A port value of ALL or a port range such as 0-3389 includes port 22, 3389, and
potentially other remote server administration ports.
Remediation:
Perform the following to implement the prescribed state:
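Note: The audit and remediation steps are not reproduced in this extract. An illustrative AWS CLI sketch (values are placeholders):
# Find security groups with any 0.0.0.0/0 ingress rule, then review them for ports 22 and 3389 (or ALL)
aws ec2 describe-security-groups --filters Name=ip-permission.cidr,Values=0.0.0.0/0 --query "SecurityGroups[*].{ID:GroupId,Name:GroupName}"
# Remove an offending rule (example shown for SSH)
aws ec2 revoke-security-group-ingress --group-id <group-id> --protocol tcp --port 22 --cidr 0.0.0.0/0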
References:
1. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
Security groups provide stateful filtering of ingress and egress network traffic to AWS
resources. It is recommended that no security group allows unrestricted ingress access
to remote server administration ports, such as SSH on port 22 and RDP on port 3389.
Rationale:
Public access to remote server administration ports, such as 22 (when used for SSH,
not SFTP) and 3389, increases attack surface of resources and unnecessarily raises
the risk of resource compromise.
Impact:
When updating an existing environment, ensure that administrators have access to
remote server administration ports through another mechanism before removing access
by deleting the ::/0 inbound rule.
Audit:
Perform the following to determine if the account is configured as prescribed:
Note: A port value of ALL or a port range such as 0-3389 includes port 22, 3389, and
potentially other remote server administration ports.
Remediation:
Perform the following to implement the prescribed state:
References:
1. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#deleting-security-group-rule
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
A VPC comes with a default security group whose initial settings deny all inbound traffic,
allow all outbound traffic, and allow all traffic between instances assigned to the security
group. If a security group is not specified when an instance is launched, it is
automatically assigned to this default security group. Security groups provide stateful
filtering of ingress/egress network traffic to AWS resources. It is recommended that the
default security group restrict all traffic, both inbound and outbound.
The default VPC in every region should have its default security group updated to
comply with the following:
• No inbound rules.
• No outbound rules.
Any newly created VPCs will automatically contain a default security group that will
need remediation to comply with this recommendation.
Note: When implementing this recommendation, VPC flow logging is invaluable in
determining the least privilege port access required by systems to work properly, as it
can log all packet acceptances and rejections occurring under the current security
groups. This dramatically reduces the primary barrier to least privilege engineering by
discovering the minimum ports required by systems in the environment. Even if the VPC
flow logging recommendation in this benchmark is not adopted as a permanent security
measure, it should be used during any period of discovery and engineering for least
privileged security groups.
Rationale:
Configuring all VPC default security groups to restrict all traffic will encourage the
development of least privilege security groups and promote the mindful placement of
AWS resources into security groups, which will, in turn, reduce the exposure of those
resources.
Audit:
Perform the following to determine if the account is configured as prescribed:
Security Group State
Remediation:
Perform the following to implement the prescribed state:
Security Group Members
1. Identify AWS resources that exist within the default security group.
2. Create a set of least-privilege security groups for those resources.
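Note: An illustrative AWS CLI sketch for removing all rules from each default security group once its member resources have been migrated (values are placeholders):
# Locate the default security group of each VPC and capture its current rules
aws ec2 describe-security-groups --filters Name=group-name,Values=default --query "SecurityGroups[*].{ID:GroupId,VPC:VpcId,Ingress:IpPermissions,Egress:IpPermissionsEgress}"
# Remove the captured inbound and outbound rules from the default security group
aws ec2 revoke-security-group-ingress --group-id <group-id> --ip-permissions '<captured-IpPermissions-JSON>'
aws ec2 revoke-security-group-egress --group-id <group-id> --ip-permissions '<captured-IpPermissionsEgress-JSON>'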
Recommended
IAM groups allow you to edit the "name" field. After remediating default group rules for
all VPCs in all regions, edit this field to add text similar to "DO NOT USE. DO NOT ADD
RULES."
References:
1. CCE-79201-0
2. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html
3. https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-security-groups.html#default-security-group
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 2
Description:
Once a VPC peering connection is established, routing tables must be updated to
enable any connections between the peered VPCs. These routes can be as specific as
desired, even allowing for the peering of a VPC to only a single host on the other side of
the connection.
Rationale:
Being highly selective in peering routing tables is a very effective way to minimize the
impact of a breach, as resources outside of these routes are inaccessible to the peered
VPC.
Audit:
Review the routing tables of peered VPCs to determine whether they route all subnets
of each VPC and whether this is necessary to accomplish the intended purposes of
peering the VPCs.
From Command Line:
1. List all the route tables from a VPC and check if the "GatewayId" is pointing to a
<peering-connection-id> (e.g., pcx-1a2b3c4d) and if the
"DestinationCidrBlock" is as specific as desired:
Remediation:
Remove and add route table entries to ensure that the least number of subnets or hosts
required to accomplish the purpose of peering are routable.
From Command Line:
1. For each <route-table-id> that contains routes that are non-compliant with
your routing policy (granting more access than desired), delete the non-compliant
route:
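Note: An illustrative sketch (values are placeholders); after deleting the broad route, a narrower route can be added for only the required subnet or host:
aws ec2 delete-route --route-table-id <route-table-id> --destination-cidr-block <non-compliant-destination-cidr>
aws ec2 create-route --route-table-id <route-table-id> --destination-cidr-block <required-destination-cidr> --vpc-peering-connection-id <peering-connection-id>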
References:
1. https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/peering-configurations-partial-access.html
2. https://awscli.amazonaws.com/v2/documentation/api/latest/reference/ec2/create-vpc-peering-connection.html
Additional Information:
If an organization has an AWS Transit Gateway implemented in its VPC architecture, it
should look to apply the recommendation above for a "least access" routing architecture
at the AWS Transit Gateway level, in combination with what must be implemented at
the standard VPC route table. More specifically, to route traffic between two or more
VPCs via a Transit Gateway, VPCs must have an attachment to a Transit Gateway
route table as well as a route. Therefore, to avoid routing traffic between VPCs, an
attachment to the Transit Gateway route table should only be added where there is an
intention to route traffic between the VPCs. As Transit Gateways are capable of hosting
multiple route tables, it is possible to group VPCs by attaching them to a common route
table.
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
• Level 1
Description:
When enabling the Metadata Service on AWS EC2 instances, users have the option of
using either Instance Metadata Service Version 1 (IMDSv1; a request/response
method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method).
Rationale:
Instance metadata is data about your instance that you can use to configure or manage
the running instance. Instance metadata is divided into categories, such as host name,
events, and security groups.
When enabling the Metadata Service on AWS EC2 instances, users have the option of
using either Instance Metadata Service Version 1 (IMDSv1; a request/response
method) or Instance Metadata Service Version 2 (IMDSv2; a session-oriented method).
With IMDSv2, every request is now protected by session authentication. A session
begins and ends a series of requests that software running on an EC2 instance uses to
access the locally stored EC2 instance metadata and credentials.
Allowing Version 1 of the service may open EC2 instances to Server-Side Request
Forgery (SSRF) attacks, so Amazon recommends utilizing Version 2 for better instance
security.
Audit:
From Console:
1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at
https://console.aws.amazon.com/ec2/.
2. In the left navigation panel, under the INSTANCES section, choose Instances.
3. Select the EC2 instance that you want to examine.
4. Check the IMDSv2 status, and ensure that it is set to Required.
From Command Line:
1. Run the describe-instances command using appropriate filters to list the IDs
of all existing EC2 instances currently available in the selected region:
2. The command output should return a table with the requested instance IDs.
3. Run the describe-instances command using the instance ID returned in the
previous step and apply custom filtering to determine whether the selected
instance is using IMDSv2:
4. Ensure that for all EC2 instances, HttpTokens is set to required and State is
set to applied.
5. Repeat steps 3 and 4 to verify the other EC2 instances provisioned within the
current region.
6. Repeat steps 1–5 to perform the audit process for other AWS regions.
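Note: The audit commands are not reproduced in this extract. An illustrative sketch (instance IDs and regions are placeholders):
# Step 1: list instance IDs in the selected region
aws ec2 describe-instances --query "Reservations[*].Instances[*].InstanceId" --output table
# Step 3: check the metadata options of a specific instance
aws ec2 describe-instances --instance-ids <instance-id> --query "Reservations[*].Instances[*].MetadataOptions.{HttpTokens:HttpTokens,State:State}"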
Remediation:
From Console:
1. Sign in to the AWS Management Console and navigate to the EC2 dashboard at
https://console.aws.amazon.com/ec2/.
2. In the left navigation panel, under the INSTANCES section, choose Instances.
3. Select the EC2 instance that you want to examine.
4. Choose Actions > Instance Settings > Modify instance metadata
options.
5. Set Instance metadata service to Enable.
6. Set IMDSv2 to Required.
7. Repeat steps 1-6 to perform the remediation process for other EC2 instances in
all applicable AWS region(s).
From Command Line:
2. The command output should return a table with the requested instance IDs.
3. Run the modify-instance-metadata-options command with an instance ID
obtained from the previous step to update the Instance Metadata Version:
4. Repeat steps 1-3 to perform the remediation process for other EC2 instances in
the same AWS region.
5. Change the region by updating --region and repeat the process for other
regions.
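Note: An illustrative sketch of the modify-instance-metadata-options command referenced in step 3 (values are placeholders):
aws ec2 modify-instance-metadata-options --instance-id <instance-id> --http-tokens required --http-endpoint enabled --region <region>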
References:
1. https://aws.amazon.com/blogs/security/defense-in-depth-open-firewalls-reverse-proxies-ssrf-vulnerabilities-ec2-instance-metadata-service/
2. https://docs.aws.amazon.com/cli/latest/reference/ec2/describe-instances.html
CIS Controls:
Controls
Control IG 1 IG 2 IG 3
Version
1.11 Do not create access keys during initial setup for IAM
users with a console password (Manual)
1.13 Ensure there is only one active access key for any single
IAM user (Automated)
1.18 Ensure IAM instance roles are used for AWS resource
access from instances (Automated)
2 Storage
3 Logging
4 Monitoring
5 Networking
5.6 Ensure routing tables for VPC peering are "least access"
(Manual)