Module - 2 Storage Assignment

The document outlines a step-by-step assignment to create and manage an S3 bucket, including uploading, downloading, copying, and deleting objects. It details the necessary queries and configurations needed for each action, as well as permissions required for user policies and bucket policies. Additionally, it provides instructions for using the AWS S3 CLI to execute commands related to S3 operations.


Assignment 1

Create your first S3 bucket. Do the following:

1) Upload an object to your bucket.


2) Download an object
3) Copy your object to a folder
4) Delete your objects and bucket

1.Upload
Widgets
We'll use the following widgets:
- Dropdown: For selecting a bucket from your S3 storage.
- Table: For listing all the objects inside the bucket selected in the dropdown.
- Text Input: For entering the path (key prefix) under which the file will be uploaded.
- File picker: For selecting the file to upload.
- Button: For firing the upload query.
Queries
We'll create the following queries:
1. getBuckets
2. listObjects
3. uploadToS3
4. download
getBuckets
This query will fetch the list of all the buckets in your S3 account. Just create a new query, select the AWS S3 data source, and choose the List buckets operation. Name the query getBuckets and click Save.
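If you want to sanity-check what this query should return, the same list of bucket names can be fetched with the AWS CLI (assuming your credentials are already configured):

aws s3api list-buckets --query "Buckets[].Name" --output text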

Now, let's edit the properties of the dropdown widget:

- Label: Set the label to Bucket.
- Option values: Set the option values to {{queries.getBuckets.data.Buckets.map(bucket => bucket['Name'])}}. We map over the data returned by the query because it is an array of objects.
- Option labels: Use the same expression, {{queries.getBuckets.data.Buckets.map(bucket => bucket['Name'])}}, so that each option's label matches its value.

You can later add an event handler that runs the listObjects query whenever an option is selected from the dropdown.
listObjects
This query will list all the objects inside the bucket selected in the dropdown. Select the List objects in a bucket operation and enter {{components.dropdown1.value}} in the Bucket field - this dynamically uses the value of the option selected in the dropdown.

Edit the properties of the table widget:

- Table data: {{queries.listObjects.data['Contents']}}
- Add columns:
  o Key: Set the Column Name to Key and the Key to Key
  o Last Modified: Set the Column Name to Last Modified and the Key to LastModified
  o Size: Set the Column Name to Size and the Key to Size
- Add an action button: Set the button text to Copy signed URL, add a handler to this button for the On Click event with the action Copy to clipboard, and in the text field enter {{queries.download.data.url}} - this will fetch the download URL from the download query that we will create next.
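For reference, the data this query returns mirrors what the AWS CLI reports for the same bucket; a quick sketch with a placeholder bucket name:

aws s3api list-objects-v2 --bucket my-bucket --query "Contents[].{Key: Key, LastModified: LastModified, Size: Size}" --output table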

2.Download
Create a new query and select the Signed URL for download operation. In the Bucket field, enter {{components.dropdown1.value}} and in the Key field enter {{components.table1.selectedRow.Key}}.
Edit the properties of the table and add an event handler that runs the download query on the Row clicked event. This will generate a signed download URL every time a row in the table is clicked.
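Under the hood, this operation returns a pre-signed URL. If you want to reproduce it from the CLI for a quick test, a rough equivalent (with placeholder bucket and key) is:

aws s3 presign s3://my-bucket/path/to/object.pdf --expires-in 3600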

uploadToS3
Create a new query and select the Upload object operation. Enter the following values in their respective fields:
- Bucket: {{components.dropdown1.value}}
- Key: {{components.textinput1.value + '/' + components.filepicker1.file[0].name}}
- Content type: {{components.filepicker1.file[0].type}}
- Upload data: {{components.filepicker1.file[0].base64Data}}
- Encoding: base64
Configure the File Picker:
Click on the widget handle to edit the file picker properties:
- Change the Accept file types to {{"application/pdf"}} for the picker to accept only PDF files, or {{"image/*"}} for the picker to accept only image files. Here we set the accepted file type property to {{"application/pdf"}} so that only PDF files can be selected.
- Change the Max file count to {{1}} as we are only going to upload one file at a time.
- Select a PDF file so that it is held in the file picker.

As a final step, go to the Advanced tab of the uploadToS3 query and add an event that runs the listObjects query, so that the table is refreshed whenever a file is uploaded.
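For comparison, the same upload could be done from the CLI; the key below is just a placeholder built the same way as in the query above (text input value + '/' + file name):

aws s3 cp ./report.pdf s3://my-bucket/invoices/report.pdf --content-type application/pdf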
3.Copy your object to a folder
Create a User Policy
To be able to perform S3 bucket operations, we need to give copy_user some permissions.

To do so, go to the IAM service in the Destination AWS account, select Users, and then select the user that will be used for the copy/move operations. On the user page, select the Attach User Policy button.
User Policy
In this user policy we give the user the following permissions:
- ListAllMyBuckets - the ability to list all available S3 buckets
- GetObject - the ability to get an S3 object
- ListBucket - the ability to list all the objects in the given bucket (in this case the to-destination bucket)
- GetBucketLocation - the ability to get the location of the bucket, which is needed before any operations can be performed on a bucket
Finally, give the user policy the name bucket_copy_move_policy. A sketch of such a policy is shown below.
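The policy JSON itself is not reproduced in this document, so the following is only a minimal sketch matching the four permissions listed above; for simplicity it uses "*" as the Resource, whereas the real policy would normally scope the bucket-level actions to specific bucket ARNs:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:GetBucketLocation",
        "s3:ListBucket",
        "s3:GetObject"
      ],
      "Resource": "*"
    }
  ]
}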
Create a Bucket Policy
At this point the user has the right to perform some S3 operations but does not yet have permission to access the objects within the from-source bucket. The reason is that the from-source bucket does not belong to the Destination AWS account but to the Source AWS Account. So we need to allow the user to get objects in the from-source bucket by granting permission via the from-source bucket policy, which belongs to the Source AWS Account.
To do this we make use of a Bucket Policy. Look at how to add a bucket policy.
Amazon has some good documentation explaining How Amazon S3 Authorizes a Request for a Bucket Operation and how permission validation works during a typical S3 object request, but for now let's get practical. If you just want to copy/move objects, then paste the bucket policy below into your bucket's policy.
Bucket Policy
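The policy text is not reproduced in this copy of the document, so here is a minimal sketch consistent with the explanation that follows; the account ID 123456789012 is a placeholder for the Destination account that owns copy_user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/copy_user"
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::from-source",
        "arn:aws:s3:::from-source/*"
      ]
    }
  ]
}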
To do this you need to log into the Source AWS account and then go to the S3 service. Inside the S3 console, select the from-source bucket, click on the Properties button, and then select the Permissions section.
Click on the Add bucket policy button and paste the bucket policy given above. After pasting the bucket policy, click on the Save button.

So here is a quick explanation of the Bucket Policy:


- Effect - Allows the following actions for the given principal.
- Principal - This refers to the user, account, service, or other entity that is allowed access, in this case the IAM user copy_user. The value for Principal is the Amazon Resource Name (ARN) given to copy_user. If you want to allow everybody, you can replace this line with:
"Principal": "*"
This information can be found in the same place we went to add the user policy: in the IAM service under Users, select the user copy_user, then at the top of the page, under the Summary heading, next to the User ARN: label.
- Action - This refers to the S3 actions that are allowed on this bucket. I gave it "s3:*", which means all actions. If you just want to give it read access (get objects) you can replace this line with
"s3:GetObject"
Also remember to add
"s3:DeleteObject"
if you want to do a move.
- Resource - Here I specified the from-source bucket name AND all its content.
At this point, all the setup work so far would have been required for any other tool or solution as well, since this is the fundamental way AWS grants permissions to resources in S3.
Verify AWS S3 CLI availability on EC2 instance
This is where we deviate from other solutions and move to the world of the AWS S3 CLI.
Now ssh to your EC2 instance (Destination AWS account), e.g.
ssh -l ec2-user -i /path_to/AWS_KEYS.pem ec2-00-00-00-000.compute-1.amazonaws.com
Look at Connecting to Your Linux Instance Using SSH for more details on how to ssh to an
EC2 instance.
Now on the command line enter:
aws s3
You should get back something like the following:
usage: aws [options] command subcommand [parameters]
aws: error: too few arguments
This confirms that the AWS S3 CLI tool is installed and available. As mentioned before, you do not have to install the tool since it already comes with the Amazon Linux EC2 instance.
Configure S3 CLI credentials
Now, to be able to use the S3 CLI tool, we need to configure it to use the credentials of the IAM user of the Destination AWS account. The AWS CLI stores the credentials it will use in the file ~/.aws/credentials. If this is already set up for copy_user, skip to the next step.
It is important to note that this might already be configured, but for a different user account. You have to set the credentials to those of the user for whom you set up the user policy above. Execute the following command and enter the user credentials, first the access key and then the secret key. The region must be the region of the user account; if you do not know it, just hit enter to accept the default. You can also accept the default for the last option, output format, and hit enter:
$ aws configure
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
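With the example values above, aws configure writes the credentials to ~/.aws/credentials and the region and output format to ~/.aws/config, roughly like this:

~/.aws/credentials
[default]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY

~/.aws/config
[default]
region = us-east-1
output = text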
Finally, we are ready to execute some AWS S3 CLI commands. Let's see if we can show a total count of the objects in our from-source bucket. Execute the following command:
aws s3 ls s3://from-source | wc -l
or, if the count is not too high or you do not mind getting a lot of file names scrolling over the screen, you can do
aws s3 ls s3://from-source
Congrats! You have access to the from-source bucket. You can also try to copy, say, one file down to a local folder on your EC2 instance, e.g.:
aws s3 cp s3://from-source/filename.txt .
Detailed documentation regarding the S3 CLI can be found in the AWS S3 CLI Documentation.
Finally copy and move some S3 objects!
Now, to allow our user copy_user to write to our to-destination S3 bucket, we need to give it additional user and bucket permissions that allow writing to the to-destination bucket. Again, you can refer to How Amazon S3 Authorizes a Request for a Bucket Operation for a good explanation.
You have seen by now that when dealing with S3 buckets we have to give the user permission to perform certain actions and at the same time give the user access to the S3 bucket. So again we will have to modify the user policy, but we do not have to create a new bucket policy for the to-destination S3 bucket. The reason is that the to-destination bucket is within the same AWS account as our IAM user, so we do not have to grant explicit permissions on the bucket itself. However, it would be good to check whether there are any bucket policies on our destination bucket that might conflict with our user policy. If there are, just remove them temporarily until you have completed the copy/move of objects. Just make sure that, if it is a production environment, you make these changes during a scheduled maintenance window.
Updated User Policy
The only changes to the user policy are the addition of the "s3:PutObject" action and another resource entry for the to-destination bucket, roughly as sketched below.
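As a sketch, the added permission would amount to a statement like the following in the user policy:

{
  "Effect": "Allow",
  "Action": "s3:PutObject",
  "Resource": "arn:aws:s3:::to-destination/*"
}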
Now let's see if you can list the content of the to-destination bucket by executing the following command:
aws s3 ls s3://to-destination | wc -l
or, if the count is not too high or you do not mind getting a lot of file names scrolling over the screen, you can do:
aws s3 ls s3://to-destination
Finally we get to the point where you want to copy or move directly from one bucket to the
other:
aws s3 cp s3://from-source/ s3://to-destination/ --recursive
We use the --recursive flag to indicate that ALL files must be copied recursively.
The following are more useful options that might interest you. Here we copy everything while excluding .xml files and explicitly including .pdf files:
aws s3 cp s3://from-source/ s3://to-destination/ --recursive --exclude "*.xml" --include "*.pdf"
Here we copy all the objects in the from-source bucket down to the current local folder on this machine:
aws s3 cp s3://from-source . --recursive
Here we copy everything except the folder named another:
aws s3 cp s3://from-source/ s3://to-destination/ --recursive --exclude "another/*"
If you wanted to include both .jpg files as well as .txt files and nothing else, you can run:
aws s3 cp s3://from-source/ s3://to-destination/ --recursive --exclude "*" --include "*.jpg" --include "*.txt"
To move an object you can use:
aws s3 mv s3://from-source/ s3://to-destination/ --recursive
or
aws s3 mv s3://from-source/file1.txt s3://to-destination/file2.txt

4.Delete your objects and bucket


Steps
1.Select View buckets from the dashboard, or select STORAGE (S3) > Buckets.
The Buckets page appears and shows all existing S3 buckets.
2.Use the Actions menu, or the details page for a specific bucket, and select Delete objects in bucket.
3.When the confirmation dialog box appears, review the details, enter Yes, and select OK.
4.Wait for the delete operation to begin.
After a few minutes:
o A yellow status banner appears on the bucket details page. The progress bar
represents what percentage of objects have been deleted.
o (read-only) appears after the bucket's name on the bucket details page.
o (Deleting objects: read-only) appears next to the bucket's name on the
Buckets page.
5.As required while the operation is running, select Stop deleting objects to halt the process.
Then, optionally, select Delete objects in bucket to resume the process.
When you select Stop deleting objects, the bucket is returned to write mode; however, you
can't access or restore any objects that have been deleted.
6.Wait for the operation to complete.
When the bucket is empty, the status banner is updated, but the bucket remains read only.
7.Do one of the following:
- Exit the page to keep the bucket in read-only mode. For example, you might keep an empty bucket in read-only mode to reserve the bucket name for future use.
- Delete the bucket. You can select Delete bucket to delete a single bucket, or return to the Buckets page and select Actions > Delete buckets to remove more than one bucket.
- Return the bucket to write mode and optionally reuse it for new objects. You can select Stop deleting objects for a single bucket, or return to the Buckets page and select Actions > Stop deleting objects for more than one bucket.
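The steps above describe the storage console workflow. If you are working against AWS S3 directly, emptying and then removing a bucket can also be done from the CLI (my-bucket is a placeholder):

aws s3 rm s3://my-bucket --recursive
aws s3 rb s3://my-bucket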
