S3Express
All rights reserved. No parts of this work may be reproduced in any form or by any means - graphic, electronic, or
mechanical, including photocopying, recording, taping, or information storage and retrieval systems - without the written
permission of the publisher.
Products that are referred to in this document may be either trademarks and/or registered trademarks of the respective
owners. The publisher and the author make no claim to these trademarks.
"Amazon Web Services", the "Powered by Amazon Web Services" logo, "AWS", "Amazon S3" and "Amazon Simple
Storage Service" are trademarks of Amazon.com, Inc. or its affiliates in the United States and/or other countries.
While every precaution has been taken in the preparation of this document, the publisher and the author assume no
responsibility for errors or omissions, or for damages resulting from the use of information contained in this document
or from the use of programs and source code that may accompany it. In no event shall the publisher and the author be
liable for any loss of profit or any other commercial damage caused or alleged to have been caused directly or indirectly
by this document.
S3Express Help
Table of Contents
Introduction
Overview of All Commands
mkbkt (create new bucket)
rmbkt (remove bucket)
getbktinfo (get bucket information)
ls (list objects)
cd (change working location)
getmeta (show object's metadata)
setmeta (set object's metadata)
getacl (show object's ACL)
setacl (set object's ACL)
put (upload files)
mkfol (create folder)
lsupl (list multipart uploads)
rmupl (remove multipart uploads)
del (delete objects)
copy (copy object)
restore (restore objects)
Authorization Commands
setopt / showopt
license (enter license)
exec (execute commands from file)
Other Commands
Filter Condition Syntax (-cond:"FILTER")
Command Shortcuts
Command Variables
Multipart Uploads
Scripting via Command Line
Exit Codes
Redirect Command Output to a File
Unicode Support (åäö etc.)
Examples
FAQ and Knowledge Base
How to Buy S3Express
How to Enter the License
License Agreement
Introduction
S3Express is a Windows command line utility for Amazon Simple Storage Service (Amazon S3™).
With S3Express you can access, upload and manage your files on Amazon S3™ using the Windows
command line.
S3Express is ideal for scripts, automated incremental backups / uploads and for performing custom queries
on Amazon S3™ objects.
S3Express is a compact program with a very small footprint (the entire program is less than 5MB). It is
self-contained in a single executable, 'S3Express.exe', and does not require any additional libraries or
software to be installed. Simply download and install S3Express and you are ready to go.
Connections to Amazon S3™ are made using secure http (https), which is an encrypted version of HTTP, to
protect your files while they are in transit to and from Amazon S3.
S3Express works on all versions of Windows including all Windows Servers.
S3Express is sold, actively supported and maintained by TGRMN Software.
Main Features
General
All S3Express operations are multithreaded to achieve maximum speed.
All S3Express operations are automatically retried on connection error (after X seconds and for X
times, customizable) to be able to work on less reliable connections.
All S3Express operations can be interrupted at any time by pressing the 'ESC' key and restarted later.
All S3Express connections to Amazon S3™ are made using secure http (https) to protect your files
while they are in transit to and from Amazon S3™ servers.
Multiple command line variables are supported.
Scripting via the command line is supported.
Unicode compatible: S3Express supports file and object names in any alphabet.
List Objects
List objects in one or more S3 buckets and optionally show metadata and ACL for each object.
List objects only in a specified subfolder or recursively list all objects in all subfolders.
Include / exclude objects from the listing based on name, size, metadata, ACL, storage-class,
encryption status, etc.
Filter listing using regular expressions or basic wildcards.
Show listing summary only and group S3 objects by extension, date, subfolder and more.
Optionally include all versions of an object in the listing.
For example, list all objects whose 'cache-control' header is not set, or is not equal to a specified value.
For example, list only public objects, only private objects, or only objects with a specified ACL.
For example, list all objects whose size is larger than a specified value (see the example below).
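For instance, several of these listing features can be combined in a single command (a sketch using the ls flags documented later in this document; the bucket name is hypothetical):
ls mybucket -s -showmeta:"cache-control" -cond:"size = 0" (recursively list all empty objects in mybucket and show their cache-control header, if set)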
Upload Files
Upload multiple files and whole directories to Amazon S3.
Uploads are fully restartable in case of failure.
Uploads are automatically retried in case of an error (after X seconds and for X times, customizable).
Optimized parallel file transfers (multiple threads) to speed-up uploads.
Upload files using multipart uploads (with correct MD5 value) and multiple threads, so that large
uploads can be restarted at any time from where they were left, if interrupted.
Server-side and/or client-side file encryption supported.
Keep the existing ACLs and/or metadata when overwriting existing S3 files.
Throttle the maximum bandwidth used, in kilobytes per second.
Select files to upload based on name, extension, size, subfolder, time, etc.
Copy objects instead of re-uploading, if a matching object is found already on S3, so that renaming an
object does not require re-uploading.
Move local files to Amazon S3 after successful upload, based on flexible criteria, e.g. file age, name,
size etc.
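A typical upload combining these options might look like the following (a sketch using the put flags documented later in this document):
put c:\folder\ mybucket -s -t:4 -mul:50 (upload c:\folder\ and all its subfolders to mybucket using 4 parallel threads and 50 MB multipart upload parts)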
Incremental Backup to S3
Upload only new and changed files. Using this type of upload you can perform fast, incremental
backups: only those files that are new or have changed, compared to the files that are already on S3,
are uploaded. If a file is renamed locally, then the corresponding S3 file is copied, not re-uploaded, to
save time and bandwidth. Optionally, if a file is deleted locally, the corresponding S3 file can be
removed or archived.
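A minimal incremental backup sketch, assuming the -onlydiff and -purge flags used in the exec example later in this document (-onlydiff uploads only new and changed files; -purge additionally removes S3 files that were deleted locally):
put c:\data\ mybucket/backup/ -s -onlydiff (incremental upload of new and changed files only)
put c:\data\ mybucket/backup/ -s -onlydiff -purge (same, but also remove S3 files that no longer exist locally)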
Manage Metadata
Show all or just specific object metadata.
Set, reset, replace one or multiple objects' metadata.
Preview all operations before proceeding.
For example, set multiple objects' 'cache-control' header to a certain value.
For example, apply server-side encryption to existing S3 objects.
Manage ACLs
Show all or specific object ACLs.
Set, reset, replace one or multiple objects' ACLs.
Preview all operations before proceeding.
For example, set multiple objects to public access.
For example, set objects whose name starts with 'R' to private access.
For example, make sure all objects in a folder or bucket are set to private access.
Delete S3 Files
Delete one or multiple files from Amazon S3.
Filter files to delete based on name, extension, size, ACL, metadata, time.
Copy S3 Files
Copy Amazon S3 objects.
Keep metadata and ACLs of copied objects.
S3Express Commands
Notes:
- For all commands: if objects or path names contain spaces, they must be surrounded with double
quotation marks ("), e.g. "c:\folder\file name with spaces.txt"
- All commands support command variables
Command Description
Buckets
mkbkt Create (make) a new bucket.
rmbkt Remove a bucket. The bucket must be completely empty without objects or the
command will fail.
getbktinfo Show (get) bucket information.
List Objects
ls List objects (i.e. files and folders) in a bucket. Optionally show object's metadata
and ACLs.
cd Change the current S3 working location.
Metadata
getmeta Show (get) the metadata associated with an object. Note: use the list command
'ls' to show metadata associated with multiple objects.
setmeta Set the metadata associated with one or more objects.
ACL
getacl Show (get) the access control list (ACL) permissions of an object. Note: use the
list command ls to show the ACL permissions of multiple objects.
setacl Set the access control list (ACL) permissions of one or more objects.
Put (Upload)
put Upload files to an S3 bucket.
mkfol Create a folder at the current working location.
lsupl List in-progress multipart uploads.
rmupl Remove in-progress multipart uploads.
Delete
del Delete one or more objects from a bucket.
Copy
copy Create a copy of an object that is already stored on Amazon S3.
Restore
restore Restore objects from Glacier to S3.
Authorization
saveauth Save Access Key ID and Secret Access Key in S3Express.
loadauth Load a previously saved Access Key ID and Secret Access Key in S3Express for
use.
showauth Show Access Key ID and Secret Access Key as stored in S3Express.
rmauth Remove Access Key ID and Secret Access Key from S3Express.
Options
setopt Set S3Express options.
showopt Show S3Express options.
License
license Enter a license in S3Express. Entering a license unlocks the S3Express trial.
Exec
exec Load and execute a list of commands from a text file.
Shortcuts
c1, c2, ..., c9 Execute memorized command (shortcut).
Other
checkupdates Check for program updates.
md5 Calculate and show MD5 value of a file.
mimetype Show default mime type used by S3Express for a specific file extension.
OnErrorSkip / ResetErrorStatus / ShowErrorStatus Error handling commands. Useful when processing
multiple commands with exec or via the command line.
pause Pause for the specified number of seconds when processing multiple commands.
pwd Show current local working directory.
Help
help or h Show inline help.
htmlhelp Show help in HTML format.
pdfhelp Show help in PDF format.
Exit
q, quit or exit Exit S3Express.
mkbkt BUCKET_NAME
Create (make) a new bucket.
Restrictions:
- Spaces in bucket names are not allowed.
- Bucket names with upper case letters are not supported.
rmbkt BUCKET_NAME
Remove (delete) an existing bucket.
Notes:
The bucket to be removed must be completely empty or the command will fail. Remove all objects from a
bucket with the del command before removing a bucket.
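For illustration, a complete bucket lifecycle using these commands (the bucket name is hypothetical):
mkbkt mytestbucket (create the bucket)
put c:\folder\file.txt mytestbucket (upload a file; see the put command)
del mytestbucket/* -s (empty the bucket; see the del command)
rmbkt mytestbucket (remove the now-empty bucket)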
getbktinfo BUCKET_NAME
Show (get) bucket information.
Note:
It is possible to combine two or more parameters to show multiple items at once, e.g.:
getbktinfo mybucketname -acl -policy -versioning
ls (list objects)
[BUCKET_NAME]/[FOLDER]/[OBJECT]
Name of the bucket, folder or object(s) to list. If not specified, objects in the current location are listed. Change the current location with the command cd. To specify the parent folder, use '..'. Wildcard characters can be used (i.e. * and ?), like in the Windows dir command. If object names contain spaces, they must be surrounded by quotation marks (") on the command line.
Examples:
ls (list all objects in the current path; if the current path is the root, all buckets are listed)
ls mybucket (list all objects in mybucket)
ls mybucket/myfolder (list all objects in mybucket/myfolder)
ls ../myfolder (list all objects in ../myfolder)
ls mybucket/*.txt (list all objects with extension txt in mybucket, in the root folder)
ls mybucket/myfolder/*.txt (list all objects with extension txt in mybucket/myfolder/)
ls "mybucket/my folder/*.txt" (list all objects with extension txt in mybucket/my folder/, using quotation marks)

-s
Recursive listing, i.e. include all subfolders.
Examples:
ls -s (list all objects in the current path and subfolders; if the current path is the root, all buckets and every object in them are listed)
ls mybucket -s (list all objects in mybucket and in all subfolders)
ls mybucket/myfolder/*.txt -s (list all objects with extension txt in mybucket/myfolder/ and subfolders)

-d
Include subfolders (i.e. directories) in the listing.
Example:
ls mybucket -s -d (list all objects and directories in mybucket and in all subfolders)

-od
Only include subfolders (i.e. directories) in the listing, do not list other objects.
Examples:
ls mybucket -od (list all folders that are in mybucket)
ls mybucket -s -od (list all folders that are in mybucket and in all subfolders)

-md5
Include the object's MD5 value in the listing.
Example:
ls mybucket -md5

-r
Regular expression. This flag specifies that [BUCKET_NAME]/[FOLDER]/[OBJECT] must be treated as a regular expression.
Examples:
ls "mybucket/my folder/.*\.txt|.*\.vsn" -r (list all objects with extension txt or vsn in mybucket/my folder/)
ls mybucket/^r.* -r (list all objects starting with 'r' in mybucket)

-bytes
Show the object's size in bytes, instead of KB or MB.
Example:
ls mybucket -s -bytes

-ext
Extended listing. Metadata and ACLs are shown beneath the other object information.
Example:
ls mybucket -ext

-sep:SEP
Use SEP as the field separator. If not specified, the default separator is a blank space.
Examples:
ls mybucket -sep:, (use comma as the field separator)
ls mybucket -sep:"* *" (use '* *' as the field separator)

-showmeta:"META"
Include the specified object metadata in the listing output. Wildcard characters can be used (i.e. * and ?). Multiple metadata headers should be separated by |. If this flag is not specified, no metadata is shown by default. Note that showing metadata in the listing output is much slower, as each object must be queried separately.
Examples:
ls mybucket -showmeta:* (include ALL metadata headers in the output)
ls mybucket -showmeta:"cache-control" (include the cache-control header in the output for objects that have it)
ls mybucket -showmeta:"cache-control|x-amz-server-side-encryption" (include the cache-control and x-amz-server-side-encryption metadata headers in the output for objects that have them)
ls mybucket -showmeta:"x-amz-meta-*" (include all metadata headers that start with x-amz-meta- in the output for objects that have them)
ls mybucket -showmeta:* -ext (include ALL metadata headers in the output in extended format, with metadata shown beneath the other object information)

-showacl:"ACL"
Include the specified ACL permissions in the listing output. Wildcard characters can be used (i.e. * and ?). Multiple ACLs should be separated by |. If this flag is not specified, no ACL is shown by default. Note that showing ACLs in the listing output is much slower, as each object must be queried separately.
Examples:
ls mybucket -showacl:* (include all ACL object permissions in the output)
ls mybucket -showacl:allusers (include AllUsers ACL object permissions in the output)
ls mybucket -showacl:user (include user ACL object permissions in the output)
ls mybucket -showacl:user|allusers (include user and allusers ACL object permissions in the output)

-maxkeys:X
Request only X objects per HTTP request.
Example:
ls mybucket -maxkeys:10

-cond:"FILTER"
Filter condition. Only include objects in the output matching the specified condition. More info on filter condition syntax and variables below.
Examples:
ls mybucket -s -cond:"extract_value(cache-control,'max-age') > 0" -showmeta:"cache-control" (list all objects (recursive) with a cache-control max-age value > 0 in the metadata and include the cache-control header in the output)
ls mybucket -s -cond:"size = 0" (list all objects (recursive) of size equal to zero)
ls mybucket -s -cond:"name starts_with 'a'" (list all objects (recursive) with name starting with a)
ls mybucket -s -cond:"name starts_with 'a' and size > 0" (list all objects (recursive) with name starting with a and size > 0)

-sum
Show the summary only, e.g. total number of objects and total size; do not list each object separately.
Example:
ls mybucket -s -sum (show a summary of all objects in mybucket and in all subfolders)

-grp:GROUP
Group objects by GROUP in the output. GROUP can be one of the following values:
ym (i.e. -grp:ym): group objects by year and month.
ymd (i.e. -grp:ymd): group objects by year, month and day.
subf (i.e. -grp:subf): group objects by subfolder.
ext (i.e. -grp:ext): group objects by extension.
Examples:
ls mybucket -s -grp:subf (list all objects in mybucket and subfolders and group the output by subfolder)
ls mybucket -s -grp:ext (list all objects in mybucket and subfolders and group the output by object extension)
ls mybucket -s -grp:ymd (list all objects in mybucket and subfolders and group the output by year, month and day)
ls mybucket -s -grp:ymd -sum (show a summary only of all objects in mybucket and subfolders, grouped by year, month and day)

Notes:
Use quotation marks (") if folder or object names contain blank spaces, e.g. ls "mybucket/my folder/name with a space.txt"
cd [BUCKET_NAME]/[FOLDER]/
Change the current S3 working location.
[BUCKET_NAME]/[FOLDER]/
Name of the bucket and folder (optional) that will be set as the new working location.
Examples:
cd mybucket (working location set to bucket mybucket)
cd .. (working location set to parent folder)
getmeta [BUCKET_NAME]/[FOLDER]/OBJECT
Show (get) the metadata associated with an object.
[BUCKET_NAME]/[FOLDER]/OBJECT
Name of the bucket, folder and object to show metadata for.
Example:
getmeta file.txt (show metadata of 'file.txt' at the current S3 working location; see the cd command for setting the working location)
Notes:
To show the metadata associated with multiple objects, use the ls command.
setmeta [BUCKET_NAME]/[FOLDER]/OBJECT -meta:"HEADER:VALUE"
Set the metadata associated with one or more objects. The metadata header(s) to set are specified with the -meta flag, e.g. -meta:"cache-control:max-age=60" (see the examples below).

-e:+/-
-e:+ sets the object S3 server-side encryption header 'x-amz-server-side-encryption: AES256'.
-e:- removes the object S3 server-side encryption header 'x-amz-server-side-encryption: AES256'.
Example:
setmeta mybucket/subfolder/file -e:- (remove header 'x-amz-server-side-encryption=AES256' from mybucket/subfolder/file, which will decrypt the file)

-rr:+/-
-rr:+ sets the object S3 storage class to Reduced Redundancy: 'x-amz-storage-class: REDUCED_REDUNDANCY'.
-rr:- removes the object S3 storage class Reduced Redundancy: 'x-amz-storage-class: REDUCED_REDUNDANCY'.
Examples:
setmeta mybucket/subfolder/file -rr:+ (set header 'x-amz-storage-class=REDUCED_REDUNDANCY' on mybucket/subfolder/file)
setmeta mybucket/subfolder/file -rr:- (remove header 'x-amz-storage-class=REDUCED_REDUNDANCY' from mybucket/subfolder/file)

-ia:+/-
-ia:+ sets the object S3 storage class to Infrequent Access: 'x-amz-storage-class: STANDARD_IA'.
-ia:- removes the object S3 storage class Infrequent Access: 'x-amz-storage-class: STANDARD_IA'.
Examples:
setmeta mybucket/subfolder/file -ia:+ (set header 'x-amz-storage-class=STANDARD_IA' on mybucket/subfolder/file)
setmeta mybucket/subfolder/file -ia:- (remove header 'x-amz-storage-class=STANDARD_IA' from mybucket/subfolder/file)

-sim
Simulation. Only preview how the metadata would be set; do not actually set the metadata headers yet.
Example:
setmeta mybucket/* -meta:"cache-control:max-age=60" -sim (list which files the header cache-control:max-age=60 would be applied to)

-cond:"FILTER"
Filter condition. Only apply the metadata to objects matching the specified condition. More info on filter condition syntax and variables below.
Example:
setmeta mybucket/* -meta:"cache-control:max-age=60" -cond:"size_mb > 5" (set cache-control:max-age=60 on all files in mybucket that are larger than 5 MB)

-include:INCL
Only apply the metadata to objects matching the specified mask (wildcards). Separate multiple masks with "|".
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -include:"*.exe|*.rpt"

-exclude:EXCL
Do not apply the metadata to objects matching the specified mask (wildcards). Separate multiple masks with "|".
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -exclude:"*.exe|*.rpt"

-rinclude:INCL
Only apply the metadata to objects matching the specified mask (regular expression).
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -rinclude:"IMGP[0-9]{4}.jpg"

-rexclude:EXCL
Do not apply the metadata to objects matching the specified mask (regular expression).
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -rexclude:"IMGP[0-9]{4}.jpg"

-inclenc / -exclenc
Apply the metadata only to server-side encrypted files / do not apply the metadata to server-side encrypted files.
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -inclenc

-inclrr / -exclrr
Apply the metadata only to reduced redundancy files / do not apply the metadata to reduced redundancy files.
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -inclrr

-inclia / -exclia
Apply the metadata only to infrequent access files / do not apply the metadata to infrequent access files.
Example:
setmeta mybucket/subfolder/file -meta:"x-amz-meta-test:yes" -inclia
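For instance, to preview and then apply a cache-control header to everything except HTML files (a sketch combining the flags above; the bucket name and max-age value are hypothetical):
setmeta mybucket/* -meta:"cache-control:max-age=3600" -exclude:"*.html" -sim (preview only)
setmeta mybucket/* -meta:"cache-control:max-age=3600" -exclude:"*.html" (apply the header)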
getacl [BUCKET_NAME]/[FOLDER]/OBJECT
Show (get) the access control list (ACL) permissions of an object.
[BUCKET_NAME]/[FOLDER]/OBJECT
Name of the bucket, folder and object to show the ACL for.
Example:
getacl file.txt (show ACL of 'file.txt' at the current S3 working location; see the cd command for setting the working location)

-version:ID
Show the ACL associated with a specific version of the object.
Example:
getacl object.txt -version:23444411 (show ACL of object.txt, object version ID 23444411)

Notes:
To show the ACL of multiple objects, use the ls command.
setacl [BUCKET_NAME]/[FOLDER]/OBJECT
Set the access control list (ACL) permissions of one or more objects.

[BUCKET_NAME]/[FOLDER]/OBJECT
Name / path of the object(s) to set the ACL for. Wildcard characters are supported by default (* and ?) to match multiple objects. A regular expression can be used too; in that case use the flag -r on the command line, see below.
Examples:
setacl mybucket/file -cacl:private (set canned ACL 'private' to mybucket/file)
setacl mybucket/* -cacl:public-read (set canned ACL 'public-read' to all files in mybucket)
setacl mybucket/*.txt -s -cacl:public-read-write (set canned ACL 'public-read-write' to all txt files in mybucket, including in subfolders of mybucket)

-s
Recursive, i.e. include all subfolders when processing multiple objects with wildcard characters or a regular expression.
Example:
setacl mybucket/*.txt -s -cacl:public-read-write (set canned ACL 'public-read-write' to all txt files in mybucket, including in subfolders of mybucket)

-r
Regular expression. This flag specifies that [BUCKET_NAME]/[FOLDER]/[FILE] is a regular expression.
Example:
cd mybucket (set working location to mybucket)
followed by
setacl ^(a.*)|(b.*)|(c.*) -r -s -cacl:public-read (set canned ACL 'public-read' to all files starting with a, b or c in mybucket, including files in subfolders of mybucket)

-cacl:CANNED_ACL
Set a canned ACL. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions.
Example:
setacl mybucket/*.jpg -s -cacl:private (set canned ACL 'private' to all jpg files in mybucket, including in subfolders of mybucket)

-include:INCL
Only apply the permissions to objects matching the specified mask (wildcards). Separate multiple masks with "|".
Example:
setacl mybucket -s -cacl:private -include:*.jpg|*.gif (set canned ACL 'private' to all files in mybucket and subfolders that have extension .jpg or .gif)

-exclude:EXCL
Do not apply the permissions to objects matching the specified mask (wildcards). Separate multiple masks with "|".
Example:
setacl mybucket -s -cacl:private -exclude:*.jpg|*.gif|*.png (set canned ACL 'private' to all files in mybucket and subfolders, excluding files that have extension .jpg, .gif or .png)

-rinclude:INCL
Only apply the permissions to objects matching the specified mask (regular expression).
Example:
setacl mybucket -s -cacl:private -rinclude:a(x|y|z)b (set canned ACL 'private' to all files in mybucket and subfolders whose name matches axb, ayb or azb)

-rexclude:EXCL
Do not apply the permissions to objects matching the specified mask (regular expression).
Example:
setacl mybucket -s -cacl:private -rexclude:a(x|y|z)b (set canned ACL 'private' to all files in mybucket and subfolders, excluding files whose name matches axb, ayb or azb)

-inclenc / -exclenc
Apply the permissions only to server-side encrypted files / do not apply the permissions to server-side encrypted files.
Example:
setacl mybucket -s -cacl:private -inclenc (set canned ACL 'private' to all files in mybucket and subfolders that are server-side encrypted)

-inclrr / -exclrr
Apply the permissions only to reduced redundancy files / do not apply the permissions to reduced redundancy files.
Example:
setacl mybucket -s -cacl:private -inclrr (set canned ACL 'private' to all files in mybucket and subfolders that have storage class 'reduced redundancy')

-inclia / -exclia
Apply the permissions only to infrequent access files / do not apply the permissions to infrequent access files.
Example:
setacl mybucket -s -cacl:private -inclia (set canned ACL 'private' to all files in mybucket and subfolders that have storage class 'infrequent access')

-inclgl / -exclgl
Apply the permissions only to Glacier files / do not apply the permissions to Glacier files.
Example:
setacl mybucket -s -cacl:private -inclgl (set canned ACL 'private' to all files in mybucket and subfolders that have storage class 'Glacier')

-inclle / -exclle
Apply the permissions only to client-side (locally) encrypted files / do not apply the permissions to client-side (locally) encrypted files.
Example:
setacl mybucket -s -cacl:private -inclle (set canned ACL 'private' to all files in mybucket and subfolders that are client-side encrypted)

For example, the following -grant-read grants permission to read object data and its metadata to the AWS accounts identified by their email addresses:
-grant-read:"[email protected], [email protected]"
put LOCAL_FILES [BUCKET_NAME]/[FOLDER]/[OBJECT]
Upload files to an S3 bucket.

LOCAL_FILES
Name / path of the local file(s) to upload. Wildcard characters are supported by default (* and ?) to match multiple files. A regular expression can be used too; in that case use the flag -r on the command line, see below.
Examples:
put c:\folder\ mybucket (upload all files in c:\folder\ to mybucket)
put c:\folder\file.txt mybucket (upload file c:\folder\file.txt to mybucket)
put c:\folder\*.txt mybucket (upload files *.txt in c:\folder\ to mybucket)

[BUCKET_NAME]/[FOLDER]/[OBJECT]
Name of the S3 bucket, folder (optional) and object (optional) to upload files to. This is relative to the current S3 working location.
Examples:
put c:\folder\file.txt mybucket/subfolder/ (upload file c:\folder\file.txt to mybucket/subfolder)
put c:\folder\*.txt mybucket/subfolder/ (upload files *.txt in c:\folder\ to mybucket/subfolder)

-s
Recursive: upload local files that are in subfolders too. The subfolder structure is replicated while uploading.
Example:
put c:\folder\ mybucket -s (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket; the subfolder structure is replicated in mybucket)

-t:THREADS
Specify the number of concurrent, parallel threads used to upload files to S3. By default only 1 thread is used.
Example:
put c:\folder\ mybucket -s -t:4 (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads)

-mul:PARTSIZE
Use Amazon S3 multipart uploads to upload the files. The PARTSIZE value is optional and can be used to specify the size of each upload part, in megabytes. The minimum upload part size is 5 MB, which is also the default if PARTSIZE is not specified. The maximum size is 1000 megabytes.
Examples:
put c:\folder\ mybucket -s -t:4 -mul (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads and multipart uploads)
put c:\folder\ mybucket -s -t:4 -mul:50 (upload all files in c:\folder\ and subfolders of c:\folder\ to mybucket using 4 parallel threads and multipart uploads, with an upload part size of 50 megabytes)

Client-side encryption: to provide an encryption password, use the command setopt with the flag -clientencpwd. To provide an encryption password hint, use the command setopt with the flag -clientencpwdhint. If a password hint is specified, it is added to the metadata of each encrypted file; the metadata header containing the password hint is 'x-amz-meta-s3xpress-encrypted-pwd-hint'.

-noautostatus
Do not automatically show the latest upload status every 10 seconds. The status can still be shown by pressing the key 's' while the upload is in progress.
Example:
put c:\folder\ mybucket -noautostatus (upload files in c:\folder\ to mybucket and do not automatically show the latest upload status every 10 seconds)

-minoutput
Minimal output. Minimize the output shown in the S3Express console during a put operation. This option is useful when copying many small files to S3, which could make the S3Express console output too fast to read. Minimal output can be toggled on or off at any time during a put operation by pressing the key 'o'.
Example:
put c:\folder\ mybucket -minoutput -s

-stoponerror
Stop the operation as soon as an error occurs (do not continue with other files).
Example:
put c:\folder\ mybucket -s -stoponerror

-optimize
Enable thread optimization for transferring large amounts of relatively small files over fast connections. Recommended with at least 4 threads (-t:4).
Example:
put c:\folder\*.jpg mybucket -s -t:16 -optimize

-accelerate
Use Amazon S3 Transfer Acceleration for this operation. S3Express will use 's3-accelerate.amazonaws.com' as the endpoint for this operation. Transfer Acceleration must first be enabled for the bucket in your account or this option will fail with an error.
Example:
put c:\folder\*.jpg mybucket -accelerate

Notes:
- When uploading files to Amazon S3, the Windows modified timestamp is not kept, because Amazon S3 objects get the time of the upload as their modified timestamp. This is part of Amazon S3 functionality and does not depend on S3Express. To keep the original file modified timestamp, S3Express adds two custom metadata headers to each uploaded file: x-amz-meta-s3xpress-modified-time-iso and x-amz-meta-s3xpress-modified-time. The x-amz-meta-s3xpress-modified-time-iso header contains the original file timestamp in ISO format, while the x-amz-meta-s3xpress-modified-time header contains the original file timestamp in HTTP format. You can see these two metadata headers using the command getmeta or ls -showmeta.
- If an identical file (i.e. same MD5 value) is already stored on Amazon S3, the file is copied, not uploaded, to save bandwidth. S3Express will show which files were copied (i.e. duplicated) instead of uploaded. This functionality can be disabled using -nomd5existcheck.
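For instance, to check the preserved timestamps after an upload (a sketch using the documented getmeta command and ls -showmeta flag; the file name is hypothetical):
getmeta mybucket/file.txt (show all metadata of the uploaded file, including x-amz-meta-s3xpress-modified-time-iso)
ls mybucket -showmeta:"x-amz-meta-s3xpress-*" (show the s3xpress timestamp headers for all objects in mybucket)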
mkfol FOLDER
Create an S3 folder at the current S3 working location.
FOLDER
Name of the folder to be created. A folder on S3 is an empty object whose name ends with /.
Example:
mkfol myfolder (create folder myfolder at the current working location)
lsupl [BUCKET_NAME]/[FOLDER]/
List in-progress multipart uploads.
[BUCKET_NAME]/[FOLDER]/
Name of the bucket / folder whose in-progress multipart uploads are to be listed. Uploads are listed with name and upload ID.
Examples:
lsupl mybucket (list all in-progress multipart uploads in bucket mybucket)
lsupl mybucket/subfolder/ (list all in-progress multipart uploads in bucket mybucket, subfolder subfolder)

-s
Include all uploads, also in subfolders.
Example:
lsupl mybucket -s (list all in-progress multipart uploads in bucket mybucket, including in subfolders)
rmupl [BUCKET_NAME]/[FOLDER]/
Remove in-progress multipart uploads.
[BUCKET_NAME]/[FOLDER]/
Name of the bucket / folder whose in-progress multipart uploads are to be removed.
Example:
rmupl mybucket (remove all in-progress multipart uploads from bucket mybucket)

-id:UPLOADID
Specify the multipart upload ID to be removed.
Example:
rmupl mybucket -id:VXBsb2FkIElEIGZvciA2aWWpbmcncyBteS1tb3ZpZS5tMnRzIHVwbG9hZA (remove the multipart upload with that ID from bucket mybucket)

-file:FILE
Specify the multipart upload file name to be removed.
Example:
rmupl mybucket -file:file.txt (remove the multipart upload with name file.txt from bucket mybucket)

-s
Remove uploads recursively, in all subfolders too.
Example:
rmupl mybucket -s (remove all in-progress multipart uploads from bucket mybucket and from all subfolders of mybucket)
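A typical cleanup workflow for stale multipart uploads (a sketch; the file name is hypothetical):
lsupl mybucket -s (list in-progress multipart uploads and their upload IDs)
rmupl mybucket -file:bigarchive.zip (remove the stale upload by file name)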
del [BUCKET_NAME]/[FOLDER]/OBJECT
Delete one or more objects from a bucket.

[BUCKET_NAME]/[FOLDER]/OBJECT
Name of the bucket, folder or object(s) to delete. If not specified, objects in the current location are deleted. Wildcard characters can be used (i.e. * and ?), like in the Windows dir command. If object names contain spaces, they must be surrounded by quotation marks (") on the command line.
Examples:
del mybucket/*.txt (delete all objects with extension txt in mybucket)
del mybucket/myfolder/*.txt (delete all objects with extension txt in mybucket/myfolder/)
del "mybucket/my folder/*.txt" (delete all objects with extension txt in mybucket/my folder/, using quotation marks)

-s
Recursive deletion, i.e. process all subfolders.
Example:
del mybucket/* -s (delete all objects from mybucket)

-r
Regular expression. This flag specifies that [BUCKET_NAME]/[FOLDER]/OBJECT must be treated as a regular expression.
Examples:
del "mybucket/my folder/.*\.txt|.*\.vsn" -r (delete all objects with extension txt or vsn in mybucket/my folder/)
del mybucket/^r.* -r (delete all objects starting with 'r' in mybucket)

-sim
Only preview which objects would be deleted; do not actually delete the objects.
Example:
del mybucket/* -sim (preview object deletion from mybucket)

-stoponerror
Stop deleting files as soon as an error occurs (do not continue with other files).
Example:
del mybucket/*.txt -stoponerror (delete all objects with extension txt in mybucket; stop if, and as soon as, an error occurs, i.e. do not continue with other files)

-cond:"FILTER"
Filter condition. Only delete objects matching the specified condition. More info on filter condition syntax and variables below.
Examples:
del mybucket -s -cond:"extract_value(cache-control,'max-age') > 0" (delete all objects in mybucket (recursive, include subfolders) with a cache-control max-age value > 0 in the metadata)
del mybucket -s -cond:"size = 0" (delete all objects in mybucket (recursive) of size equal to zero)
del mybucket -cond:"name starts_with 'a'" (delete all objects in mybucket (non-recursive) with name starting with a)
del mybucket -s -cond:"name starts_with 'a' and size > 0" (delete all objects in mybucket (recursive) with name starting with a and size > 0)

-include:INCL
Only include objects matching the specified mask (wildcards). Separate multiple masks with "|".
Examples:
del mybucket -include:*.jpg (delete all jpg files in mybucket)
del mybucket -include:*.jpg|*.gif (delete all jpg and gif files in mybucket)

-exclude:EXCL
Exclude objects matching the specified mask (wildcards). Separate multiple masks with "|".
Examples:
del mybucket -exclude:*.jpg (delete all files in mybucket but exclude jpg files)
del mybucket -exclude:*.jpg|*.gif (delete all files in mybucket but exclude jpg and gif files)

-rinclude:INCL
Only include objects matching the specified mask (regular expression).
Examples:
del mybucket -rinclude:a(x|y|z)b (delete files in mybucket matching axb, ayb or azb)
del mybucket -rinclude:*.(gif|bmp|jpg) (delete files in mybucket matching anything ending with .gif, .bmp or .jpg)
del mybucket -rinclude:"IMGP[0-9]{4}.jpg" (delete files in mybucket ending with .jpg, starting with IMGP and followed by a four-digit number)

-rexclude:EXCL
Exclude objects matching the specified mask (regular expression).
Example:
del mybucket -rexclude:[abc] (delete all files in mybucket but exclude files containing a, b or c)

-inclenc / -exclenc
Include only server-side encrypted files / exclude server-side encrypted files.
Examples:
del mybucket -inclenc (delete all files in mybucket that are server-side encrypted)
del mybucket -exclenc (delete all files in mybucket that are NOT server-side encrypted)

-inclrr / -exclrr
Include only reduced redundancy files / exclude reduced redundancy files.
Examples:
del mybucket -inclrr (delete all files in mybucket that are reduced redundancy)
del mybucket -exclrr (delete all files in mybucket that are NOT reduced redundancy)

-inclia / -exclia
Include only infrequent access files / exclude infrequent access files.
Examples:
del mybucket -inclia (delete all files in mybucket that are infrequent access)
del mybucket -exclia (delete all files in mybucket that are NOT infrequent access)

-inclgl / -exclgl
Include only Glacier files / exclude Glacier files.
Examples:
del mybucket -inclgl (delete all files in mybucket that are part of Amazon Glacier)
del mybucket -exclgl (delete all files in mybucket that are NOT part of Amazon Glacier)

-inclle / -exclle
Include only client-side (locally) encrypted files / exclude client-side (locally) encrypted files.
Examples:
del mybucket -inclle (delete all files in mybucket that were locally encrypted)
del mybucket -exclle (delete all files in mybucket that were NOT locally encrypted)

-noconfirm:X
Use -noconfirm to disable the deletion confirmation request "Confirm deletion of ..." that appears if more than 1 object is selected to be deleted. You can also use -noconfirm with a value, e.g. -noconfirm:10. This disables the confirmation request only for up to 10 files: if more than 10 files are selected to be deleted, the confirmation question is still shown.
Examples:
del mybucket -s -noconfirm (delete all files in mybucket and subfolders and do not ask for confirmation)
del mybucket -s -noconfirm:100 (delete all files in mybucket and subfolders and do not ask for confirmation if 100 or fewer files are selected to be deleted; ask for confirmation if more than 100 files are selected to be deleted)

-inclversions
Include previous object versions (for buckets with object versioning enabled).
Example:
del mybucket/*.txt -inclversions (delete all objects with extension txt in mybucket and also all previous versions of the objects)

-version:ID
Specify the version ID of the object to be deleted (for buckets with object versioning enabled).
Example:
del mybucket/file.txt -version:23443232 (delete object file.txt, object version ID 23443232, in mybucket)

Notes:
- Use quotation marks (") if folder or object names contain blank spaces, e.g. del "mybucket/my folder/name with a space.txt"
- If multiple files are to be deleted, file deletion is done using multiple concurrent threads. The maximum number of threads to use can be specified with the command setopt, option -qmaxthreads.
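Before a large deletion it can be prudent to preview with -sim first (a sketch using the flags above; the bucket name is hypothetical):
del mybucket/*.log -s -sim (preview which log files would be deleted)
del mybucket/*.log -s -noconfirm (delete them without the confirmation prompt)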
restore [BUCKET_NAME]/[FOLDER]/OBJECT -days:X
Restore objects from Glacier to S3.

[BUCKET_NAME]/[FOLDER]/OBJECT
Name of the bucket, folder or object(s) to restore. If not specified, objects in the current location are restored. Wildcard characters can be used (i.e. * and ?), like in the Windows dir command. If object names contain spaces, they must be surrounded by quotation marks (") on the command line.
Examples:
restore mybucket/*.txt -days:1 (restore all objects with extension txt in mybucket for 1 day)
restore mybucket/myfolder/*.txt -days:2 (restore all objects with extension txt in mybucket/myfolder/ for 2 days)
restore "mybucket/my folder/*.txt" -days:2 (restore all objects with extension txt in mybucket/my folder/ for 2 days, using quotation marks)

-days:X
Required. The number of days you want the restored object copy to remain available.
Example:
restore mybucket/myfile.txt -days:10 (restore myfile.txt in mybucket for 10 days)

-tier:X
Optional. When restoring archived objects, you can specify one of the following data access tier options with the -tier parameter: Expedited | Standard | Bulk. The default value, if not specified, is Standard. See the Amazon S3 documentation for more information.
Examples:
restore mybucket/a.txt -days:5 -tier:Expedited
restore *.jpg -s -days:10 -tier:Bulk

-s
Recursive restore, i.e. process objects in all subfolders.
Example:
restore mybucket/* -s -days:1 (restore all objects in mybucket and subfolders for 1 day)

-r
Regular expression. This flag specifies that [BUCKET_NAME]/[FOLDER]/OBJECT is to be treated as a regular expression.
Examples:
restore "mybucket/my folder/.*\.txt|.*\.vsn" -r -days:2 (restore all objects with extension txt or vsn in mybucket/my folder/ for 2 days)
restore mybucket/^r.* -r -days:2 (restore all objects starting with 'r' in mybucket for 2 days)

-sim
Only preview which objects would be restored; do not actually restore the objects.
Example:
restore mybucket/* -sim -days:10 (preview the restore of all files in mybucket)

-stoponerror
Stop restoring files as soon as an error occurs (do not continue with other files).
Example:
restore mybucket/*.txt -stoponerror -days:1 (restore all objects with extension txt in mybucket for 1 day; stop if, and as soon as, an error occurs, i.e. do not continue with other files)

-noconfirm:X
Use -noconfirm to disable the restore confirmation request "Confirm restore of ..." that appears if more than 1 object is selected to be restored. You can also use -noconfirm with a value, e.g. -noconfirm:10. This disables the confirmation request only for up to 10 files: if more than 10 files are selected to be restored, the confirmation question is still shown.
Examples:
restore mybucket -s -noconfirm -days:7 (restore all files in mybucket and subfolders for 7 days and do not ask for confirmation)
restore mybucket -s -noconfirm:100 -days:3 (restore all files in mybucket and subfolders for 3 days and do not ask for confirmation if 100 or fewer files are selected to be restored; ask for confirmation if more than 100 files are selected to be restored)

-cond:"FILTER"
Filter condition. Only restore objects matching the specified condition. More info on filter condition syntax and variables below.
Examples:
restore mybucket -s -cond:"extract_value(cache-control,'max-age') > 0" -days:1 (restore all objects in mybucket (recursive, include subfolders) for 1 day if the cache-control max-age value in the metadata is > 0)
restore mybucket -s -cond:"size = 0" -days:2 (restore all objects in mybucket (recursive) of size equal to zero)
restore mybucket -cond:"name starts_with 'a'" -days:7 (restore all objects in mybucket (non-recursive) with name starting with a)
restore mybucket -s -cond:"name starts_with 'a' and size > 0" -days:10 (restore all objects in mybucket (recursive) with name starting with a and size > 0)

-include:INCL
Only include objects matching the specified mask (wildcards). Separate multiple masks with "|".
Examples:
restore mybucket -include:*.jpg -days:10 (restore all jpg files in mybucket for 10 days)
restore mybucket -include:*.jpg|*.gif -days:1 (restore all jpg and gif files in mybucket for 1 day)

-exclude:EXCL
Exclude objects matching the specified mask (wildcards). Separate multiple masks with "|".
Examples:
restore mybucket -exclude:*.jpg -days:1 (restore all files in mybucket but exclude jpg files)
restore mybucket -exclude:*.jpg|*.gif -days:1 (restore all files in mybucket but exclude jpg and gif files)

-rinclude:INCL
Only include objects matching the specified mask (regular expression).
Examples:
restore mybucket -rinclude:a(x|y|z)b -days:1 (restore files in mybucket matching axb, ayb or azb)
restore mybucket -rinclude:*.(gif|bmp|jpg) -days:1 (restore files in mybucket matching anything ending with .gif, .bmp or .jpg)
restore mybucket -rinclude:"IMGP[0-9]{4}.jpg" -days:1 (restore files in mybucket ending with .jpg, starting with IMGP and followed by a four-digit number)

-rexclude:EXCL
Exclude objects matching the specified mask (regular expression).
Example:
restore mybucket -rexclude:[abc] -days:1 (restore all files in mybucket but exclude files containing a, b or c)

-inclenc / -exclenc
Include only server-side encrypted files / exclude server-side encrypted files.
Examples:
restore mybucket -inclenc -days:1 (restore all files in mybucket that are server-side encrypted)
restore mybucket -exclenc -days:1 (restore all files in mybucket that are NOT server-side encrypted)

-inclgl / -exclgl
Include only Glacier files / exclude Glacier files.
Example:
restore mybucket -inclgl -days:1 (restore all files in mybucket that are part of Amazon Glacier)

-inclle / -exclle
Include only client-side (locally) encrypted files / exclude client-side (locally) encrypted files.
Examples:
restore mybucket -inclle -days:1 (restore all files in mybucket that were locally encrypted)
restore mybucket -exclle -days:1 (restore all files in mybucket that were NOT locally encrypted)

-version:ID
Specify the version ID of the object to be restored (for buckets with object versioning enabled).
Example:
restore mybucket/file.txt -version:23443232 -days:2 (restore object file.txt, object version ID 23443232, in mybucket for 2 days)

Notes:
- Use quotation marks (") if folder or object names contain blank spaces, e.g. restore "mybucket/my folder/name with a space.txt"
- If multiple files are to be restored, the operation is performed using multiple concurrent threads. The maximum number of threads to use can be specified with the command setopt, option -qmaxthreads.
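For instance, to restore all Glacier objects in a bucket for one week using the cheapest access tier (a sketch using the flags above; the bucket name is hypothetical):
restore mybucket -s -inclgl -days:7 -tier:Bulk -noconfirm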
Authorization Commands
The following commands are used to set, save, load and delete Amazon S3 authorizations, that is, Access Key
ID and Secret Access Key pairs.

saveauth ACCESS_KEY_ID SECRET_ACCESS_KEY [NAME]
Save an Access Key ID and Secret Access Key in S3Express.
Parameter Description
ACCESS_KEY_ID Required. This is the Amazon Access Key ID that S3Express should use to
connect to Amazon S3.
SECRET_ACCESS_KEY Required. This is the corresponding Amazon Secret Access Key that S3Express
should use to connect to Amazon S3.
NAME Optional. The name for this authorization. A name can be used to store
multiple authorizations in S3Express. If a name is not given, the ACCESS_KEY_ID
and SECRET_ACCESS_KEY are saved without a name, as the default
authorization.
loadauth [NAME]
Load a previously saved Access Key ID and Secret Access Key in S3Express for use.
Parameter Description
NAME Optional. The name of the authorization to load. If a name is not specified the
default ACCESS_KEY_ID and SECRET_ACCESS_KEY are loaded.
showauth [NAME]
Show previously saved Access Key ID and Secret Access Key in S3Express.
Parameter Description
NAME Optional. The name of the authorization to show. If a name is not specified
the default ACCESS_KEY_ID and SECRET_ACCESS_KEY are shown.
Note: You can show all authorizations saved in S3Express using command:
showauth <all>
rmauth [NAME]
Remove previously saved Access Key ID and Secret Access Key from S3Express.
Parameter Description
NAME Optional. The name of the authorization to remove. If a name is not specified
the default ACCESS_KEY_ID and SECRET_ACCESS_KEY are removed.
Note: You can remove all authorizations saved in S3Express using command:
rmauth <all>
Authorization Examples:
To save the Access Key ID and Secret Access Key pair FASWQDSDSSSZXAS1SA and
AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc as the default S3Express authorization use command:
saveauth FASWQDSDSSSZXAS1SA AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc
Note that by default S3Express automatically loads the most recently used authorization at startup, so once
you have saved the pair FASWQDSDSSSZXAS1SA and AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc using saveauth,
S3Express will automatically reload it every time it starts: there is no need to use loadauth each time.
You can also save multiple authorizations in S3Express. For instance you could have:
saveauth FASWQDSDSSSZXAS1SA AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc
but also:
saveauth 1FASWQDSDSSSZXAS1SA 1AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc MYAUTH1
and
saveauth 2FASWQDSDSSSZXAS1SA 2AsFZEDy2BQfFSFzFfgKyyOF/xCaRcK4RMc MYAUTH2
You would then load the required authorization using the loadauth command, e.g.:
loadauth
or
loadauth MYAUTH1
or
loadauth MYAUTH2
Note that by default S3Express loads the most recently used authorization at startup, so after loading
MYAUTH2, that authorization is reloaded automatically at the next S3Express startup.
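In a commands file (see the exec command) you can switch between saved authorizations to work with multiple accounts, e.g. (a sketch; the bucket names are hypothetical):
loadauth MYAUTH1
ls bucket-of-account-1
loadauth MYAUTH2
ls bucket-of-account-2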
setopt / showopt
The following commands are used to set (or reset) and show S3Express options.

setopt
Set S3Express options.
Note: Once set, options are saved in the Windows Registry and re-used every time S3Express
starts, unless changed again. The encryption password is stored encrypted in the Windows Registry.

showopt
Show the current value of specific S3Express options or of all options. To show specific options only,
specify the options on the command line, e.g. showopt -verbosity.
setopt examples:
setopt -clientencpwd:mypassword
setopt -reset
setopt -timer:on
setopt -endpoint:s3-website-us-gov-west-1.amazonaws.com
setopt -proxyserver:marc:[email protected]:3128
setopt -proxyserver:10.0.1.254:443
setopt -proxyserver:auto
setopt -proxyserver:direct
setopt -protocol:http
setopt -protocol:https
setopt -clientencprogram:7zip
setopt -disablecertvalidation:on
showopt examples:
showopt
showopt -verbosity
showopt -proxyserver
exec FILE_NAME
Load and execute a list of commands from a text file.
Parameter Description
FILE_NAME A file name is required. This is the name of a text file, saved with
standard ASCII or UTF-8 encoding, that contains a list of S3Express
commands to be executed. Each command must be on a separate line. If
there are comment lines in the file, they must start with character #.
Examples:
exec commands.txt (load and execute commands from file "commands.txt" in the current directory,
usually the same directory where S3Express.exe is, unless it was changed)
exec "c:\folder\subfolder A\my commands.txt" (load and execute commands from file "c:\folder\subfolder
A\my commands.txt")
Example of a commands file (using OnErrorSkip):
OnErrorSkip ON
# Put first folder. If error, it will skip the other commands, because OnErrorSkip ON was set.
put c:\folderA\ mybucket\folderA\ -s -onlydiff -purge
# finished
quit
Another example of a commands file:
# Put first folder. If error, it will NOT skip the other commands, because OnErrorSkip ON was not set.
put c:\folderA\ mybucket\folderA\ -s -onlydiff -purge
# Whatever happened, I do not want the exit code to report an error, use ResetErrorStatus
ResetErrorStatus
# finished
quit
Notes:
- If one of the commands in the text file fails (i.e. it returns an error), the other commands are still
executed, unless you specify OnErrorSkip ON.
- S3Express.exe returns 0 if all executed commands were successful. It will return 1 otherwise. You can
reset the error status with command ResetErrorStatus.
Other Commands
checkupdates
Check for program updates. This command opens a web browser and shows if there are more up-to-date versions
of S3Express available for download.
Example:
checkupdates
md5 FILE_NAME
Calculate and show the MD5 value of a local file.
Examples:
md5 c:\folderA\test.txt
md5 "c:\folderA\name with spaces.txt"
mimetype EXTENSION
Show the default MIME type used by S3Express for a specific file extension.
Examples:
mimetype .exe
mimetype .html
mimetype .jpg
mimetype .css
OnErrorSkip ON/OFF
Set the S3Express error handling behavior when processing multiple commands (see exec and scripting via
command line).
Examples:
OnErrorSkip ON
OnErrorSkip OFF
ResetErrorStatus
Reset the error status to success when processing multiple commands (see exec and scripting via command
line).
Example:
ResetErrorStatus
ShowErrorStatus
Show the current error status when processing multiple commands (see exec and scripting via command line).
Example:
ShowErrorStatus
pause SECONDS
Pause for the specified number of seconds when processing multiple commands (see exec and scripting via
command line).
Example:
pause 10 (pause for 10 seconds)
pwd
Show current local working directory.
Example:
pwd
Filter Condition Syntax (-cond:"FILTER")

OPERATORS

<> , != Not equal.
Examples: year != 2013 (true if file's year is not equal to 2013); month <> 1 (true if file's month is not equal to 1)

> , >= , < , <= Greater than, greater than or equal, smaller than, smaller than or equal.
Example: sizemb >= 20 AND sizemb <= 30 (true if file's size in MB is between 20 and 30)

+ , - , * , / Addition, subtraction, multiplication, division.
Example: hour + 10 < 20 (true if file's hour plus 10 is less than 20)

OR , AND Logical operators. Use parentheses to group for precedence.
Example: sizemb > 5 AND (year < 2010 OR year > 2013)

isoneof True if a value is contained in a list of values. The list of values must be enclosed in { } and each value separated by ; or ,. For text values, * can be used as a wildcard to match any text.
Examples: day isoneof {1,2,3,4,5,6,7,8} (true if file's day is 1,2,3,4,5,6,7 or 8); month isoneof {1;3;7;12} (true if file's month is 1,3,7 or 12); name isoneof {'*.css','*.ini'} (true if file's name matches *.css or *.ini)

isnotoneof True if a value is NOT contained in a list of values. The list of values must be enclosed in { } and each value separated by ; or ,.
Examples: day isnotoneof {1,2,3,4,5,6,7,8} (true if day is not 1,2,3,4,5,6,7 or 8); name isnotoneof {'*.css','*.ini'} (true if file's name does not match *.css or *.ini)

contains True if a text value contains another text value.
Example: name contains 'ab' (true if file's name contains 'ab'; note that the comparison is case insensitive)

starts_with True if a text value starts with another text value.
Example: s3_name starts_with 'A' (true if S3 file's name starts with 'A' or 'a'; the comparison is case insensitive)

ends_with True if a text value ends with another text value.
Example: s3_name ends_with 'Z' (true if S3 file's name ends with 'Z' or 'z'; the comparison is case insensitive)

matches True if a text value matches another text value. Using * matches any text at that location.
Example: name matches '*.txt' OR name matches '*.jpg'

regex_matches True if a text value matches another text value using regular expression syntax.
Examples: name regex_matches '^A(.*)|^B(.*)' (true if file's name starts with 'A' or 'B'; the match is case sensitive, using regular expressions); name regex_matches '(?i)^A(.*)|^B(.*)' (true if file's name starts with 'A', 'B', 'a' or 'b'; (?i) makes the match case insensitive)
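These operators can be combined freely in any -cond expression, e.g. (a sketch; s3_name and s3_age_hours are the S3 object name and age variables used elsewhere in this section):
ls mybucket -s -cond:"s3_name ends_with '.log' AND s3_age_hours > 24" (list all log files in mybucket and subfolders that are older than 24 hours)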
LOCAL FILE VARIABLES (ONLY USABLE IN THE CONDITION FOR THE PUT COMMAND)

day The file's day of the month, in the range 1 through 31.
Example: put c:\folder\ mybucket -cond:"day = 15 OR dayofweek = 1" (upload all files in c:\folder\ that were last modified on the 15th day of the month or on the first day of the week)

month The file's month of the year, in the range 1 through 12 (1 = January, 12 = December).
Example: put c:\folder\ mybucket -cond:"month = 1" (upload all files in c:\folder\ that were last modified in January)

year The file's year, 1970 to 2038.
Example: put c:\folder\ mybucket -cond:"year <> 2012" (upload all files in c:\folder\ that were not last modified in 2012)

date The file's date: 'YYYY-MM-DD'. Note that date is a text value and values must be surrounded by single quotes (').
Examples: date = '2012-04-03' (true if file's date is '2012-04-03'); date matches '2012-04-*' (true if file's date is in April 2012)

time The file's time: 'HH:MM:SS'.
Example: time matches '08:12:10' (true if file's time matches '08:12:10')

hour The file's hour, in the range 0 through 23.
Example: put c:\folder\ mybucket -cond:"hour >= 0 AND hour <= 7" (upload all files in c:\folder\ that were last modified between midnight and 7:59 am)

minute The file's minute, in the range 0 through 59.
Example: put c:\folder\*.jpg mybucket -cond:"minute >= 30" (upload all JPG files in c:\folder\ that were last modified in the second half of each hour)

second The file's second, in the range 0 through 59.
Example: put c:\folder\*.jpg mybucket -cond:"second >= 30" (upload all JPG files in c:\folder\ that were last modified in the second half of each minute)

dayofweek The file's day of the week (0 = Sunday, 1 = Monday, etc. to 6 = Saturday).
Examples: dayofweek = 0 (true if file's modified timestamp is on a Sunday); dayofweek = 0 OR dayofweek = 6 (true for Sundays and Saturdays); dayofweek <> 1 (true if file's day of week is not Monday)

dayofyear The file's day of the year, a number from 1 to 366.
Examples: dayofyear = 32 (true if file's modified timestamp is on the 1st of February); dayofyear % 2 = 0 (true if file's modified timestamp is on even days)

weeknumber The file's week number (ISO 8601).
Example: weeknumber = 32 (true if file's timestamp is in the 32nd week of the year)

endofmonth The file's end-of-month date: 'YYYY-MM-DD'.
Examples: endofmonth = '2012-04-30' (true if file's timestamp is in April 2012); endofmonth matches '*-*-30' (true if file's timestamp is in a month with 30 days)

age_days The file's age in days.
Example: put c:\folder\*.gif mybucket -cond:"age_days < 21" (upload all GIF files in c:\folder\ that were last modified less than 21 days ago)

age_hours The file's age in hours.
Example: age_hours > 1 (true if file is more than 1 hour old)

age_mins The file's age in minutes.
Example: age_mins <= 30 (true if file is less than 30 minutes old)

age_secs The file's age in seconds.
Example: age_secs >= 2000 (true if file is more than 2000 seconds old)

path The local file path (includes folder).
Example: put c:\folder\ mybucket -cond:"path contains '\subfolder\'" (upload all files in c:\folder\ whose path contains '\subfolder\', e.g. in the subfolder 'subfolder')

extension The local file extension.
Example: put c:\folder\ mybucket -cond:"extension <> '.db'" (upload all files in c:\folder\ with file extension different from '.db')

stem The local file stem (file name without extension).
Example: put c:\folder\ mybucket -cond:"stem <> 'Read'" (upload all files in c:\folder\ with file stem different from 'Read')

sizeKB The file size in kilobytes (KiB).
Example: put c:\folder\ mybucket -cond:"sizeKB > 200" (upload all files in c:\folder\ whose size is larger than 200 kilobytes)

md5 (or etag) The MD5 of the file (also referred to as 'etag').
Example: put c:\folder\ mybucket -cond:"md5 <> s3_md5" (upload all files in c:\folder\ whose MD5 value, i.e. etag, is different from the MD5 value, i.e. etag, of the corresponding file on S3)
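For instance, combining several of these variables in a single upload condition (a sketch; the folder and bucket names are hypothetical):
put c:\folder\ mybucket -s -cond:"age_days < 7 AND extension <> '.tmp' AND sizeKB > 0" (upload all non-empty files modified in the last 7 days, excluding .tmp files)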
File Time /
Date Explanation Example
Variables
s3_day The S3 object's day of the month, in the ls mybucket\folder\ -cond:"s3_day = 15" (list all
range 1 through 31 files in c:\folder\ that were lastly modified on
the 15th day of the month)
s3_month The S3 object's month of the year, in the del mybucket -s -cond:"s3_month =
range 1 through 12 (1 = January, 12 = 1" (recursively delete all files in bucket
December) 'mybucket' that were lastly modified in a
January)
s3_year The S3 object's year, 1970 to 2038 ls mybucket -s -cond:"s3_year <> 2012" (list all
files in mybucket that were not lastly modified
in 2012)
s3_date The S3 object's date: 'YYYY-MM-DD' s3_date = '2012-04-03' (true if S3 file's date is
'2012-04-03')
s3_date matches '2012-04-*' (true if S3 file's
date is in April 2012)
Note that s3_date is a text value and values
must be surrounded by two '
s3_time The S3 object's time: 'HH:MM:SS' s3_time matches '08:12:10' (true if file's time
matches '08:12:10')
s3_hour The S3 object's hour, in the range 0 ls mybucket -s -cond:"s3_hour > 8" (list all files
through 23 in mybucket that were lastly modified after 8)
s3_minute The S3 object's minute, in the range 0 ls mybucket -s -cond:"s3_minute < 30 " (list all
through 59 files in mybucket that were lastly modified in
the first half of the hour)
s3_endofmon The S3 object's end of month's date: s3_endofmonth = "2012-04-30" (true if S3 file's
th 'YYYY-MM-DD' timestamp is in April 2012)
s3_endofmonth matches "*-*-30" (true if S3
file's timestamp is in a month with 30 days)
s3_age_hour The S3 object's age in hours. ls mybucket -cond:"s3_age_hours > 1" (list all
s S3 files in mybucket that are older than 1 hour)
s3_stem
  S3 object's stem (name without extension).
  Example: ls -s -cond:"s3_stem = 'check'" (list all S3 files with stem matching 'check'. Stem = file name without extension. The operator = is case sensitive)
s3_md5 (or s3_etag)
  S3 object's MD5 (also referred to as 'etag').
  Example: put * mybucket -s -cond:"etag <> s3_etag" (upload all files in the current folder and subfolders to bucket mybucket if their etag is different from the corresponding S3 etag)
s3_object_max_age
  The S3 object's max-age value as specified in the cache-control header.
  Example: ls mybucket -s -cond:"max-age > 0" (list all files in mybucket that have a max-age in the cache-control header greater than zero)
expiry_months, expiry_days, expiry_hours, expiry_mins, expiry_secs
  If an S3 object has an expiry time and date set in the x-amz-expiration header, these values contain the number of months, days, hours, minutes and seconds until the object expires.
  Example: ls mybucket -s -cond:"expiry_months < 10" (list all objects in mybucket that will expire in less than 10 months)
  Example: ls mybucket -s -cond:"expiry_days > 100" (list all objects in mybucket that will expire in more than 100 days)
restore_expiry_day, restore_expiry_dayofweek, restore_expiry_dayofyear, restore_expiry_endofmonth, restore_expiry_weeknumber, restore_expiry_hour, restore_expiry_minute, restore_expiry_second, restore_expiry_time, restore_expiry_timestamp
  For Glacier objects that have been restored, these values contain the restore expiry day of the month (1 to 31), day of week (0 = Sunday, 1 = Monday, etc. to 6 = Saturday), day of year (1 to 366), end of month (YYYY-MM-DD), week number (ISO 8601), hour (0 to 23), minute (0 to 59), second (0 to 59), time (HH:MM:SS) and timestamp (total seconds since epoch).
  Example: ls mybucket -s -cond:"restore_expiry_year > 2015" (list all Glacier-restored objects in mybucket that will expire after December 2015)
Time Variables
curr_day
  The current day of the month, in the range 1 through 31.
  Example: curr_day = 15 OR curr_dayofweek = 1 (true on Mondays or on the 15th of every month)
curr_month
  The current month of the year, in the range 1 through 12 (1 = January, 12 = December).
  Example: curr_month = 1 (true in January)
curr_year
  The current year, 1970 to 2038.
  Example: curr_year <> 2012 (true if the current year is not 2012)
curr_time
  Today's time: "HH:MM:SS".
  Example: curr_time matches '08:1*:*' (true if the current time is between 08:10:00 and 08:19:59)
curr_hour
  The current hour, in the range 0 through 23.
  Example: curr_hour >= 0 AND curr_hour <= 7 (true between midnight and 7:59 am)
curr_minute
  The current minute, in the range 0 through 59.
  Example: curr_minute >= 30 (true in the second half of each hour)
curr_second
  The current second, in the range 0 through 59.
  Example: curr_second >= 30 (true in the second half of each minute)
curr_dayofyear
  The current day of the year, a number from 1 to 366.
  Example: curr_dayofyear = 32 (true on the 1st of February)
  Example: curr_dayofyear % 2 = 0 (true on even days)
curr_weeknumber
  The current week number (ISO 8601).
  Example: curr_weeknumber = 32 (true in the 32nd week of the year)
curr_endofmonth
  The current end of month's date: "YYYY-MM-DD".
  Example: curr_endofmonth = '2012-04-30' (true in April 2012)
  Example: curr_endofmonth matches '*-*-30' (true if today's month has 30 days)
Functions
is_lowercase(Text)
  Returns true if Text is lower case, false otherwise.
  Example: ls -s -cond:"is_lowercase(s3_path) = false" (list all objects in all buckets that are not lower case)
pad(Text, Length, Char)
  Left-pads Text with Char up to Length.
  Example: pad(s3_month, 2, '0') returns s3_month with 0 padding, i.e. 1 becomes 01, 2 becomes 02, etc., but 12 remains 12
concat(Text1, Text2)
  Concatenates two text strings into one.
  Example: concat(concat(s3_year, pad(s3_month, 2, '0')), pad(s3_day, 2, '0')) returns the S3 object's date in the form YYYYMMDD
mid(Text, Start, Length)
  Returns Length characters of Text starting at position Start; if Length is 0 or runs past the end of Text, the remainder of Text is returned (as the examples show).
  Example: mid('Sydney', 3, 2) returns 'dn'
  Example: mid('Sydney', 1, 5) returns 'Sydne'
  Example: mid('Sydney', 4, 100) returns 'ney'
value(Text)
  Returns the numerical value of Text.
  Example: value(mid('max-age=2000', 9, 0)) > 1999 returns true
  Example: value('2000') returns 2000
extract(Text, Regex)
  Extracts part of Text using the regular expression Regex.
  Example: extract('Max-age=2000', 'max-age *= *(.*)') returns '2000'
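These variables, operators and functions can be combined within a single -cond filter. For example, a hypothetical filter combining the S3 date variables with the current-time variables documented above:
ls mybucket -s -cond:"s3_year = curr_year AND s3_month = curr_month"
(list all files in mybucket, including in all subfolders, that were last modified during the current calendar month)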
25 Command Shortcuts
Command shortcuts are useful to map long commands to short tags and make it easy to re-issue the same
command again and again.
For instance, if you use S3Express to back up your files to S3, instead of having to retype a long command each time, you can define a command shortcut once and use it as needed, like this:
c1 put c:\folder\ mybucket -onlydiff
This assigns the command 'put c:\folder\ mybucket -onlydiff' to shortcut c1.
Next time, instead of having to type put c:\folder\ mybucket -onlydiff, just type c1 at the prompt and S3Express will issue the command put c:\folder\ mybucket -onlydiff for you.
Command shortcuts are customizable and are remembered between sessions: once a shortcut is assigned, it is available again the next time S3Express starts. You can re-assign shortcuts or reset them when they are no longer needed.
Examples
c1 put "c:\folder A\" mybucket -onlydiff -keep (assign command 'put "c:\folder A\" mybucket -onlydiff -
keep' to shortcut c1. Then just type c1 to issue command 'put "c:\folder A\" mybucket -onlydiff -keep')
c1 (execute shortcut command c1. This will execute 'put "c:\folder A\" mybucket -onlydiff -keep')
26 Command Variables
Command variables can be used with all S3Express commands. Variables are substituted with their real values just before a command is executed. Variables must be surrounded by <* and *>.
<*environment_variable*>
  Any Windows Environment Variable that is set can be used as a command variable, but '<*' and '*>' must be used instead of '%'. For example:
  %ALLUSERSPROFILE% becomes <*ALLUSERSPROFILE*>
  %COMPUTERNAME% becomes <*COMPUTERNAME*>
  %HOMEPATH% becomes <*HOMEPATH*>
  %WINDIR% becomes <*WINDIR*>
  etc.
<*dayofyear*>
  Replaced with the current day of the year (a number from 1 to 366) at the time S3Express runs.
  Example: put c:\folder\ mybucket/<*dayofyear*> -s (upload files to a different subfolder in bucket 'mybucket' every day of the year, rotating)
<*computer*>
  Replaced with the name of the computer running S3Express.
  Example: put c:\folder\ mybucket/<*computer*> -s (upload files to a different subfolder in bucket 'mybucket' based on the computer name)
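The variables can also be combined in a single destination path. For example, a hypothetical command that uploads to a per-computer subfolder that also rotates by day of the year:
put c:\folder\ mybucket/<*computer*>/<*dayofyear*> -s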
27 Multipart Uploads
To make uploading larger objects faster and easier, Amazon S3 introduced the multipart upload feature, and S3Express fully supports it.
By specifying the flag -mul when uploading files with the command put, S3Express will break your files into
chunks and upload them separately. You can instruct S3Express to upload a number of chunks in parallel
using the flag -t. If the upload of one single chunk fails, you can simply restart the upload and S3Express
will restart from the last successful chunk instead of having to re-upload the entire file. If you do not want
to restart an unfinished multipart upload, use the command rmupl to remove the upload.
Note: S3Express will also automatically apply the correct MD5 value when uploading files using multipart
uploads: many S3 tools are unable to do that.
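For example, a hypothetical multipart upload of a folder using 4 parallel threads (-mul enables multipart uploads and -t sets the number of threads, as described under the put command):
put c:\folder\ mybucket -mul -t:4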
28 Scripting via Command Line
In addition to being used interactively at the prompt, S3Express can also be run in automated mode, by passing the commands to execute on the S3Express command line or via a text file. The S3Express command line supports a number of flags, such as -nm and -exit used in the example below.
Note:
When using double quotation marks inside double quotation marks on the command line, the internal double quotation marks must be escaped by a \.
For example:
s3express "put \"g:\folder\folder with space\" \"bucket\folder with space\" -s -onlydiff -t:16 -minoutput -mul:50" -nm -exit
runs the following commands:
put "G:\folder\folder with space" "bucket\folder with space" -s -onlydiff -t:16 -minoutput -mul:50
quit
29 Exit Codes
S3Express.exe returns 0 if all executed commands were successful. It will return 1 otherwise.
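In a Windows batch file the exit code can be checked via ERRORLEVEL. A minimal sketch (the backup command shown is only illustrative):
s3express "put c:\folder\ mybucket -s -onlydiff" -exit
if errorlevel 1 echo S3Express reported an error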
30 Redirect Command Output to a File
When the -tofile parameter is used, the output of a command is redirected from the screen to a file. If the file already exists, it is overwritten. For example:
ls mybucket -s -tofile:c:\output.txt
put c:\folder\ mybucket -tofile:x:\folder\output.txt
If no file path is specified, but only the file name, then the file is created in the current directory:
ls mybucket -s -tofile:output.txt
To append the output to an existing file, add a '+' at the end of the file name, e.g.:
ls mybucket -s -tofile:output.txt+
Alternatively, you can also redirect the whole program output to a file using the command line, e.g.
S3Express "put c:\folder\* bucket -onlydiff" "ls bucket" q > output.txt
or to append to an existing file:
S3Express "put c:\folder\* bucket -onlydiff" "ls bucket" q >> output.txt
32 Examples
Following are some examples showing S3Express functionality.
Buckets
Create a bucket.
mkbkt mybucket
List Objects
List all objects in 'mybucket', including in all subfolders, that have .txt extension. Include MD5 values in the
output.
ls mybucket/*.txt -s -md5
List all objects in 'mybucket', including in all subfolders, that have .txt extension. Include MD5 values, metadata
and ACL in the output, in extended format (-ext). Note that showing metadata and/or ACL is slower as each
object must be queried separately.
ls mybucket/*.txt -s -md5 -showmeta -showacl -ext
List all objects in 'mybucket', including in all subfolders, that are server-side encrypted.
ls mybucket -s -inclenc
List all objects in 'mybucket', but not in subfolders, that have the header cache-control:max-age set to 60.
Show metadata in the output (-showmeta).
ls mybucket -cond:"extract_value(cache-control,'max-age')=60" -showmeta
List all objects in 'mybucket', subfolder 'mysubfolder', that do not have the cache-control header. Show
metadata in the output (-showmeta).
ls mybucket/mysubfolder/ -cond:"cache-control=''" -showmeta
List all objects in 'mybucket', subfolder 'mysubfolder', that have the cache-control header. Show metadata in
the output (-showmeta).
ls mybucket/mysubfolder/ -cond:"cache-control !='' " -showmeta
List all objects in 'mybucket', subfolder 'mysubfolder', that are larger than 5 Megabytes.
ls mybucket/mysubfolder/ -cond:"s3_sizeMB>5"
List all objects in 'mybucket', subfolder 'mysubfolder', that are larger than 5 Megabytes and have extension .txt
or .gif or .jpg
ls mybucket/mysubfolder/ -cond:"s3_sizeMB>5" -include:"*.txt|*.jpg|*.gif"
List all objects in 'mybucket', including in all subfolders, that do not start with a, b or c (using regular
expressions).
ls mybucket -s -rexclude:"^(a.*)|(b.*)|(c.*)"
Note that in the example above -rexclude uses the object name to match. To match against the entire object
path, use the s3_path variable, e.g.
ls mybucket -s -cond:"s3_path regex_matches '^(a.*)|(b.*)|(c.*)' = false"
List all objects in 'mybucket', including in all subfolders, that are not 'private' ('private' means that owner gets
FULL_CONTROL and no one else has access rights).
ls mybucket -s -cond:"s3_acl_is_private = false"
List all objects in 'mybucket', including in all subfolders, that are not 'public-read' ('public-read' means that
owner gets FULL_CONTROL and the AllUsers group gets READ access).
ls mybucket -s -cond:"s3_acl_is_public_read = false"
List all objects in 'mybucket', including in all subfolders, and group output by object's extension.
ls mybucket -s -grp:ext
Show a summary of all objects in 'mybucket', including in all subfolders, which have the cache-control:max-age
value greater than 0, and group the output by cache-control header value. Do not show each object, just a
summary (-sum parameter).
ls mybucket -s -sum -grp:cache-control -cond:"extract_value(cache-control,'max-age')>0"
Upload Files
Upload all files in c:\folder\ and its subfolders to mybucket using 3 parallel threads and multipart uploads.
Throttle bandwidth to 50Kb/s. Make all uploaded files 'public-read' and set cache-control header to max-age=60.
put c:\folder\ mybucket -s -t:3 -mul -maxb:50 -cacl:public-read -meta:"cache-control:max-age=60"
Upload all files from c:\folder\ to mybucket and apply S3 server-side encryption for all uploaded files.
put c:\folder\ mybucket -e
Upload all files from c:\folder\ to mybucket. Before uploading, apply client-side local encryption.
put c:\folder\ mybucket -le
Upload non-empty files from c:\folder\ to mybucket and keep metadata and ACL of files that are overwritten.
put c:\folder\ mybucket -cond:"size <> 0" -keep
Upload only changed or new files from c:\folder\ to mybucket and keep metadata and ACL of files that are
overwritten.
Changed files are files whose content has changed, i.e. whose MD5 hash differs from the S3 copy. New files are files that are not yet present on the S3 bucket.
Options -onlynew (upload only new files), -onlynewer (upload only files that have a newer timestamp) and -
onlyexisting (re-upload only files that are already present on S3) are also available.
put c:\folder\ mybucket -onlydiff -keep
Upload only changed or new files from c:\folder\ to mybucket. Purge (=delete) S3 files in mybucket that are no
longer present in c:\folder\. Keep output to console to minimum (-minoutput).
put c:\folder\ mybucket -s -onlydiff -purge -minoutput
Upload all *.jpg and *.gif files from c:\folder\ to mybucket, only if they already exist on S3.
put c:\folder\ mybucket -include:*.jpg|*.gif -onlyexisting
Upload all *.jpg and *.gif files from c:\folder\ to mybucket, only if the files already exist on S3. Simulation (= preview) only: show the list of files that would be uploaded.
put c:\folder\ mybucket -include:*.jpg|*.gif -onlyexisting -sim
Delete Objects
Delete files in mybucket, including subfolders, that have cache-control:max-age > 0 in the metadata.
del mybucket/* -s -cond:"extract_value(cache-control,'max-age') > 0"
Delete files in mybucket, including subfolders, with name starting with 'a'. Stop deleting files as soon as an error
occurs.
del mybucket/* -s -cond:"name starts_with 'a'" -stoponerror
Delete previous versions of files in mybucket, which are older than 6 months, including subfolders.
del mybucket/* -s -onlyprev -cond:"s3_age_months>6"
Copy Objects
Copy file mybucket/myfile.txt to mybucket/subfolder/myfile.txt, and also copy metadata and ACL from the source mybucket/myfile.txt to the target mybucket/subfolder/myfile.txt.
copy mybucket/myfile.txt mybucket/subfolder/myfile.txt -keep
Metadata
Set header x-amz-meta-test=yes to all files in mybucket/subfolder/ that have extension *.exe or *.rpt.
setmeta mybucket/subfolder/* -meta:"x-amz-meta-test=yes" -include:"*.exe|*.rpt"
Set server-side encryption header (= encrypt files) to all files in mybucket/subfolder/ that are larger than 5MB
and do not have extension *.exe or *.rpt.
setmeta mybucket/* -e:+ -cond:"size_mb > 5" -exclude:"*.exe|*.rpt"
Get the metadata of file.txt and show ALL metadata headers.
getmeta file.txt
Get the metadata of file.txt, but show only the cache-control header in the output.
getmeta file.txt -showmeta:"cache-control"
Get the metadata of file.txt, but show only the cache-control header and the x-amz-server-side-encryption
header in the output.
getmeta file.txt -showmeta:"cache-control|x-amz-server-side-encryption"
ACLs
Set canned ACL 'private' to all jpg files in mybucket, including in subfolders of mybucket.
setacl mybucket/*.jpg -s -cacl:private
Set canned ACL 'public-read-write' to all txt files in mybucket, including in subfolders of mybucket.
setacl mybucket/*.txt -s -cacl:public-read-write
33 FAQ and Knowledge Base
Most viewed topics:
How Do I Backup to Amazon S3 with S3Express?
PDF Tutorial 'Backup Files to Amazon S3 with S3Express'
Release History
Security Considerations
S3Express How-To
34 How to Buy S3Express
Buy Page: www.s3express.com/buy.htm
S3Express Homepage: www.s3express.com
35 How to Enter the License
Note: the License Key and Text will be delivered via e-mail; you then enter them in S3Express to unlock the trial.
1) Start S3Express.exe
2) Enter the License Key and Text using the license command
36 License Agreement
This S3Express Software License Agreement ("S3EXPSLA") is a legal agreement between you (either an
individual or a single entity) and TGRMN Software for the S3Express software product, which includes
computer software and associated media and printed materials, and may include "online" or electronic
documentation ("SOFTWARE PRODUCT" or "SOFTWARE").
By installing, copying, or otherwise using the SOFTWARE PRODUCT, you agree to be bound by the terms of
this S3EXPSLA. If you do not agree to the terms of this S3EXPSLA, uninstall the SOFTWARE PRODUCT and
promptly return the unused SOFTWARE PRODUCT to the place from which you obtained it for a full refund.
This SOFTWARE PRODUCT is protected by copyright laws and international copyright treaties, as well as
other intellectual property laws and treaties. The SOFTWARE PRODUCT is licensed, not sold.
Grant of License
This license agreement grants you the following rights:
You may install and use one copy of the SOFTWARE PRODUCT on a single computer at a time and only by
one user at a time.
You may install and use copies of the SOFTWARE PRODUCT, up to, but not to exceed, the number of
licenses shown on your purchase record.
Each primary user of the SOFTWARE PRODUCT specified above may also install and use an additional copy of the SOFTWARE PRODUCT on a portable device or home computer (not both), provided this copy is not used concurrently with the primary copy.
Network Storage/Use - You may also store or install a copy of the SOFTWARE PRODUCT on a storage
device, such as a network server, used only to install or run the SOFTWARE PRODUCT on your other
computers over an internal network; however, you must acquire and dedicate a license for each separate
computer on which the SOFTWARE PRODUCT is installed or run from the storage device.
Concurrent Use - A license for the SOFTWARE PRODUCT may not be used concurrently on different
computers.
Distribution
Provided that you verify that you are distributing the evaluation version, you are hereby licensed to make as
many copies of the evaluation version of this SOFTWARE PRODUCT and documentation as you wish, give
exact copies of the original evaluation version to anyone, and distribute the evaluation version of the
SOFTWARE and documentation in its unmodified form via electronic means. There is no charge for any of the
above.
You are specifically prohibited from charging or requesting donations for any such copies however made,
and from distributing the SOFTWARE PRODUCT and/or documentation with other products (commercial or
otherwise) without prior written permission.
Disclaimer of Warranty
THIS SOFTWARE PRODUCT AND THE ACCOMPANYING FILES ARE SOLD "AS IS" AND WITHOUT WARRANTIES AS TO PERFORMANCE OR MERCHANTABILITY OR ANY OTHER WARRANTIES WHETHER EXPRESSED OR IMPLIED. IN
NO EVENT WILL TGRMN SOFTWARE BE LIABLE TO YOU FOR ANY CONSEQUENTIAL, INCIDENTAL OR SPECIAL
DAMAGES, INCLUDING ANY LOST PROFITS OR LOST SAVINGS. Because of the various hardware and software
environments into which S3Express may be put, NO WARRANTY OF FITNESS FOR A PARTICULAR PURPOSE IS
OFFERED.
Good data processing procedure dictates that any program be thoroughly tested with non-critical data
before relying on it. The user must assume the entire risk of using the SOFTWARE. ANY LIABILITY OF THE
SELLER WILL BE LIMITED EXCLUSIVELY TO PRODUCT REPLACEMENT OR REFUND OF PURCHASE PRICE.
Restrictions
You agree not to modify, adapt, translate, reverse engineer, decompile, disassemble or otherwise attempt
to discover the source code of the SOFTWARE. You may not use, copy, modify or transfer copies of the
SOFTWARE except as provided in this license. You may not decompile, disassemble, or create derivative
works based upon the SOFTWARE. You may not modify, adapt, translate, or create derivative works based
upon the written documentation. You may not sub-license, rent, lease, sell or assign the SOFTWARE to
others.
Governing Law
This Agreement shall be governed by, construed and enforced in accordance with the internal substantive laws (and not the choice of law rules) of South Australia, Australia, without giving effect to the conflict of laws provisions. Sole venue shall be in the applicable state and federal courts of South Australia.