Simple Gradle plugin that uploads and downloads S3 objects. This is a fork of the gradle-s3-plugin, which is itself a fork of mgk/s3-plugin; both appear to be no longer under active development. It has been updated to work with Gradle version 6 and later.
Add the following to your build.gradle file:
```groovy
plugins {
    id 'com.fuseanalytics.gradle.s3' version '1.2.6'
}
```
This project uses semantic versioning. See the Gradle plugin portal page for other versions.
The S3 plugin searches for credentials in the same order as the AWS default credentials provider chain. Additionally, you can specify a credentials profile to use by setting the s3.profile property:
```groovy
s3 {
    profile = 'my-profile'
}
```
Setting the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY is one way to provide your S3 credentials. See the AWS documentation for details on credentials.
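For example, in a POSIX shell (the values below are placeholders, not real credentials):

```shell
# Placeholder credentials: substitute your own values before running Gradle.
export AWS_ACCESS_KEY_ID=AKIAEXAMPLEACCESSKEY
export AWS_SECRET_ACCESS_KEY=exampleSecretAccessKey
```

Any Gradle invocation in the same shell session will then pick these up through the default credentials provider chain.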
The s3.region property can optionally be set to define the AWS region if one has not been set in the authentication profile. It can also be used to override the default region set in the AWS credentials provider.
```groovy
s3 {
    region = 'us-east-1'
}
```
The s3.bucket property sets a default S3 bucket that is common to all tasks. This can be useful if all S3 tasks operate against the same Amazon S3 bucket.
```groovy
s3 {
    bucket = 'my.default.bucketname'
}
```
The following Gradle tasks are provided.
The S3Upload task uploads one or more files to S3. This task has two modes of operation: single file upload and directory upload (including recursive upload of all child subdirectories). Properties that apply to both modes:
- bucket - S3 bucket to use (optional, defaults to the project's s3 configured bucket)
For a single file upload:
- key - key of the S3 object to create
- file - path of the file to be uploaded
- overwrite - (optional, default is false) if true, the S3 object is created or overwritten if it already exists
- then - (optional) callback closure called upon completion with the java.io.File that was uploaded
By default S3Upload does not overwrite the S3 object if it already exists. Set overwrite to true to upload the file even if it exists.
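A single-file upload might be configured like this (bucket, key, and file names here are hypothetical):

```groovy
// Hypothetical task: uploads a local build artifact to S3,
// replacing any existing object at that key.
task uploadArtifact(type: S3Upload) {
    bucket    = 'my.example.bucket'          // omit to use the default s3.bucket
    key       = 'releases/app-1.0.0.jar'     // S3 object key to create
    file      = 'build/libs/app-1.0.0.jar'   // local file to upload
    overwrite = true                         // replace the object if it already exists
    then      = { File f -> println "Uploaded ${f.name}" }
}
```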
For a directory upload:
- keyPrefix - root S3 prefix under which to create the uploaded contents
- sourceDir - local directory containing the contents to be uploaded
- then - (optional) callback closure called upon completion with each java.io.File that was uploaded
A directory upload will always overwrite existing content if it already exists under the specified S3 prefix.
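A directory upload, by contrast, needs only a key prefix and a source directory. A hypothetical sketch:

```groovy
// Hypothetical task: uploads the contents of a local directory
// (and all of its subdirectories) under the given S3 prefix.
task uploadDocs(type: S3Upload) {
    bucket    = 'my.example.bucket'
    keyPrefix = 'site/docs'        // objects are created under this prefix
    sourceDir = 'build/docs'       // local directory whose contents are uploaded
    then      = { File f -> println "Uploaded ${f.name}" }
}
```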
The S3Download task downloads one or more S3 objects. This task has three modes of operation: single file download, recursive download, and multiple download. Properties that apply to all modes:
- bucket - S3 bucket to use (optional, defaults to the project's s3 configured bucket)
For a single file download (deprecated - use the keys property instead):
- key - key of the S3 object to download
- file - local path of the file to save the download to
- then - (optional) callback closure called upon completion with the java.io.File that was downloaded
For a recursive download (deprecated - use the keys property instead):
- keyPrefix - S3 prefix of objects to download
- destDir - local directory to download objects to
- then - (optional) callback closure called upon completion with each java.io.File that was downloaded
For multiple download:
- keys - an Iterable of individual file keys, key prefixes ending with an asterisk (e.g. /dir1/dir2/logs*), or whole directories (e.g. /dir1/). Directories MUST end in a trailing forward slash (/)
- destDir - local directory to download objects into
- then - (optional) callback closure called upon completion with each java.io.File that was downloaded
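The three key forms can be mixed in a single task. A hypothetical sketch (bucket and key names are made up):

```groovy
// Hypothetical task: downloads one individual object, everything
// matching a prefix wildcard, and a whole directory in one pass.
task downloadAssets(type: S3Download) {
    bucket  = 'my.example.bucket'
    keys    = ['config/app.yml',   // a single object key
               'logs/2023-01*',    // a key prefix ending in an asterisk
               'images/']          // a whole directory; note the trailing slash
    destDir = 'build/assets'
}
```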
Note: recursive downloads create a sparse directory tree containing the full keyPrefix under destDir. So with an S3 bucket containing the object keys:
```
top/foo/bar
top/README
```
a recursive download:
```groovy
task downloadRecursive(type: S3Download) {
    keyPrefix = 'top/foo/'
    destDir   = 'local-dir'
    then      = { File file ->
        // do something with the file
    }
}
```
results in this local tree:
```
local-dir/
└── top
    └── foo
        └── bar
```
So only files under top/foo are downloaded, but their full S3 paths are appended to destDir. This is different from the behavior of the AWS CLI's aws s3 cp --recursive command, which prunes the root of the downloaded objects. Use the flexible Gradle Copy task to prune the tree after downloading it.
For example:
```groovy
def localTree = 'path/to/some/location'

task downloadRecursive(type: S3Download) {
    bucket    = 's3-bucket-name'
    keyPrefix = "${localTree}"
    destDir   = "${buildDir}/download-root"
}

// prune and re-root the downloaded tree, removing the keyPrefix
task copyDownload(type: Copy, dependsOn: downloadRecursive) {
    from "${buildDir}/download-root/${localTree}"
    into "${buildDir}/pruned-tree"
}
```
Downloads report percentage progress at the Gradle INFO level. Run Gradle with the -i option to see download progress.