timkay / aws
Easy command line access to Amazon EC2, S3, SQS, ELB, and SDB
Home Page: http://timkay.com/aws/
It would be nice for some S3 batch operations to have a --max-keys option that returns only the requested number of keys, e.g., for 'ls'. If I have a bucket with 100,000 keys, I don't want to fetch them all at once and then iterate; I'd rather fetch 1,000 at a time, so the script doesn't spend forever just fetching keys. Obviously this only works where the file list is actually changing (say, keys are removed as they are processed); otherwise you just get the same 1,000 the second time.
I added a version to mine, but it is very lame: midendian@73826c2
It looks like it can take --marker from the command line, which could do what I want somehow, do you know off-hand how?
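For reference, the underlying S3 ListObjects call pages with max-keys and marker: marker means "start after this key", so each request's marker is the last key of the previous page. A local simulation of that semantics (the key list and page size here are made up; whether `aws ls` forwards these parameters is exactly what's in question):

```shell
# Simulated marker-based paging over a sorted key list, mirroring S3's
# ListObjects max-keys/marker semantics. KEYS stands in for a bucket listing;
# a page size of 3 plays the role of --max-keys.
KEYS="a b c d e f g"
MARKER=""
pages=""
while :; do
    page=""
    count=0
    seen=""
    [ -z "$MARKER" ] && seen=yes
    for k in $KEYS; do
        # skip keys up to and including the current marker
        if [ -z "$seen" ]; then
            [ "$k" = "$MARKER" ] && seen=yes
            continue
        fi
        page="$page $k"
        count=$((count + 1))
        [ "$count" -eq 3 ] && break
    done
    [ -z "$page" ] && break
    echo "page:$page"
    pages="$pages|$page"
    MARKER="${page##* }"    # next marker = last key of this page
done
```

Each iteration fetches one "page" and remembers its last key as the next marker, which is the loop the --marker option would let a script drive.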
Amazon AWS has a feature to scale the system automatically. The documentation is provided at
http://aws.amazon.com/documentation/autoscaling/
Feel free to add this feature to your nifty tool. Thank you.
I wrote a 10-line patch to provide JSON support in the aws tool. When --json is passed, it opportunistically "requires" XML::Simple and JSON and transforms $xml into one line of JSON output; the modules are not loaded otherwise. Are you interested in this patch? Would you rather stick with XML and YAML, or do you like it but want it without the module dependency?
Thanks!
"aws put" routinely fails partway through uploading a file (a separate bug that I'm too lazy to diagnose), so we wrap it in a while loop. Good enough for government work.
That worked great with version 1.49, which would exit non-zero in that case. 1.75 seems to exit with rc=0 regardless of whether it succeeds or fails:
$ for a in *zip; do echo $a ; until aws-1.49 --progress --public put bucket $a ; do echo "Trying again." ; done ; done
foo.zip
################################### 49.7%
Trying again.
######################################################################## 100.0%
$
vs.
$ for a in *zip; do echo $a ; until aws-1.75 --progress --public put bucket $a ; do echo "Trying again." ; done ; done
foo.zip
#################################################################### 94.6%
$
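Until the exit status is trustworthy again, a retry cap at least keeps the wrapper loop from spinning forever. A sketch, where upload_cmd is a hypothetical stand-in for the real aws put invocation:

```shell
# Capped retry loop (sketch): upload_cmd stands in for
# "aws --progress --public put bucket $a". The loop stops after 5 failed
# attempts instead of retrying indefinitely.
upload_cmd() { false; }    # stand-in that always fails, for illustration
tries=0
until upload_cmd || [ "$tries" -ge 5 ]; do
    tries=$((tries + 1))
    echo "Trying again ($tries)."
done
```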
We've been using aws to back up our files for quite a while now. Suddenly today, it started returning the error:
505 HTTP Version Not Supported
I tried checking out the newest version of the code (we were on a REALLY old revision) but got the same error.
From a bit of googling around, it seems this error code can mean a bunch of different things. Not sure if there's anything I can do to fix the issue or even to provide more debug output. Is anybody else suddenly getting this error as well?
For what it's worth, it seems to only be broken on ls, not on put.
Tim Kay,
First of all, thanks for this great tool.
aws describe-instances [InstanceId...] does not show the user-data field.
I use this field to assign a friendly name to the instance, which I then use to assign friendly names to snapshots created of volumes attached to this instance.
Am I overlooking a command-line option to retrieve this information?
Till now I have been using
ec2-describe-instance-attribute [InstanceId...] --user-data
to retrieve this information, but I would be much happier using your lean & mean 'aws' tool.
If it is not currently possible with your tool, would you consider adding it?
Sounds like a 'describe-instance-attribute' option to me.
Do you have a temporary 'curl commandline' alternative that could be used to query this information?
Thank you in advance.
Justin
I can't quite figure out what's going on (the code looks like it does the right thing), but I'm leaving a public note because the error is non-obvious: for S3 results that come in many parts and take a long time (several minutes), aws will eventually die with exit code 22 and print "403 Forbidden".
Adding a large --expire-time value fixes it.
I have an aws put as part of an automated buildbot script, and typically when something goes wrong, aws returns a nonzero exit status so that I can stop the rest of my script from running. In this case, however, curl looks like it failed, but aws still returned zero. The output from curl says:
curl: (56) SSL read: error:00000000:lib(0):func(0):reason(0), errno 104
The command used to invoke aws was:
~/bin/aws put 'x-amz-acl: public-read' julianightlies/bin/osx/x64/0.4/julia-0.4.0-5039cb1011-osx.dmg /tmp/julia_package/Julia-0.4.0-dev-5039cb1011.dmg
Over at Transloadit we're happy users of this project. One of our customers tried to upload something to the new Frankfurt datacenters, and got the following error:
+----------------+----------------------------------------------------------------------------------------------+------------------+
| Code | Message | RequestId |
+----------------+----------------------------------------------------------------------------------------------+------------------+
| InvalidRequest | The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. | 8779D556E0813FCE |
+----------------+----------------------------------------------------------------------------------------------+------------------+
As it turns out, this region only supports AWS Signature Version 4, and is not backwards compatible.
Any chance of supporting AWS Signature Version 4?
I always get exit code 0 from these two commands, even when I try to get a message from a non-existent queue or delete a message with a missing or invalid handle. When using aws from scripts, knowing whether a delete succeeded would help avoid looping over the same messages repeatedly after the visibility timeout passes.
Is there any good way to detect failures in these commands?
Odd error, I pulled down the latest version about an hour ago and have been getting this error:
[sgreen@db02 ~]$ perl aws --install
Missing right curly or square bracket at aws line 2766, at end of line
syntax error at aws line 2766, at EOF
Execution of aws aborted due to compilation errors.
I couldn't find an obvious missing curly or bracket. Thoughts?
The "copy" function is lossy: the key (everything following the bucket) is truncated.
example:
aws copy telemetry-easodeasoddldefault /telemetry-easod-easod_dldefault/cd/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz
results in this PUT:
--request PUT --dump-header - --location 'https://telemetry-easodeasoddldefault.s3.amazonaws.com/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz? (etc)'
the /cd/ part of the key is gone.
so we moved from:
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easod-easod_dldefault/cd/msocial13-2012083016-13003.telem-2012083016170
cd/msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$
to
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easodeasoddldefault/msocial13-2012083016-13003.telem-2012083016170
msocial13-2012083016-13003.telem-20120830161702-ec2-174-129-33-195-448597a85ae04aa7ba0a4b2e7c2ec0a8.tsv.gz
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$
instead of
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$ aws ls -1 telemetry-easodeasoddldefault/cd/msocial13-2012083016-13003.telem-2012083016170
easod_dlfeature2:easod:i-0d5e6963:dev:[root@ec2-50-19-86-128 ~]$
As a feature request, it would be really nice to also be able to use this toolset on the new Amazon Glacier storage.
Currently ~/.awssecrets is unencrypted, which is a pain. I propose:
- ~/.awssecrets.gpg, which is decrypted through GPG
- support for ~/.authinfo and ~/.netrc (format TBD; these should also support a .gpg extension)
- aws could talk the git credential protocol, see http://www.kernel.org/pub/software/scm/git/docs/technical/api-credentials.html (the idea being that this supports secure credential storage outside the aws codebase, simplifying things)
If we specify an aws put operation with a single argument only, it fails with the message:
root:~# aws put some-long-filename-with-version.1.0.2-and-releases-like.this.jar
sanity-check: Your system clock is 366 seconds ahead.
/usr/local/bin/aws: will not to read from terminal (use "-" for filename to force)
The message is a bit confusing in this case; it would be better if it mentioned the missing argument.
Probably some other commands behave the same way.
When the file to be downloaded is not found, no error is returned, which is bad for scripting.
$ ./aws --secrets-file=/path/.awssecret --progress get bucket/path/file.zip /mnt/workspace/project/file.zip
404 Not Found
$ echo $?
0
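As a stopgap until the exit status is fixed, a wrapper can scan the output for the HTTP error line. A sketch (awsget is a hypothetical name, not part of the tool):

```shell
# Hypothetical wrapper: run the given command, echo its output, and return
# nonzero if the output contains an HTTP 404 line, since the wrapped tool
# exits 0 either way.
awsget() {
    out=$("$@" 2>&1)
    printf '%s\n' "$out"
    case $out in
        *"404 Not Found"*) return 1 ;;
    esac
}

# Simulate the failing download: a command that prints the error but exits 0.
awsget sh -c 'echo "404 Not Found"; exit 0'
rc=$?
echo "exit: $rc"
```

The same pattern works for any other error string the tool prints without reflecting it in its exit status.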
According to the S3 put-object documentation, we can attach custom metadata to an object with:
--metadata key_name=string,key_name2=string
Is this something aws supports? I've tried --metadata, --meta-data, --meta, and --data; all failed with the "mispelled meta parameter?" error.
Hi again Tim,
Thanks for the speedy help last time, I've found another small issue and once again my lack of Perl skills is letting me down.
I have defined a bash function as follows
function notifyQueue {
./aws --silent --simple --region="${REGION}" send-message "${1}" -message "${2}" || { echo $?; echo "{\"error\":\"Could not notify queue\",\"id\":\"${POST_ID}\"}"; exit 1; }
echo $?
}
This works fine the majority of the time, but every so often I get the following response from your script
+--------+-----------------------+----------------------------------------------------------------------------------+
| Type | Code | Message |
+--------+-----------------------+----------------------------------------------------------------------------------+
| Sender | SignatureDoesNotMatch | The request signature we calculated does not match the signature you provided... |
+--------+-----------------------+----------------------------------------------------------------------------------+
When this happens I still get an exit code of 0.
I've noticed that your documentation mentions that you don't always return a non-zero value on error, but can you give me some pointers as to where I would need to detect this in order to get a non-zero value?
My use case is that after processing a message I need to send a message to a second queue, when that message fails I don't want to delete the current working message.
Thanks again.
Hi Tim,
Love the tool. So easy to use.
I would like to add RDS support.
Is this something you would like me to add in?
Any issues I should be aware of with adding in RDS support?
And I'd like to do it now!
warm regards
Rob
sudo curl https://raw.github.com/timkay/aws/master/aws -o /usr/bin/aws
[user@host ~]$ aws s3 ls
s3: unknown command
Is there any way to set IndexDocument for S3 buckets?
http://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTwebsite.html
Due to this magic:
if (/^(?:x-amz-|Cache-|Content-|Expires:|If-|Range:)/i)
you can't have files named, for example, if-i-only-had-a-brain.txt. (Or, in my case, dealing with an ill-advised base64 alphabet.)
Would adding a special argument '--' to indicate "treat the remaining argv as files" make it less surprising?
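A quick illustration of the ambiguity, approximating the Perl regex above with shell patterns (the classify function and its exact patterns are mine, for demonstration only):

```shell
# Approximation of the header-detection regex: any argument starting with
# x-amz-, Cache-, Content-, Expires:, If-, or Range: is eaten as a header,
# so an If-* filename can never reach the file-argument path. (Only the
# leading letters vary case here; the real regex is fully case-insensitive.)
classify() {
    case $1 in
        [Xx]-[Aa][Mm][Zz]-* | [Cc]ache-* | [Cc]ontent-* | [Ee]xpires:* | [Ii]f-* | [Rr]ange:*)
            echo "header: $1" ;;
        *)
            echo "file: $1" ;;
    esac
}

classify "x-amz-acl: public-read"       # a real header
classify "if-i-only-had-a-brain.txt"    # a filename, misclassified
classify "notes.txt"                    # unaffected
```

With a '--' separator, everything after it would bypass classify entirely and be treated as a file.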
AWS IAM roles allow us to automatically fetch the access key and secret access key from a metadata endpoint:
http://169.254.169.254/latest/meta-data/iam/security-credentials/
These credentials expire after a set time (about 6 hrs).
Note that the credentials do require an extra "Token" variable.
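For context, the credentials document the metadata service returns is flat JSON. A sketch of consuming it (the JSON below is a made-up sample; the real calls are shown in comments because they only work from inside an instance):

```shell
# On a real instance, the role name and credentials come from the metadata
# service:
#   ROLE=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/)
#   CREDS=$(curl -s "http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE")
# Made-up sample response, for illustration:
CREDS='{"AccessKeyId":"AKIDEXAMPLE","SecretAccessKey":"secretEXAMPLE","Token":"tokenEXAMPLE","Expiration":"2015-01-01T00:00:00Z"}'

# Crude field extraction; adequate for this flat, fully-quoted JSON.
access_key=$(printf '%s' "$CREDS" | sed -n 's/.*"AccessKeyId":"\([^"]*\)".*/\1/p')
secret_key=$(printf '%s' "$CREDS" | sed -n 's/.*"SecretAccessKey":"\([^"]*\)".*/\1/p')
token=$(printf '%s' "$CREDS" | sed -n 's/.*"Token":"\([^"]*\)".*/\1/p')
echo "$access_key $token"
```

The Token value is what requests signed with these temporary credentials must also carry, and Expiration is why the tool would need to re-fetch periodically.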
Thanks for your work, this is a great program. I've been using it for years.
Every time I do a yum update and Amazon updates boto, it overwrites the s3 commands (symlinks) and I have to reinstall aws.
Just wondering if there's a way around this?
When I try to describe an image with "aws dim --headers ", I get "-H: mispelled meta parameter?"
I only need to get the "state" of the ami.
I'm using aws --fail head to check whether a file exists in a bucket. It hangs with the --fail option if the file already exists. I would expect an exit code of 0 if the file exists and some other code if it doesn't.
#!/bin/bash
# set some settings for the aws utility
export EC2_ACCESS_KEY=MYKEY
export EC2_SECRET_KEY=MYSECRET
export S3_DIR=MYBUCKET
FILE=a.test
# do not overwrite an existing file on aws of the same name
aws --fail head $FILE # this will hang if the file exists already
#aws head $FILE # this will not hang, but it doesn't give me a proper return code
if [[ $? == 0 ]]; then
echo "file already exists on aws" >&2
exit 1
fi;
echo "file doesn't exist yet"
describe-instances is missing a field to show the security group in which each instance is running.
It removes this function:
-sub R53_xml_data {
- return { 'header' => '<?xml version="1.0" encoding="UTF-8"?>',
- 'POST|' => '<CreateHostedZoneRequest xmlns="https://route53.amazonaws.com/doc/2012-02-29/"><Name></Name><CallerReference></CallerReference><HostedZoneConfig><Comment></Comment></HostedZoneConfig></CreateHostedZoneRequest>',
- 'POST|rrset' => '<ChangeResourceRecordSetsRequest xmlns="https://route53.amazonaws.com/doc/2012-02-29/"> <ChangeBatch> <Comment></Comment> <Changes> <!--REPEAT--> <Change> <Action></Action> <ResourceRecordSet> <Name></Name> <Type></Type> <TTL></TTL> <ResourceRecords> <ResourceRecord> <Value></Value> </ResourceRecord> </ResourceRecords> </ResourceRecordSet> </Change> <!--/REPEAT--> </Changes> </ChangeBatch> </ChangeResourceRecordSetsRequest>',
-
- };
-}
Hi, I'm trying to use aws for querying SimpleDB. When I get my results, it looks something like:
+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Name | Attribute |
+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Domain | Name=Count Value=467263rO0ABXNyACdjb20uYW1hem9uLnNkcy5RdWVyeVByb2Nlc3Nvci5Nb3JlVG9rZW7racXLnINNqwMA?C0kAFGluaXRpYWxDb25qdW5jdEluZGV4WgAOaXNQYWdlQm91bmRhcnlKAAxsYXN0RW50aXR5SURa?AApscnFFbmFibGVkSQAPcXVlcnlDb21wbGV4aXR5SgATcXVlcnlTdHJpbmdDaGVja3N1bUkACnVu?aW9uSW5kZXhaAA11c2VRdWVyeUluZGV4TAANY29uc2lzdGVudExTTnQAEkxqYXZhL2xhbmcvU3Ry?aW5nO0wAEmxhc3RBdHRyaWJ1dGVWYWx1ZXEAfgABTAAJc29ydE9yZGVydAAvTGNvbS9hbWF6b24v?c2RzL1F1ZXJ5UHJvY2Vzc29yL1F1ZXJ5JFNvcnRPcmRlcjt4cAAAAAAAAAAAAAAO3s4AAAAAAQAA?AAAAAAAAAAAAAABwcHB4 |
+--------+---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
So I think the string starting with rO0 has something to do with the NextToken, but if I copy/paste that entire string I'm told:
The specified next token is not valid.
So how does one deal with tokens?
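One guess (an untested assumption on my part, not something from the docs): the "?" characters in the printed value may stand in for line breaks inside the real base64 NextToken, in which case the copy-pasted string is corrupted. Restoring them before resubmitting would look like:

```shell
# Untested hypothesis: "?" in the table output replaces newlines inside the
# base64 NextToken. TOKEN below is a shortened, made-up sample, not a real
# token.
TOKEN='rO0ABXNy?C0kAFGlu?aW9uSW5k'
restored=$(printf '%s' "$TOKEN" | tr '?' '\n')
printf '%s\n' "$restored"
```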
With every version of aws from commit 002baa1 onwards, commands that rely on IAM role based authentication no longer work.
Here is the error state with the latest version:
$ aws describe-tags --json --region=eu-west-1 --filter resource-id=i-obfuscated --sha1
{"Errors":{"Error":{"Message":"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.","Code":"SignatureDoesNotMatch"}},"RequestID":"6ec2d44c-3596-477e-b742-obfuscated"}
And here is the correct response when I revert to commit 7b8e99d
$ aws describe-tags --json --region=eu-west-1 --filter resource-id=i-obfuscated
{"xmlns":"http://ec2.amazonaws.com/doc/2013-10-15/","tagSet":{"item":{"aws:autoscaling:groupName":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Version":{"resourceType":"instance","value":"obfuscated","resourceId":"obfuscated"},"DataCenterID":{"value":"A","resourceType":"instance","resourceId":"obfuscated"},"aws:cloudformation:stack-id":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"aws:cloudformation:logical-id":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Roles":{"resourceId":"obfuscated","value":"obfuscated","resourceType":"instance"},"aws:cloudformation:stack-name":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"Name":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Environment":{"value":"obfuscated","resourceType":"instance","resourceId":"obfuscated"},"ConfigDecryptionKey":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"},"Branch":{"resourceId":"obfuscated","resourceType":"instance","value":"obfuscated"}}},"requestId":"5d495b51-3445-49cd-b529-obfuscated"}
ec2-create-tags $INSTANCE --tag Name=12345678901234567890123456789
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Code | Message |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| SignatureDoesNotMatch | The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details. |
+-----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
Lengths other than 29 work, and lengths of 29 work with the official Java CLI tools.
aws currently uses a bordered table format that requires an enormous amount of terminal window space to view, and (for example) returns a separate ASCII table for each EC2 instance in list-instances. The ASCII output could be substantially shorter and easier to read, designed for viewing within a monitor-sized terminal (120 columns? 160?), with a --verbose option to get everything.
I have encountered two situations where aws get hangs when downloading a 13MB file. My script looks like this:
HCDIR="desktop/hc"
BUCKET="homersoft-dist"
AWS_GET="aws get --progress"
AWS_LS="aws ls -1"
TEMPDIR="/tmp/"
OBJECT="$HCDIR/$VERSION.zip"
if [ -n "$($AWS_LS $BUCKET/$OBJECT)" ]; then    # non-empty listing means the object exists
echo "# $OBJECT found"
$AWS_GET $BUCKET/$OBJECT $TEMPDIR
else
echo "# $OBJECT not found"
fi
I've been trying to use the --wait option to poll for when my EC2 instance comes up. However, when I pass either --wait or --simple, the output breaks and it does not wait/poll for my instance to come out of pending status. The AWS API call DOES work correctly, and my box still comes up, but I have to poll for status manually.
I'm using aws script version 1.75 on ubuntu 11.04. Shell output below.
Great script, thanks for making it available!
Not passing --wait or --simple works:
john@jump0$ aws run -n 1 -v --group cluster --key control --type m1.small --availability-zone us-east-1d ami-XXXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 9 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXXXXX&Action=RunInstances&Expires=2011-10-03T22%3A16%3A58Z&ImageId=ami-XXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+
| instanceId | imageId | instanceState | keyName | instanceType | launchTime | placement | kernelId | monitoring | stateReason | rootDeviceType | hypervisor |
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+
| i-XXXXX | ami-XXXX | code=0 name=pending | control | m1.small | 2011-10-03T22:16:28.000Z | availabilityZone=us-east-1d | aki-XXXXXX | state=disabled | code=pending message=pending | instance-store | xen |
+------------+--------------+---------------------+---------+--------------+--------------------------+-----------------------------+--------------+----------------+------------------------------+----------------+------------+
Passing --simple:
john@jump0$ aws run -n 1 -v --simple --group cluster --key control --type m1.small --availability-zone us-east-1d --simple ami-XXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 9 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXXXX&Action=RunInstances&Expires=2011-10-03T22%3A18%3A05Z&ImageId=ami-XXXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15
Passing --wait:
john@jump0$ aws run -n 1 -v --wait=10 --group cluster --key control --type m1.small --availability-zone us-east-1d --simple ami-XXXXX
aws version: v1.75 (ec2: 2010-11-15, sqs: 2009-02-01, elb: 2010-07-01, sdb: 2009-04-15, iam: 2010-05-08)
sanity-check: Your system clock is 10 seconds behind.
ec2(Action, RunInstances, MinCount, 1, MaxCount, 1, SecurityGroup.1, cluster, KeyName, control, InstanceType, m1.small, Placement.AvailabilityZone, us-east-1d, ImageId, ami-XXXXX)
data = GET\nec2.amazonaws.com\n/\nAWSAccessKeyId=XXXX&Action=RunInstances&Expires=2011-10-03T22%3A26%3A57Z&ImageId=ami-XXXX&InstanceType=m1.small&KeyName=control&MaxCount=1&MinCount=1&Placement.AvailabilityZone=us-east-1d&SecurityGroup.1=cluster&SignatureMethod=HmacSHA1&SignatureVersion=2&Version=2010-11-15
".awssecrets" in the readme should be ".awssecret".
As currently implemented, multipart upload starts automatically if the file is >25GB (see line 1288). I am guessing the intention was 5GB and it is just a typo in the 5GB calculation.
I am a great fan of aws. Keep up the excellent work!
s3put returns exit code 0 when a file is not found, which is kinda bad for scripting.
For example:
s3put BUCKET_NAME/ foobar.txt ; echo $?
Gives the following output:
curl: Can't open 'foobar.txt'! curl: try 'curl --help' or 'curl --manual' for more information 0
$ aws receive-message 55555555555/test
$ echo $?
0
With the above, there is a message already in the queue which doesn't get read. Same behavior on send, even when I use IAM credentials that don't have permission to put.
aws tells me it is at version v1.77
EDIT: mea culpa. As you can see above, I missed a leading / before the queue name, which led to a mangled URL, causing curl to exit with status 6. That error code is unhandled, however, so we never see it. Not a good situation, but not the bug I thought.
Feel free to close.
Hi Tim,
Thanks for the great bit of code. It's making my life significantly easier for a project I am working on.
That said, I am having a couple of issues with SQS. The first is with long polling (which I notice is a recent commit to master). Maybe I'm missing something, but after adding --wait=20 to the command line, long polling doesn't seem to be working. Have I got the right parameter and format, or am I doing something wrong?
My second issue is a bit more significant. I'm currently using --exec='system("processMessage", "$body"); $?;' and despite the system call returning 0 (there is no message from your script saying otherwise), the message does not get deleted from SQS. Am I misreading the docs, are the docs wrong, or is there something else?
Thanks in advance for your help.
Hello,
I didn't find the way to change the instance size and/or other instance attributes (like termination protection).
Is there one or did I miss it?
Thanks,
Timur
I get the following error when I try to access a S3 bucket in the new Frankfurt region:
307 Temporary Redirect
+----------------+----------------------------------------------------------------------------------------------+------------------+
| Code | Message | RequestId |
+----------------+----------------------------------------------------------------------------------------------+------------------+
| InvalidRequest | The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256. | 5899A2581461XXXX |
+----------------+----------------------------------------------------------------------------------------------+------------------+
Copying object BUCKET2/OBJECT2 to BUCKET1/OBJECT1 fails with the error:
The specified key does not exist.
e.g
s3copy breinput/sample/tables/table1 breinput/sample/tables/t1.
I used the document from timkay.com/aws/ for the syntax.
Any help is appreciated.
I hope I'm missing something, but I can't find anything that documents meta-parameters. The only reason I even know about them is the "mispelled meta parameter?" error, and from looking at the code, which isn't terribly helpful.
Latest versions of CentOS, RedHat Linux and Fedora have curl compiled with NSS, not OpenSSL. The aws command fails with:
sanity-check: Your curl doesn't seem to support SSL. Try using --http
The sanity check done using:
curl -q --cipher RC4-SHA -s --include https://connection.s3.amazonaws.com/test
fails (with exit code 59, meaning "Couldn't use specified SSL cipher") because the cipher string for NSS is different from the string for OpenSSL or GnuTLS (which use "RC4-SHA"). For versions compiled with NSS, the cipher string should be "rsa_rc4_128_sha". A working check would be:
curl -q --cipher rsa_rc4_128_sha -s --include https://connection.s3.amazonaws.com/test
I think the solution would be to test with RC4-SHA and retry with rsa_rc4_128_sha if it fails, or try to detect if curl uses NSS. The version command shows the library used:
curl --version
curl 7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
for a version with OpenSSL and:
curl --version
curl 7.29.0 (x86_64-redhat-linux-gnu) libcurl/7.29.0 NSS/3.15.4 zlib/1.2.7 libidn/1.28 libssh2/1.4.3
for a version with NSS.
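The backend detection suggested above could be as simple as grepping that version line (a sketch, assuming the `curl --version` output formats shown):

```shell
# Pick the cipher name based on curl's SSL backend: NSS builds advertise
# "NSS/x.y.z" on the first line of `curl --version`.
if curl --version 2>/dev/null | head -n 1 | grep -q 'NSS/'; then
    cipher=rsa_rc4_128_sha
else
    cipher=RC4-SHA
fi
echo "using cipher: $cipher"
```

The retry approach (try RC4-SHA, fall back to rsa_rc4_128_sha on exit code 59) would also work and avoids parsing version strings at all.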
Regards,
MValdez.
AWS just released commands for managing tags on the instances.
ec2-create-tags, ec2-describe-tags, and ec2-delete-tags
Using the latest version, with the following command line syntax as an example:
/usr/bin/aws put "x-amz-server-side-encryption: AES256" ${BUCKET_NAME}/${PARTITION_DATE}/${FILE_NAME} ${UPLOAD_FILEPATH}
For files less than 5GB in size, which upload as a single part, the end state of the put to S3 is a file with server-side encryption enabled.
For files greater than 5GB in size, which the client automatically uploads via multi-part, the end state of the put to S3 is a file without server-side encryption enabled even though the "x-amz-server-side-encryption: AES256" header is specified on the put.
I can successfully upload a file greater than 5GB with SSE enabled using the following multi-part logic:
dd if=/dev/zero of=file1.img bs=1 count=0 seek=3G
dd if=/dev/zero of=file2.img bs=1 count=0 seek=3G
dd if=/dev/zero of=file3.img bs=1 count=0 seek=3G
./aws post "x-amz-server-side-encryption: AES256" ${BUCKET_NAME}/MyMpu?uploads
./aws put ${BUCKET_NAME}/MyMpu?part file1.img
./aws put ${BUCKET_NAME}/MyMpu?part file2.img
./aws put ${BUCKET_NAME}/MyMpu?part file3.img
./aws post ${BUCKET_NAME}/MyMpu?upload
In my specific use case, some of the files being uploaded are small and some are very large, so I cannot easily divide all files into multi-part chunks using this logic. Should the client support this automatically when it follows the multi-part code path for any individual file >= 5GB?
My findings show that PUTs from STDIN are not really streamed. It looks like aws first reads all data from STDIN, and only after all data is received does it trigger curl to upload.
Is it possible to support real streaming? This would be great for stream modification (for example, doing inline encryption with gnupg).
So I was uploading a 15 GB file (and alternatively, a 6.5GB file) with the following sample command
~/aws put backup_bucket/diskimg.raw /mnt/img/diskimg.raw
it was able to upload the file parts and they show up in the bucket, but the command always shows a "405 Method Not Allowed" in the end.
For the 15GB file, it would upload 2~3 files of 5 GB each, then it shows a "405 Method Not Allowed". (the file parts along with the upload ids would remain in the bucket).
I initially thought it was a permission issue, but after checking, I don't think it is.
The bucket grants full rights to the bucket owner and any authenticated user. The bucket policy is in the default setting and empty. The CORS policy is in the default setting and shouldn't apply to me, since I'm running the command from an EC2 instance within the same account. Also, I'm not using any IAM roles (they are all empty).
In addition, and most importantly, I was able to upload any files less than 5 GB in size, as always, to the same bucket.
The aws command was already the latest version, v1.80.
I am trying to send a message to an sqs queue where the body looks like "folder/sub-folder/filename.txt"
The resulting message posted to SQS is encoded, looking like "folder%2Fsub-folder%2Ffilename.txt".
I don't want to have to change other downstream processing to handle the encoding, and would really like it if the aws tool could send the message appropriately.
I attempted to use the --curl-options but to no avail.
Is it a bug or am I just being a numpty?