jcberquist / aws-cfml
Lucee/ColdFusion library for interacting with AWS APIs.
License: MIT License
Installed the code and everything was working flawlessly yesterday on a CF implementation. I have made no code changes, and today it no longer works. Further, I am getting no errors, just a blank page. If I comment out the aws() call, my page loads. I am only using the S3 service. Any ideas on how to troubleshoot what is going on?
aws = new cfcs.aws( awsKey = '*****', awsSecretKey = '*****', defaultRegion = 'us-east-2' );
// all_buckets = aws.s3.listBucket( delimiter = '/', bucket = 'smartsys-website-images' );
// all_buckets = all_buckets.data.CommonPrefixes;
// bucket = aws.s3.listBucket( delimiter = '/', prefix = '#url.p#/', bucket = 'smartsys-website-images' );
// s3_images = bucket.data.Contents;
// s3_image_count = arrayLen( s3_images );
Any help would be appreciated.
How would I use this to move a zip file to S3? And how would I get that file back from S3 to my filesystem?
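A minimal sketch of both directions (bucket and key names are placeholders; the argument casing follows the putObject/getObject calls quoted elsewhere on this page, so check the s3.cfc signatures):

```cfml
// upload: read the zip as binary and put it to S3
aws = new aws( awsKey = 'KEY', awsSecretKey = 'SECRET', defaultRegion = 'us-east-1' );
zipData = fileReadBinary( expandPath( './myfile.zip' ) );
aws.s3.putObject( Bucket = 'my-bucket', ObjectKey = 'backups/myfile.zip', FileContent = zipData );

// download: the object bytes come back in rawData; write them to disk
res = aws.s3.getObject( Bucket = 'my-bucket', ObjectKey = 'backups/myfile.zip' );
if ( res.statusCode == 200 ) {
    fileWrite( expandPath( './restored.zip' ), res.rawData );
}
```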
By any chance do you have a sample using the s3.putObject request? I have a page that accepts an upload of a PDF from a client and reads the file contents in using fileReadBinary(), but I cannot figure out the correct usage of the putObject() call. I've tried passing the file contents raw, with toBinary(), with toBase64(), with toBinary() and then toBase64(), and even tried various encodeForXML() calls, as the message I'm receiving back references Bad Request - Invalid XML.
"S3RESPONSE":{
"responseHeaders":{
"Explanation":"Bad Request",
"Connection":"close",
"Transfer-Encoding":"chunked",
"Date":"Thu, 29 Aug 2019 20:01:21 GMT",
"Server":"AmazonS3",
"x-amz-id-2":"p3BuX1V4L/f0zexSJ+crfZxZQI1M+jj6XBlCm/KcCYj8lPeDPpgGHwkk4mGofUyavACP6H9c/iU=",
"Content-Type":"application/xml",
"Status_Code":400,
"x-amz-request-id":"FBA1B419A83083C3",
"Http_Version":"HTTP/1.1"
},
"rawData":"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MalformedXML</Code><Message>The XML you provided was not well-formed or did not validate against our published schema</Message><RequestId>FBA1B419A83083C3</RequestId><HostId>p3BuX1V4L/f0zexSJ+crfZxZQI1M+jj6XBlCm/KcCYj8lPeDPpgGHwkk4mGofUyavACP6H9c/iU=</HostId></Error>",
"statusCode":400,
"responseTime":580.0,
"error":{
"Message":"The XML you provided was not well-formed or did not validate against our published schema",
"Code":"MalformedXML",
"RequestId":"FBA1B419A83083C3",
"HostId":"p3BuX1V4L/f0zexSJ+crfZxZQI1M+jj6XBlCm/KcCYj8lPeDPpgGHwkk4mGofUyavACP6H9c/iU="
}
}
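For what it's worth, a MalformedXML response means S3 tried to parse the request body as XML, which suggests the PDF bytes never reached the body. A sketch of what I would expect to work, assuming putObject takes the raw bytes in a FileContent argument (the names here are assumptions; verify against s3.cfc):

```cfml
// pass the bytes from fileReadBinary straight through -- no toBase64,
// toBinary, or encodeForXML; the library signs and sends the raw body
pdfData = fileReadBinary( getTempDirectory() & 'upload.pdf' ); // placeholder path
res = aws.s3.putObject(
    Bucket = 'my-bucket',          // placeholder
    ObjectKey = 'docs/client.pdf', // placeholder
    FileContent = pdfData
);
```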
Hi,
sorry to be posting this question here.. :)
I am using the s3 hookup of your library and I need to be able to get the size in MB or GB of a specific bucket. How would I do that?
Thanks in advance!
Art
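There is no single S3 call that returns a bucket's size; one approach is to list the objects and sum their sizes. A sketch, assuming each entry in data.Contents carries a numeric Size key (check the actual response shape):

```cfml
// sums object sizes via listBucket; for buckets with more than 1,000
// objects you would need to page with the continuation token --
// check listBucket's signature for the exact argument name
totalBytes = 0;
res = aws.s3.listBucket( bucket = 'my-bucket' );
for ( obj in res.data.Contents ) {
    totalBytes += obj.Size;
}
writeOutput( numberFormat( totalBytes / 1024 ^ 2, '0.00' ) & ' MB' );
```

Alternatively, CloudWatch publishes a daily BucketSizeBytes metric per bucket, which avoids listing every object.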
Hello, I tried to implement DynamoDB PartiQL in the library, as it seemed simpler for running queries, but I was not successful.
Could you implement a method that uses DynamoDB PartiQL? https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/ql-reference.html
Please advise.
Tag Context:
Tag: CF_CFPAGE
Template: /lib/ex-shared/model/utils/aws/com/utils.cfc
Line: 10
Tag: CF_TEMPLATEPROXY
Template: /lib/ex-shared/model/utils/aws/services/s3.cfc
Line: 25
Tag: CF_TEMPLATEPROXY
Template: /handlers/Download.cfc
Line: 216
Tag: CF_UDFMETHOD
Having some struggle here: the documentation is not enough, and I have read it many times but am unable to move ahead due to the confusion. How can I get the Seller Partner API working with this AWS library (either by using IAM or without IAM)?
Hey John,
I'm sure this is not an issue so much as something I'm doing wrong & I would really appreciate any insight you could provide as I'm having the devil of a time trying to get to the bottom of it. I'm using your excellent library inside a function thusly:
try {
response = aws.s3.getObject(
Bucket = "#arguments.s3_bucket#",
ObjectKey = "#arguments.s3_file_key#"
);
if (response.statusCode eq 200) {
file_content = response.rawData;
file_disp = response.responseHeaders['Content-Disposition'];
file_type = response.responseHeaders['Content-Type'];
file_length = response.responseHeaders['Content-Length'];
cfheader( name = "Content-Disposition", value = file_disp );
cfheader( name = "Content-Type", value = file_type );
cfheader( name = "Content-Length", value = file_length );
cfcontent( file = file_content, reset = "true", type = file_type );
return "Success";
} else {
return "S3 Download Error!";
}
} catch (any e) {
return "Error: #e.message#";
}
Authentication has already been done. Files look OK on S3 (They've been uploaded using your library) & will download successfully through the console.
It looks like it works OK & "success" is returned with the downloaded file - but the file content is corrupted (as in I can't open it). Do you have any idea why this might be? Do I need to process the rawData somehow? Any help would be very gratefully received. TIA
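One possible culprit, offered as a guess: in cfscript, cfcontent's file attribute expects a path on disk, so handing it the raw bytes may be what corrupts the output. For in-memory binary data the variable attribute is the usual route:

```cfml
// `variable` takes a binary variable; `file` expects a filesystem path
cfheader( name = "Content-Disposition", value = file_disp );
cfcontent( variable = file_content, reset = "true", type = file_type );
```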
It looks like the official -light tagged Docker containers have removed deprecated functions. encodeForURL() now throws an error in the encodeUrl() util.
Hey John, I love using your ST3 plugin for ACF/Lucee, and your quick responses to issues that come up, so I'm intrigued about this project.
I'm currently looking for a way to monitor the health of my Lucee apps using CloudWatch Events, which is new to me. Looks like I need to use the put-events
AWS API call (https://docs.aws.amazon.com/cli/latest/reference/events/put-events.html)
Have you done this? I'm assuming this would be possible using your wrapper. Yes? No? Not sure? Thanks!
I see there's a method for creating signed s3 urls, but I would like to create signed cloudfront urls. I'm guessing there's an overlap here... would it be possible to use this library for this purpose?
The use case is that I have a cloudfront distribution that serves public and restricted content. I need all links to this content to go through cloudfront.
I'm trying to make sense of this set of AWS docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
Does anyone have a guide for how this works?
Need to upload to an S3 bucket, and the client will only provide a PEM file.
Not really familiar with using these and having trouble finding the info online.
Lucee on Windows.
Already using aws-cfml for S3 with awsKey/awsSecretKey for credentials.
Can I just take the PEM file somehow and extract what I need for the above, or does it have to be installed on the server somehow (ugh), or something else entirely?
This is more a question than an issue, depending on the answer.
Invoking s3.listBuckets(): if there is one bucket, the data.Buckets value is a struct; if there is more than one, it's an array.
Is this the expected/desired behaviour?
Thanks!
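This looks like a common XML-to-struct deserialization artifact: a single repeated element comes back as a struct, multiple come back as an array. Until it is normalized in the library, a caller can guard for it:

```cfml
res = aws.s3.listBuckets();
buckets = res.data.Buckets;
// a single <Bucket> element deserializes as a struct; wrap it so
// downstream code can always treat buckets as an array
if ( isStruct( buckets ) ) {
    buckets = [ buckets ];
}
```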
I am trying to run this:
form.inString = "<speak><prosody pitch='medium' rate='medium'>Ginelle, Middens</prosody></speak>";
form.voice = "Joanna";
response = variables.aws.polly.synthesizeSpeech(
    text = replace( form.inString, "'", '"', 'all' ),
    voiceid = form.voice,
    textType = "ssml"
);
What I get in response is as if the SSML (XML) tags were not even there. It is responding as if the input were: speak prosody pitch medium rate medium Ginelle, Middens prosody speak. I have tried encodeForXML and canonicalize. I wonder what it needs.
Hi
For the secret manager usage:
response = aws.secretsmanager.getSecretValue( 'YourSecretId' );
secretValue = response.data.SecretString;
is YourSecretId the ARN of the secret and SecretString the Secret Key?
Suppose my values in secret manager as follows:
Secret key | Secret value
password | xxxxxxxxxxxx
YourSecretId == arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:MyRDSSecret-C4t4tS
SecretString == password
response = aws.secretsmanager.getSecretValue( 'arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:MyRDSSecret-C4t4tS' );
secretValue = response.data.password;
does that look about right?
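Almost, but if I read the Secrets Manager API right, SecretString is the entire JSON document stored in the secret rather than one key, so response.data.password would not exist directly. A sketch of the access I believe is correct (key names match the console table above):

```cfml
response = aws.secretsmanager.getSecretValue( 'arn:aws:secretsmanager:ap-southeast-2:123456789012:secret:MyRDSSecret-C4t4tS' );
// SecretString holds the full JSON document for the secret
secret = deserializeJSON( response.data.SecretString );
password = secret.password; // the 'password' key from the table above
```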
I was fighting with an AWS "signature does not match" issue and narrowed it down to a specific line, coldfusion.cfc:20. The original line is:
var fullPath = utils.encodeUrl( path, false ) & ( !queryParams.isEmpty() ? ( '?' & utils.parseQueryParams( queryParams ) ) : '' );
Changing to:
var fullPath = path & ( !queryParams.isEmpty() ? ( '?' & utils.parseQueryParams( queryParams ) ) : '' );
fixes the issue.
I was posting a request to the execute-api service at AWS. My URI is:
https://someletters.execute-api.us-west-2.amazonaws.com/beta/@connections/a0abIc59PHcCJQQ=
I don't know whether this code supports MinIO or not.
I'm using Lucee 5.3.9.141; when I use the listBuckets function it shows the full bucket list, but when I try getBucketAccess or putObject it shows a Connection Failure with an empty response header.
I am trying to put a file to s3.wasabisys.com instead of Amazon. Their compatibility with Amazon is 100%.
Where in the code below can I specify the different endpoint?
aws = new aws(
awsKey = 'YOUR_PUBLIC_KEY',
awsSecretKey = 'YOUR_PRIVATE_KEY',
defaultRegion = 'us-east-1'
);
zipFileData = fileReadBinary('myfile.zip');
aws.s3.putObject(bucket = 'mybucket', objectKey = 'xxxx', fileContent = zipFileData);
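If the service constructors accept a host override via the constructorArgs struct that init() already takes, something like this may work; the host key name is an assumption, so check s3.cfc's init() arguments:

```cfml
aws = new aws(
    awsKey = 'YOUR_PUBLIC_KEY',
    awsSecretKey = 'YOUR_PRIVATE_KEY',
    defaultRegion = 'us-east-1',
    constructorArgs = { s3: { host: 's3.wasabisys.com' } } // `host` key is an assumption
);
```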
I'm not sure what that message means, but it can't pull the functions into the outline view, which is REALLY handy for navigating large files. I'm using the kamasamasomething CFML plugin, and that view is really helpful in ColdBox programming.
Path style URLs are deprecated, so the S3.generatePresignedURL() method should support virtual host style urls.
Path style: https://s3-eu-west-1.amazonaws.com/foo/bar
Virtual host style: https://foo.s3-eu-west-1.amazonaws.com/bar
Workaround: call com.api.signedUrl() method directly with the correct host and path for virtual host style urls.
Hello All,
Does anyone have an example of how to use this plugin with the Amazon SP-API? I've tried a few things but have been unsuccessful.
Thanks!
I know it says TODO, but can you TODO it now? lol.
I am getting a signature mismatch and wondering if it's because I am not using this.
I guess it's not an issue with aws-cfml, but I would be interested to see if this is normal behavior.
It takes about 1 second to upload 100 KB to S3. I tested it with a 1.7 MB image at first, which took about 18 seconds. If I do the same on the same server with the S3 web interface, it takes under 1 second.
Any experiences / ideas on how to speed that up?
When an application already has a directory or mapping named services, the constructor throws a "cfc not found" exception. A servicesPath argument would resolve this, as below.
public struct function init(
string awsKey = '',
string awsSecretKey = '',
string defaultRegion = '',
struct constructorArgs = {},
struct httpProxy = {server: '', port: 80},
string servicesPath = 'services'
) {
this.api = new com.api(awsKey, awsSecretKey, defaultRegion, httpProxy);
for (var service in variables.services) {
if (StructKeyExists(arguments.constructorArgs, service)) {
StructAppend(variables.constructorArgs[service], arguments.constructorArgs[service]);
}
this[service] = new '#arguments.servicesPath#.#service#'(this.api, variables.constructorArgs[service]);
}
return this;
}
In my case, I'd instantiate it like so:
awscfml = new services.awscfml.aws(
awsKey = "foo",
awsSecretKey = "bar",
defaultRegion = "ap-southeast-2",
servicesPath = "app.services.awscfml.services"
);
Any ideas?
Here is the response:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>[REDACTED]</AWSAccessKeyId>
<StringToSign>AWS4-HMAC-SHA256 20171207T174452Z 20171207/us-east-1/s3/aws4_request 763b7c22df8a9fd0d780c11fd576dfcfb85fb5673eb9b1925f0bdfd607c046b7</StringToSign>
<SignatureProvided>0286f3f0236dcfd0f9b96a600bbcd3636693df6319fe19f77f91010c2dd07b3b</SignatureProvided>
<StringToSignBytes>41 57 53 34 2d 48 4d 12 43 2d 53 48 41 32 35 36 0a 32 30 31 37 31 32 30 37 54 31 37 34 34 35 32 5a 0a 32 30 31 37 31 32 30 37 2f 75 73 2d 65 61 73 74 2d 31 2f 73 33 2f 61 77 73 34 5f 72 65 71 75 65 73 74 0a 37 36 33 62 37 63 32 32 64 66 38 61 39 66 64 30 64 37 38 30 63 31 31 66 64 35 37 36 64 66 63 66 62 38 35 66 62 35 36 37 33 65 62 39 62 31 39 32 35 66 30 62 37 37 64 36 30 37 63 30 34 36 62 37</StringToSignBytes>
<CanonicalRequest>GET /resourceFile/2017/0362337b-fa9b-4e06-a393-aa87e91c71bc.82892e8d-079f-4c7c-b5d3-c683c55712dc.zip tagging=true host:my-bucket.s3.amazonaws.com x-amz-content-sha256:STREAMING-AWS4-HMAC-SHA256-PAYLOAD x-amz-date:20171207T171252Z host;x-amz-content-sha256;x-amz-date STREAMING-AWS4-HMAC-SHA256-PAYLOAD</CanonicalRequest>
<CanonicalRequestBytes>47 45 54 0a 2f 72 65 73 6f 75 72 63 65 46 69 6c 65 2f 32 30 31 37 2f 30 33 36 32 33 33 37 62 2d 66 61 39 62 2d 12 65 30 36 2d 61 33 39 33 2d 61 61 38 37 65 39 31 63 37 31 62 63 2e 38 32 38 39 32 65 38 64 2d 30 37 39 66 2d 34 63 37 63 2d 62 35 64 33 2d 63 36 38 33 63 35 35 37 31 35 64 63 2e 7a 69 70 0a 74 61 67 67 69 6e 67 3d 74 72 75 65 0a 68 6f 73 74 3a 65 64 65 78 66 65 2d 70 75 62 6c 69 63 2d 64 65 76 2e 73 33 2e 61 6d 61 7a 6f 6e 61 77 73 2e 63 6f 6d 0a 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3a 53 54 52 45 41 4d 49 4e 47 2d 41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 2d 50 41 59 4c 4f 41 44 0a 78 2d 61 6d 7a 2d 64 61 74 65 3a 32 30 31 37 31 32 30 37 54 31 37 34 34 35 32 5a 0a 0a 68 6f 73 74 3b 78 2d 61 6d 7a 2d 63 6f 6e 74 65 6e 74 2d 73 68 61 32 35 36 3b 78 2d 61 6d 7a 2d 64 61 74 65 0a 53 54 52 45 41 4d 49 4e 47 2d 41 57 53 34 2d 48 4d 41 43 2d 53 48 41 32 35 36 2d 50 41 59 4c 4f 41 44</CanonicalRequestBytes>
<RequestId>C1E5F6C594F122D8</RequestId>
<HostId>0O1AJDyY8Vz8NIGvY4ExNdgI5JA0cFks9D12HuznsgIXQkHG4Kt02MMEetTpo0CyLVk3Et8zUT0=</HostId>
</Error>
Files over 5 GB cannot be uploaded to S3 in a single request; for these you need to split them and upload the pieces using the multipart upload process.
This ticket is to capture those additions. (See https://docs.aws.amazon.com/AmazonS3/latest/userguide/mpuoverview.html)
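For reference, the S3 flow is CreateMultipartUpload (returns an UploadId), UploadPart for each chunk while collecting ETags, then CompleteMultipartUpload with the part list. The method names below are hypothetical; they do not exist in the library and only sketch the shape such an addition might take:

```cfml
// hypothetical API sketch -- these methods are not in aws-cfml (yet)
upload = aws.s3.createMultipartUpload( Bucket = 'my-bucket', ObjectKey = 'big.zip' );
uploadId = upload.data.UploadId;

parts = [ ];
partNumber = 1;
for ( chunk in chunks ) { // chunked file reading omitted for brevity
    res = aws.s3.uploadPart(
        Bucket = 'my-bucket',
        ObjectKey = 'big.zip',
        UploadId = uploadId,
        PartNumber = partNumber,
        FileContent = chunk
    );
    parts.append( { PartNumber: partNumber, ETag: res.responseHeaders[ 'ETag' ] } );
    partNumber++;
}

aws.s3.completeMultipartUpload( Bucket = 'my-bucket', ObjectKey = 'big.zip', UploadId = uploadId, Parts = parts );
```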
Hi there, we're using your library in this project: https://github.com/pixl8/s3-commandbox-commands - many thanks for the hard work! :)
We're pulling in your project as a commandbox dependency and simply pointing to your github repo. The issue with this is that, when you change the code, our project can then break (as it did with the recent rename of the 'region' argument to the API).
Simple tagging with git would be enough for us to be able to peg the dependency to a specific version - but you could also consider registering the project with ForgeBox and releasing versions there.
Anyway, it's not a huge deal - but it would be helpful for others, I think. Happy to help get set up to push to ForgeBox if wanted, too.
Regards,
Dominic
We have JSON that can have a variety of arbitrary schemas, and it could change significantly over time. It seems, at least compared to MongoDB, that DynamoDB is not very good at handling arbitrary JSON formats, but is more suited for formats that you have complete control over. Would you agree?
Or do you have more samples of dealing with arbitrary JSON in various complex formats?
Thank you!
No matter what I do, I am getting SignatureDoesNotMatch as a response. We are initially just trying to use the sts service to get an access token. If I run it via Postman it works, same with cURL. @jcberquist, are you open to consulting? I would be happy to pay for your time.
Hi, how do I connect to the DB locally?
Hi
I am currently using getObject but do not know how to load the S3 file on my website for viewing. Any ideas?
Hi. Is it possible to use the generatePresignedURL function to link to a version of an object? I have tried adding the s3 object version ID to the query string (e.g. https://s3-eu-west-2.amazonaws.com//?versionId=&...) but it generates an error: 'The request signature we calculated does not match the signature you provided. Check your key and signing method.' I also tried adding it as the X-Amz-Version-Id header but this seems to be ignored. Any help would be greatly appreciated.
Thanks
Richard
Thanks for this code! I'm trying to use it on a CF10 project with DynamoDB. I modified the code locally to work around CF11-specific code, but am stuck on the parseQueryParams() method in the utils.cfc.
If you could write that method to work in CF10, I'd be happy to share the rest of my code that works with CF10. I wish I could upgrade, but it's not an option at this time.
Hello,
I am trying to generate a presigned PUT URL where the file has public-read acl after it is uploaded. Googling around the different SDKs I do see it is possible, but I cannot seem to figure out the spot in this library to add it.
I was able to upload the file just fine with the correct content type by taking the generatePresignedURL function and changing the verb to PUT.
Any help would be appreciated. If I do get it figured out I will report back.