
aws-elixir's Introduction

AWS clients for Elixir


🌳 With this library you can access almost all AWS services without hassle. ⚡

Features

  • A clean API separated per service. Each module corresponds to a single AWS service.
  • Support for most AWS services.
  • Configurable HTTP client and JSON parser.
  • Generated by aws-codegen using the same JSON descriptions of AWS services used to build the AWS SDK for Go V2.
  • Documentation is updated from the official AWS docs.

Usage

Here is an example of listing Amazon Kinesis streams. First, start a console with iex -S mix.

After that, type the following:

iex> client = AWS.Client.create("your-access-key-id", "your-secret-access-key", "us-east-1")
iex> {:ok, result, resp} = AWS.Kinesis.list_streams(client, %{})
iex> IO.inspect(result)
%{"HasMoreStreams" => false, "StreamNames" => []}

If you are using AWS.S3, you can upload a file with an integrity check like this:

iex> client = AWS.Client.create("your-access-key-id", "your-secret-access-key", "us-east-1")
iex> file = File.read!("./tmp/your-file.txt")
iex> md5 = :crypto.hash(:md5, file) |> Base.encode64()
iex> AWS.S3.put_object(client, "your-bucket-name", "foo/your-file-on-s3.txt",
  %{"Body" => file, "ContentMD5" => md5})

Note that you may need to specify the ContentType attribute when calling AWS.S3.put_object/4, since S3 uses it to store the MIME type of the file.
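
For example, the same upload with an explicit MIME type might look like the sketch below; the "text/plain" value is only an illustration, use whatever matches your file:

iex> AWS.S3.put_object(client, "your-bucket-name", "foo/your-file-on-s3.txt",
  %{"Body" => file, "ContentMD5" => md5, "ContentType" => "text/plain"})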

You can also upload to S3 using multipart uploads. This strategy is recommended if you're facing timeout issues with large files:

client = AWS.Client.create("your-access-key-id", "your-secret-access-key", "us-east-1")
bucket = "your-bucket-name"
filename = "./your-big-file.wav"
# AWS minimum chunk size is 5MB
chunk_size = 5_242_880

# Create the multipart request
{:ok,
 %{
   "InitiateMultipartUploadResult" => %{
     "UploadId" => upload_id
   }
 }, _} = AWS.S3.create_multipart_upload(client, bucket, filename, %{})

file = File.read!(filename)

# Send the file's binary in parts
parts =
  file
  |> String.codepoints()
  |> Stream.chunk_every(chunk_size)
  |> Stream.with_index(1)
  |> Enum.map(fn {chunk, i} ->
    chunk = Enum.join(chunk)

    {:ok, nil, %{headers: headers, status_code: 200}} =
      AWS.S3.upload_part(client, bucket, filename, %{
        "Body" => chunk,
        "PartNumber" => i,
        "UploadId" => upload_id
      })

    {_, etag} = Enum.find(headers, fn {header, _} -> header == "ETag" end)

    %{"ETag" => etag, "PartNumber" => i}
  end)


input = %{"CompleteMultipartUpload" => %{"Part" => parts}, "UploadId" => upload_id}

# Complete the multipart request
AWS.S3.complete_multipart_upload(client, bucket, filename, input)

You can also list objects in a bucket:

# create the client just like the example above
iex> AWS.S3.list_objects_v2(client, "bucket-name-here")

And download a specific object:

# create the client just like the example above
# object key is the "file path" in the S3 bucket
iex> AWS.S3.get_object(client, "bucket-name-here", "object-key-here")
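
To save the downloaded object to disk, here is a minimal sketch, assuming the third tuple element is the raw HTTP response map (status_code/headers/body) shown in other examples on this page; the local file name is a placeholder:

# create the client just like the example above
iex> {:ok, _result, %{body: body, status_code: 200}} =
  AWS.S3.get_object(client, "bucket-name-here", "object-key-here")
iex> File.write!("local-file-name-here", body)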

Check all S3-related functions in the AWS.S3 docs at https://hexdocs.pm/aws/AWS.S3.html.

Remember to check the operation documentation for details: https://docs.aws.amazon.com/

Installation

Add :aws to your list of dependencies in mix.exs. It also requires hackney for the default HTTP client; optionally, you can implement your own (check the AWS.Client docs).

def deps do
  [
    {:aws, "~> 1.0.0"},
    {:hackney, "~> 1.18"}
  ]
end

Run mix deps.get to install.
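
The default HTTP client and JSON parser can be swapped out (see Features above). A minimal sketch, where MyApp.HTTPClient and MyApp.JSONCodec are hypothetical modules implementing the same callbacks as the defaults (check the AWS.Client docs for the exact behaviours):

# MyApp.HTTPClient and MyApp.JSONCodec are hypothetical adapters that mirror
# the default AWS.HTTPClient and AWS.JSON modules.
client =
  AWS.Client.create("your-access-key-id", "your-secret-access-key", "us-east-1")
  |> Map.put(:http_client, {MyApp.HTTPClient, []})
  |> Map.put(:json_module, {MyApp.JSONCodec, []})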

Development

Most of the code is generated using the aws-codegen library from the JSON descriptions of AWS services provided by Amazon. The generated modules can be found in lib/aws/generated.

Code outside lib/aws/generated is manually written and used as support for the generated code.

Documentation

Online: https://hexdocs.pm/aws/

Local

  • Run mix docs
  • Open docs/index.html

Note: Arguments, errors and response structure can be found by viewing the model schemas used to generate this module at aws-sdk-go/models/apis/<aws-service>/<version>/.

An example is aws-sdk-go/models/apis/rekognition/2016-06-27/api-2.json. Alternatively you can access the documentation for the service you want at AWS docs page.

Tests

$ mix test

Release

  • Make sure the CHANGELOG.md is up-to-date and reflects the changes for the new version.
  • Bump the version here in the README.md and in mix.exs.
  • Run git tag v$VERSION to tag the version that was just published.
  • Run git push --tags origin master to push tags to GitHub.
  • Run mix hex.publish to publish the new version.

Copyright & License

Copyright (c) 2015 Jamshed Kakar [email protected]

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


aws-elixir's Issues

Thoughts on making dependencies adaptable?

So, Poison and HTTPoison are dependencies - any thoughts on making those adaptable with behaviours? I noticed that the library still uses a pre-1.0 version of httpoison, and I couldn't help but wonder if the library really needs to be dependent on it at all.

There are a lot of options for JSON decoders: Poison, Jason, Jiffy, etc. They all have basically the same interface, so making that adaptable could be as simple as plugging in a module name as part of the client configuration.

I think it'd be simple enough to do the same for HTTP functionality - a bit more complicated to write an adapter, though. But probably still fairly straightforward with everything being generated.

The biggest issue would be that all HTTP requests return the HTTPoison structs, rather than say, an AWS.Response struct - changing that means a breaking change. That's... a harder pill to swallow.
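
A minimal sketch of such a JSON adapter, assuming Jason is available; MyApp.JSONCodec and the callback names are made up for illustration, and the exact callbacks aws-elixir expects should be checked against the AWS.Client docs:

defmodule MyApp.JSONCodec do
  # Hypothetical adapter: same basic encode/decode interface the issue
  # describes, delegating to Jason.
  def encode!(term), do: Jason.encode!(term)
  def decode!(binary), do: Jason.decode!(binary)
end

# The client structs quoted in later issues on this page already carry
# json_module/http_client tuples, so plugging the module in could look like:
client = %{AWS.Client.create("key", "secret", "us-east-1") | json_module: {MyApp.JSONCodec, []}}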

Error when providing multiple querystring parameters

Description

When providing multiple querystring parameters (e.g. to AWS.S3.list_objects/8) the response from AWS is the error The request signature we calculated does not match the signature you provided. Check your key and signing method..

How to reproduce

Run the following with valid values for bucket and prefix:

AWS.S3.list_objects(AWS.Client.create, "bucket", "/", nil, nil, nil, "prefix")

The result is an error specifying that The request signature we calculated does not match the signature you provided. Check your key and signing method..

Root cause

The querystring parameters are sorted in reverse order. The request sends ?prefix=prefix&delimiter=%2F, while the expected querystring, as reported by the error returned from AWS, is ?delimiter=%2F&prefix=prefix.
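
For reference, Signature Version 4 canonical requests expect query parameters sorted by name in ascending order. A minimal sketch of the expected ordering (this is not the library's actual fix):

query = [{"prefix", "prefix"}, {"delimiter", "/"}]

query
|> Enum.sort_by(fn {name, _value} -> name end)
|> URI.encode_query()
#=> "delimiter=%2F&prefix=prefix"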

no function clause matching in AWS.Request.build_params/2

Hello! Thank you for this lib. I was looking to migrate over from ex_aws but had problems getting S3 working. It wasn't available in 0.6.0, so I used the main branch and am getting this error:

** (FunctionClauseError) no function clause matching in AWS.Request.build_params/2
   (aws 0.6.0) lib/aws/request.ex:85: AWS.Request.build_params([{"ACL", "x-amz-acl"}, {"CacheControl", "Cache-Control"}, {"ContentDisposition", "Content-Disposition"}, {"ContentEncoding", "Content-Encoding"}, {"ContentLanguage", "Content-Language"}, {"ContentLength", "Content-Length"}, {"ContentMD5", "Content-MD5"}, {"ContentType", "Content-Type"}, {"Expires", "Expires"}, {"GrantFullControl", "x-amz-grant-full-control"}, {"GrantRead", "x-amz-grant-read"}, {"GrantReadACP", "x-amz-grant-read-acp"}, {"GrantWriteACP", "x-amz-grant-write-acp"}, {"ObjectLockLegalHoldStatus", "x-amz-object-lock-legal-hold"}, {"ObjectLockMode", "x-amz-object-lock-mode"}, {"ObjectLockRetainUntilDate", "x-amz-object-lock-retain-until-date"}, {"RequestPayer", "x-amz-request-payer"}, {"SSECustomerAlgorithm", "x-amz-server-side-encryption-customer-algorithm"}, {"SSECustomerKey", "x-amz-server-side-encryption-customer-key"}, {"SSECustomerKeyMD5", "x-amz-server-side-encryption-customer-key-MD5"}, {"SSEKMSEncryptionContext", "x-amz-server-side-encryption-context"}, {"SSEKMSKeyId", "x-amz-server-side-encryption-aws-kms-key-id"}, {"ServerSideEncryption", "x-amz-server-side-encryption"}, {"StorageClass", "x-amz-storage-class"}, {"Tagging", "x-amz-tagging"}, {"WebsiteRedirectLocation", "x-amz-website-redirect-location"}], <<255, 216, 255, 224, 0, 16, 74, 70, 73, 70, 0, 1, 1, 1, 0, 75, 0, 75, 0, 0, 255, 219, 0, 67, 0, 16, 11, 12, 14, 12, 10, 16, 14, 13, 14, 18, 17, 16, 19, 24, 40, 26, 24, 22, 22, 24, 49, 35, 37, 29, ...>>)
    (aws 0.6.0) lib/aws/s3.ex:4515: AWS.S3.put_object/5

Let me know if you need anything else. Thank you!

`AWS.S3.put_object` doesn't work with mp3 file

I've tried to put an mp3 file with code as shown in the example, so something like:

iex> client = AWS.Client.create(..., ..., ...)
iex> file =  File.read!("path_to_mp3_file")
iex> md5 = :crypto.hash(:md5, file) |> Base.encode64()
iex> AWS.S3.put_object(AWSUtils.client(), bucket, path, %{
      "Body" => file,
      "ContentMD5" => md5
    })

The response was with status code 200:

%{
   body: "",
   headers: [...],
   status_code: 200
 }}

However, when I open the expected URL, I get an error from S3:

This page contains the following errors:
error on line 1 at column 1: Document is empty
error on line 1 at column 1: Encoding error

P.S. I tried to use ExAws.S3.put_object and everything worked fine.
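
One thing that may be worth trying, per the ContentType note in the README section above: pass an explicit MIME type for the mp3. A sketch, where the "audio/mpeg" value and the bucket/path variables are placeholders:

iex> AWS.S3.put_object(client, bucket, path, %{
      "Body" => file,
      "ContentMD5" => md5,
      "ContentType" => "audio/mpeg"
    })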

AWS.SQS.receive_message => bad character error

Good lib! Thanks for that.

I've got a problem and am wondering how to go about debugging it (for all I know this could be on AWS' side).

  def client() do
    AWS.Client.create(@access_key_id, @secret_access_key, @region)
  end

  def receive_message(queue_url \\ @queue_url) do
    client()
    |> AWS.SQS.receive_message(%{"QueueUrl" => queue_url}, %{})
  end

Calling AwsUtils.Worker.receive_message() works most of the time but, from time to time, I get this error:

12:16:31.250 [error] 3432- fatal: {:error, {:wfc_Legal_Character, {:error, {:bad_character, 339}}}}

 
** (exit) {:fatal, {{:error, {:wfc_Legal_Character, {:error, {:bad_character, 339}}}}, {:file, :file_name_unknown}, {:line, 1}, {:col, 13520}}}
    (xmerl 1.3.28) xmerl_scan.erl:4127: :xmerl_scan.fatal/2
    (xmerl 1.3.28) xmerl_scan.erl:2721: :xmerl_scan.scan_char_data/5
    (xmerl 1.3.28) xmerl_scan.erl:2633: :xmerl_scan.scan_content/11
    (xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12
    (xmerl 1.3.28) xmerl_scan.erl:2608: :xmerl_scan.scan_content/11
    (xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12
    (xmerl 1.3.28) xmerl_scan.erl:2608: :xmerl_scan.scan_content/11
    (xmerl 1.3.28) xmerl_scan.erl:2136: :xmerl_scan.scan_element/12

Has anyone ever seen this?

Thanks!

NXDOMAIN - endpoint

Hi,

I am having issues connecting to Kinesis. It looks like the default endpoint is wrong.
The same problem existed in the Boto library; amazonaws.com is wrong.

But I can't find a way to override this endpoint at client initialisation.

client = %AWS.Client{access_key_id: "secret",
                     secret_access_key: "secret",
                     region: "eu-west-1",
                     endpoint: "kinesis.eu-west-1.amazonaws.com"}


AWS.Kinesis.list_streams(client, %{})

Returns:

{:error, %HTTPoison.Error{id: nil, reason: :nxdomain}}

Best,
Tomaz
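
One thing to check, based on the get_host/2 code quoted in the Mechanical Turk sandbox issue further down this page: the host is built as "<prefix>.<region>.<endpoint>", so the endpoint field is expected to hold the bare suffix rather than the full Kinesis host. A sketch of that assumption:

# Sketch only: with endpoint set to the bare suffix, the library composes
# "kinesis.eu-west-1.amazonaws.com" itself from the service prefix and region.
client = %AWS.Client{access_key_id: "secret",
                     secret_access_key: "secret",
                     region: "eu-west-1",
                     endpoint: "amazonaws.com"}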

AWS.MediaLive.list_channels/1 returns a status_code: 403 response

To reproduce:

client = AWS.Client.create(...)
client |> AWS.MediaLive.list_channels()

This is because lib/aws/request.ex always sends a payload, even for :get requests:

    payload =
      if send_body_as_binary? do
        Map.fetch!(input, "Body")
      else
        encode!(client, metadata.protocol, input)
      end

This causes an empty JSON object {} to be sent in the HTTP request body, which gets rejected by the AWS API with 403.
Changing the payload to empty string for :get requests fixes the issue.

Happy to raise a PR to fix this, just not sure exactly where in the code would be most suited to add this condition.
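
A minimal sketch of the suggested condition, assuming the method variable is in scope at that point (not necessarily where the check would end up in lib/aws/request.ex):

# Send an empty payload for :get requests instead of an encoded "{}".
payload =
  cond do
    send_body_as_binary? -> Map.fetch!(input, "Body")
    method == :get -> ""
    true -> encode!(client, metadata.protocol, input)
  end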

Implement automatic retries in aws-elixir

Description

In aws-erlang, a colleague and I implemented automatic retries as part of aws-beam/aws-erlang#57. This may be a feature of interest for aws-elixir as well, but since I'm not super-duper-familiar with this codebase (nor with Elixir, for that matter), I'm opening an issue instead in case anybody else feels like picking this up :-) It could be a nice beginner task.

Criteria

  • Implement retry mechanism in aws-elixir using exponential backoff with jitter.
  • Ensure backwards compatibility, if no retry opts are passed, no retry mechanism should be applied.
  • Don't make the same mistake we made in onno-vos-dev/aws-erlang@5f4f9d9 and ensure all error cases are correctly handled. 😅

Example implementation: onno-vos-dev/aws-erlang@7078533
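
A minimal, library-agnostic sketch of exponential backoff with full jitter; the function name and the max_retries/base_delay_ms defaults are made up for illustration:

defmodule RetrySketch do
  # Retries fun.() up to max_retries times, sleeping a random amount of time
  # up to base * 2^attempt milliseconds between attempts (full jitter).
  def with_retries(fun, max_retries \\ 3, base_delay_ms \\ 100) do
    do_retry(fun, 0, max_retries, base_delay_ms)
  end

  defp do_retry(fun, attempt, max_retries, base) do
    case fun.() do
      {:error, _reason} = _error when attempt < max_retries ->
        delay = :rand.uniform(base * Integer.pow(2, attempt))
        Process.sleep(delay)
        do_retry(fun, attempt + 1, max_retries, base)

      result ->
        result
    end
  end
end

# Usage sketch:
# RetrySketch.with_retries(fn -> AWS.Kinesis.list_streams(client, %{}) end)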

Can't specify timestream query endpoint

I'm attempting to use this library to send queries to Timestream. Timestream has a bit of a quirk in that queries against it must be executed on a separate endpoint that is looked up via a describe API:

https://docs.aws.amazon.com/timestream/latest/developerguide/Using-API.endpoint-discovery.describe-endpoints.implementation.html

Now, the AWS.TimestreamQuery.describe_endpoints method does exist and I can pull the right info from there, but I can't quite figure out how to insert that back into the client for use in the actual query request. The crux of the issue seems to be AWS.Request.build_host which performs one of two functions:

  1. If the region is "local" take whatever endpoint is set in the client.
  2. If the region is anything else combine a prefix taken from some generated metadata with the endpoint specified in the client.

Neither of these works for me. The metadata generated by AWS.TimestreamQuery doesn't refer to the discovered endpoint (which I need to provide) but rather to a fixed point in AWS.

Is there a good way around this? Or, alternatively, a good path forward that I can go implement? Perhaps something that could be added to the supplied options to give me an out to specify an alternative endpoint?

AWS.S3.complete_multipart_upload(client, bucket, key, %{}) returns an error

Hello folks,

Nice lib! :-)

I got an error when trying the S3 multipart upload.

The code:

# ...
{:ok, %{
   "InitiateMultipartUploadResult" => %{
     "UploadId" => key
    }}, _} = AWS.S3.create_multipart_upload(client, bucket, path, %{})

    Stream.concat([head], rest)
      |> Stream.chunk_every(chunk_size)
      |> Enum.map(fn chunk ->
      chunk_s =
        chunk
        |> Enum.join("\r\n")
      size = :erlang.byte_size(chunk_s)
      IO.inspect( size / @megabyte)
      AWS.S3.upload_part(client, bucket, key, %{"Body" => chunk_s, "ContentMD5" => :crypto.hash(:md5, chunk_s) |> Base.encode64()})
      |> IO.inspect()
    end)

    AWS.S3.complete_multipart_upload(client, bucket, key, %{})

error

{:error,
 {:unexpected_response,
  %{
    body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><Method>POST</Method><ResourceType>OBJECT</ResourceType><RequestId>EDCQXBK7XAH5FRKH</RequestId><HostId>bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo=</HostId></Error>",
    headers: [
      {"x-amz-request-id", "EDCQXBK7XAH5FRKH"},
      {"x-amz-id-2",
       "bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo="},
      {"allow", "HEAD, DELETE, GET, PUT"},
      {"content-type", "application/xml"},
      {"transfer-encoding", "chunked"},
      {"date", "Mon, 26 Jul 2021 21:52:17 GMT"},
      {"server", "AmazonS3"}
    ],
    status_code: 405
  }}}
** (Protocol.UndefinedError) protocol Enumerable not implemented for {:error, {:unexpected_response, %{body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<Error><Code>MethodNotAllowed</Code><Message>The specified method is not allowed against this resource.</Message><Method>POST</Method><ResourceType>OBJECT</ResourceType><RequestId>EDCQXBK7XAH5FRKH</RequestId><HostId>bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo=</HostId></Error>", headers: [{"x-amz-request-id", "EDCQXBK7XAH5FRKH"}, {"x-amz-id-2", "bD0jm08lboSt4yjEOsNxXTrxGb/i3rBdNATN+uD+mgTZ+F0w64aFddXj6Vthcrc4ClD8DBX0buo="}, {"allow", "HEAD, DELETE, GET, PUT"}, {"content-type", "application/xml"}, {"transfer-encoding", "chunked"}, {"date", "Mon, 26 Jul 2021 21:52:17 GMT"}, {"server", "AmazonS3"}], status_code: 405}}} of type Tuple. This protocol is implemented for the following type(s): Function, MapSet, List, Stream, HashDict, GenEvent.Stream, Map, Date.Range, Range, File.Stream, IO.Stream, HashSet
    (elixir 1.12.0) lib/enum.ex:1: Enumerable.impl_for!/1
    (elixir 1.12.0) lib/enum.ex:141: Enumerable.reduce/3
    (elixir 1.12.0) lib/stream.ex:649: Stream.run/1

Am I missing some gotchas here?
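
For comparison, the multipart example in the README section at the top of this page passes "PartNumber" and "UploadId" to upload_part and then completes with the collected parts. A condensed sketch of that shape, where all variables are placeholders:

# Condensed from the README example above: the etags and part numbers must be
# collected from the upload_part responses, and upload_id comes from
# create_multipart_upload.
input = %{
  "CompleteMultipartUpload" => %{"Part" => parts},
  "UploadId" => upload_id
}

AWS.S3.complete_multipart_upload(client, bucket, key, input)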

I can't see the documentation

Where can I find the online documentation for it?

Thank You,

Tiago.

MIX_ENV=docs mix docs
** (Mix.Config.LoadError) could not load config config/docs.exs

** (Mix) The task "docs" could not be found. Did you mean "do"?

Presigning S3 get object URI

Hey, I was trying to use the lib to return a presigned URL to access a file inside an S3 bucket.

I was able to accomplish the task by following these docs: https://docs.aws.amazon.com/AmazonS3/latest/API/sigv4-query-string-auth.html#query-string-auth-v4-signing-example

However, I couldn't use AWS.Signature.sign_v4_query/6 because it always tries to hash the payload, and for requests without a payload, or when you don't know which payload will be used, you have to include UNSIGNED-PAYLOAD in the canonical request instead.

Here's how I did it https://gist.github.com/ceolinrenato/cc7f036ef7867c4ccd08ddfd932d1520

IoT - shadow error

Hi,

I am trying to get through the configuration of this package, to the point where I am able to get and update a device's shadow through the REST API. For easier development, I gave myself a permission with access to all iot:* resources.

In my config/dev.exs I have a configuration like the one in the README:

iex> client = %AWS.Client{access_key_id: "<access-key-id>",
                     secret_access_key: "<secret-access-key>",
                     region: "us-east-1",
                     endpoint: "amazonaws.com"}

Then, if I run the code to initialize this client, I get back a response like:

%AWS.Client{access_key_id: "<secret>", endpoint: "amazonaws.com",
 port: "443", proto: "https", region: "eu-west-1",
 secret_access_key: "<secret>", service: nil}

I can even issue a call to list all the things. I get back the result with all the things listed.

BUT: when I want to get the shadow of one of the devices, I get back an error.

I call the shadow with

AWS.IoT.DataPlane.get_thing_shadow(Shadow.Client.init_client(), "<thingName>")

init_client is just my function... nothing special.
And I get back {:error, "Not Found"} though if I go and search it over the web console I can see it there.

Specifying queue name in SQS send_message

How can I specify my queue name in the URL of the request?

I couldn't find the parameter that accepts the queue name in the method.

https://hexdocs.pm/aws/AWS.SQS.html#send_message/3
send_message(client, input, options \\ [])

    client = AWS.Client.create("access_key", "client_secret", "us-east-1")

    msg = %{
      "MessageBody" => Jason.encode!(%{some_key: "message"})
    }

    # Where to put queue_name?
    client |> AWS.SQS.send_message(msg)

It's sending a POST to https://sqs.us-east-1.amazonaws.com:443/. Instead we need to send it to the queue URL:
https://sqs.us-east-1.amazonaws.com:443/account_number/queue_name
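
A hedged sketch, going by the receive_message example in an earlier issue on this page, which passes "QueueUrl" inside the input map; the same presumably applies to send_message:

# Sketch only: "QueueUrl" in the input map, mirroring the receive_message
# example earlier on this page.
msg = %{
  "QueueUrl" => "https://sqs.us-east-1.amazonaws.com/account_number/queue_name",
  "MessageBody" => Jason.encode!(%{some_key: "message"})
}

client |> AWS.SQS.send_message(msg)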

AWS.Lambda.invoke_async 403 with signature mismatch

When we try to invoke a Lambda, we get a 403 error:

client = %AWS.Client{
  access_key_id: Keyword.fetch!(config, :access_key_id),
  secret_access_key: Keyword.fetch!(config, :secret_access_key),
  region: Keyword.fetch!(config, :region)
}
    
function_name = "arn:aws:lambda:ca-central-1:3456789:function:foo_bar"

AWS.Lambda.invoke_async(client, function_name, %{foo: "bar"})
{:error,
 {:unexpected_response,
  %{
    body: "{\"message\":\"The request signature we calculated does not match the signature you provided.
      Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\"}",
    headers: [
      {"Date", "Tue, 15 Jun 2021 06:08:39 GMT"},
      {"Content-Type", "application/json"},
      {"Content-Length", "192"},
      {"Connection", "keep-alive"},
      {"x-amzn-RequestId", "a1e8e46f-8024-40e9-9c48-5091cb6c4ea1"},
      {"x-amzn-ErrorType", "InvalidSignatureException"}
    ],
    status_code: 403
  }}}

I can't understand what I could possibly be messing up here, since there is not a lot of complexity in the invoke_async/3 call.

Any ideas?

Update hex package, which currently points to jkakar/aws-elixir

The title should be self-explanatory, but when running mix deps.get on a project that uses :aws ~> 0.5.0 as a dependency, the old jkakar/aws-elixir is fetched instead of aws-beam/aws-elixir.

This may cause dependency conflicts. For instance, I need httpoison ~> 1.5, but jkakar/aws-elixir specifies older dependency versions (in this case httpoison ~> 0.11.1).

Release 0.8.0?

There are quite a few changes since the last release, notably the fix for #71 which is required to support OTP 24. Could we have a new release published soon, please?

Wrong parameter name in Mturk error messages

The error messages in MTurk are always nil. Looking at the actual HTTPoison response from create_additional_assignments_for_h_i_t/3:

%HTTPoison.Response{
    body:
      "{\"__type\":\"ParameterValidationError\",\"Message\":\"The value 0 is invalid for MaxAssignmentIncrement. Valid values range from 1 to 1000000000. (1523017679385 s)\",\"Parameter\":\"MaxAssignmentIncrement\",\"TurkErrorCode\":\"AWS.ParameterOutOfRange\"}",
    headers: [
      {"x-amzn-RequestId", "fd299fbe-5dc7-4f08-8ed3-1c79b545725a"},
      {"Content-Type", "application/x-amz-json-1.1"},
      {"Content-Length", "238"},
      {"Date", "Fri, 06 Apr 2018 12:27:59 GMT"},
      {"Cneonction", "close"}
    ],
    status_code: 400
  }

and looking at the code it seems that the error message field is wrong since it should start with a capital:

{:ok, response=%HTTPoison.Response{body: body}} ->
        error = Poison.Parser.parse!(body)
        exception = error["__type"]
        message = error["message"]
        {:error, {exception, message}}

It should be message = error["Message"] instead.

Access calls for keywords expect the key to be an atom, got: "X-Amz-Function-Error"

I'm on Elixir 1.6.4, using 0.5.0 of AWS.

I'm trying to use the Lambda portion of this library, and on invoking my function, I'm getting the following error:

Access calls for keywords expect the key to be an atom, got: "X-Amz-Function-Error"

It appears to be happening on line 299 here:

  case request(client, :post, url, headers, input, options, nil) do
    {:ok, body, response} ->
      if !is_nil(response.headers["X-Amz-Function-Error"]) do
        body = %{body | "FunctionError" => response.headers["X-Amz-Function-Error"]}
      end

      if !is_nil(response.headers["X-Amz-Log-Result"]) do
        body = %{body | "LogResult" => response.headers["X-Amz-Log-Result"]}
      end

      {:ok, body, response}

    result ->
      result
  end
end

Headers are coming back like this:

[
    {"Date", "Fri, 20 Apr 2018 01:12:50 GMT"},
    {"Content-Type", "application/json"},
    {"Content-Length", "6403"},
    {"Connection", "keep-alive"},
    {"x-amzn-RequestId", "f13a4d8f-4437-11e8-934b-0d4ffde93ad3"},
    {"x-amzn-Remapped-Content-Length", "0"},
    {"X-Amz-Executed-Version", "$LATEST"},
    {"X-Amzn-Trace-Id", "root=1-5ad93e91-a1770ed8f4eed468e857e018;sampled=0"}
  ]

Access expects the first item in each tuple to be an atom and won't work with strings.

The function is being invoked correctly, and the results I expect are coming back from the invocation, but then the function is dying on that error.

Let me know if I can provide any more information/help.
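
Since the headers come back as a list of string tuples, a plain List.keyfind works where Access does not. A minimal sketch, independent of how the generated code is eventually fixed (note also that rebinding body inside an if, as in the quoted code, does not escape the if in Elixir):

# Headers are a list of {string, string} tuples, so look them up directly.
function_error =
  case List.keyfind(response.headers, "X-Amz-Function-Error", 0) do
    {_, value} -> value
    nil -> nil
  end

# Map.put avoids relying on the key already existing in the decoded body.
body = if function_error, do: Map.put(body, "FunctionError", function_error), else: body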

Unknown error on AWS IoT

Hi! I'm a newbie with Elixir, so maybe it's an easy problem ^^ I'm trying to create a policy in IoT Core.
I'm trying to use the function create_policy(client, "policyName", input), but I can't find an example showing how to write input correctly. I get {:error, nil}, and apart from the input format I don't know what the problem could be, because get_policy(client, "policyName") works.

So if someone could send me an example of the create_policy function, that would be great!

Thanks and have a nice day.
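
A heavily hedged sketch, going by the AWS IoT CreatePolicy API, which takes the policy document as a JSON string; the "policyDocument" key and the example statement should be verified against the generated AWS.IoT docs:

# Sketch only: "policyDocument" is the field name used by the AWS IoT
# CreatePolicy REST API; verify against the generated AWS.IoT docs.
policy_document =
  Jason.encode!(%{
    "Version" => "2012-10-17",
    "Statement" => [
      %{"Effect" => "Allow", "Action" => "iot:Connect", "Resource" => "*"}
    ]
  })

AWS.IoT.create_policy(client, "policyName", %{"policyDocument" => policy_document})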

Any HEAD requests such as `AWS.S3.head_object/4` fails

The AWS.HTTPClient logic doesn't handle HEAD HTTP calls (in this case using AWS.S3.head_object/4):

** (exit) an exception was raised:
    ** (CaseClauseError) no case clause matching: {:ok, 200, [REDACTED]}
        (aws 0.7.0) lib/aws/request.ex:98: AWS.Request.request_rest/9

This is due to this logic in AWS.HTTPClient:

    case :hackney.request(method, url, headers, body, options) do
      {:ok, status_code, response_headers, body} ->
        {:ok, %{status_code: status_code, headers: response_headers, body: body}}

      error ->
        error
    end

A better approach would be to explicitly listen for the error tuples to prevent unexpected return values:

    case :hackney.request(method, url, headers, body, options) do
      {:ok, status_code, response_headers, body} ->
        {:ok, %{status_code: status_code, headers: response_headers, body: body}}

      {:error, reason} ->
        {:error, reason}
    end

The spec for the HTTP client expects the body to be a binary, but I think either the body should be optional, or it should permit a nil value:

    case :hackney.request(method, url, headers, body, options) do
      {:ok, status_code, response_headers, body} ->
        {:ok, %{status_code: status_code, headers: response_headers, body: body}}

      {:ok, status_code, response_headers} ->
        {:ok, %{status_code: status_code, headers: response_headers}}
        # {:ok, %{status_code: status_code, headers: response_headers, body: nil}}

      {:error, reason} ->
        {:error, reason}
    end

Potential breaking changes not documented on CHANGELOG.md

Hi all, just wanted to raise an issue where a potential breaking change is not documented in CHANGELOG.md.

In short, we found out that the error response of v0.5.0 is different from that of v0.9.2 when attempting to update our dependencies:

# v0.5.0
{:error, {exception, message}}

# v0.9.2
{:error, {:unexpected_response, response}}

After some deep diving, here are the related file changes:

It seems like the change happened after v0.7.0, according to this commit.

Am I right that this is a breaking change? If so, would it be good for us to update the CHANGELOG.md to document it?

Happy to send a PR in, and thanks for the amazing work as well 🎉

AWS.ApiGatewayV2.import_api and reimport_api may not work as expected

Hello

I think I've found an issue with the way the APIGatewayV2 import/reimport code has been generated (tested with the current hex release 0.8.0):

I expect that AWS.ApiGatewayV2.reimport_api(client, rest_api_id, Jason.encode!(openapi_spec)) would import the openapi spec.
However, AWS returns an HTTP 415 (unsupported media type). I think the issue is related to the enforced use of send_body_as_binary?

I am able to successfully call the API if I disable send_body_as_binary? and change the expected return code to 200. However, in this case one has to provide a somewhat unnatural input, see below:

AWS.ApiGatewayV2.reimport_api(client, rest_api_id, %{"body" => Jason.encode!(openapi_spec)})

Note: in this case body doesn't start with a capital B (see https://docs.aws.amazon.com/apigatewayv2/latest/api-reference/apis-apiid.html#apis-apiid-schemas).

Interestingly, importing an OpenAPI spec works using the V1 API, where the generated code doesn't use send_body_as_binary?.

Please let me know if I can assist you in any way.

Best,
Andre

Error when I try to reach MTurk

Hey people,

I am very happy with your package, but something weird happens when I try to reach MTurk:

iex(4)> AWS.MTurk.get_account_balance(client, %{})
** (ArgumentError) argument error
    (crypto 4.8.2) crypto.erl:932: :crypto.hmac/3
    (aws 0.7.0) lib/aws/request.ex:438: AWS.Request.Internal.signing_key/4
    (aws 0.7.0) lib/aws/request.ex:232: AWS.Request.sign_v4/6
    (aws 0.7.0) lib/aws/request.ex:40: AWS.Request.request_post/5

If something is wrong with roles or authentication, I've seen different errors. This one is caused by Erlang's crypto module, and I haven't got the foggiest idea of what I have done wrong. I am using plug_crypto 1.2.2.
Could you please help me out here?

thanks!

Cspr
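
For context, :crypto.hmac/3 was removed in OTP 24 in favour of :crypto.mac/4 (available since OTP 22.1). Whether that is what's biting here with crypto 4.8.2 is not certain, but a comparison sketch:

key = "secret-key"
data = "string-to-sign"

# :crypto.hmac(:sha256, key, data)      # old API, removed in OTP 24
:crypto.mac(:hmac, :sha256, key, data)  # replacement, available since OTP 22.1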

0.5.1? Is New Tag for Session Token coming?

Hi, I see the session_token code on master and I've confirmed that it works, and that I'm unable to use the client without this token.

Is an updated tag available for this?

Support Mechanical Turk sandbox mode

Currently the endpoint prefix is hardcoded for mturk:

host = get_host("mturk-requester", client)
...
defp get_host(endpoint_prefix, client) do
    if client.region == "local" do
      "localhost"
    else
      "#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
    end
end

Since there is a sandbox development endpoint, it would be nice to support the "mturk-requester-sandbox" prefix as well.

The following lines allow passing the prefix through the options:

defp request(client, action, input, options) do
    prefix = Keyword.get(options, :endpoint_prefix, "mturk-requester")
    options = Keyword.delete(options, :endpoint_prefix)
    client = %{client | service: "mturk-requester"}
    host = get_host(prefix, client)
   ...
end

Then, one can pass the prefix like:

AWS.MechanicalTurk.list_h_i_ts(client, %{}, endpoint_prefix: "mturk-requester-sandbox")

Not sure if this is the best approach or if it would make more sense to pass this through the client though.

The code for the prefix fix: https://github.com/4knahs/aws-elixir/blob/mturk-sandbox/lib/aws/mechanical_turk.ex#L577

Support for S3 Compatible APIs

We use Linode's Object Storage, which is fully compatible with S3.
When we try to use it with aws-elixir, we get the following error:

iex(1)> client = AWS.Client.create("Access-Key", "Secret-Key", "us-east-1") |> Map.put(:endpoint, "us-east-1.linodeobjects.com") 
%AWS.Client{
  access_key_id: "access-key",
  endpoint: "us-east-1.linodeobjects.com",
  http_client: {AWS.HTTPClient, []},
  json_module: {AWS.JSON, []},
  port: 443,
  proto: "https",
  region: "us-east-1",
  secret_access_key: "secret-key",
  service: nil,
  session_token: nil,
  xml_module: {AWS.XML, []}
}

iex(2)> AWS.S3.list_buckets(client)
[info] TLS :client: In state :certify at ssl_handshake.erl:1901 generated CLIENT ALERT: Fatal - Handshake Failure
 - {:bad_cert, :hostname_check_failed}
{:error,
 {:tls_alert,
  {:handshake_failure,
   'TLS client: In state certify at ssl_handshake.erl:1901 generated CLIENT ALERT: Fatal - Handshake Failure\n {bad_cert,hostname_check_failed}'}}}

S3cmd and other tools work fine with S3-related operations with Linode's storage: Docs.

Would this issue be in scope for this project?

The request signature does not match

Hey,

Running Elixir 1.12.3, OTP 24, MacOS 11.2.1 (Big Sur)

I tried things first with v0.9.0, and later changed the deps in mix.exs to use the master branch from git directly.

I can create a client with

iex(1)>client = AWS.Client.create(System.get_env("AWS_ACCESS_KEY_ID"), System.get_env("AWS_SECRET_ACCESS_KEY"), System.get_env("AWS_SESSION_TOKEN"),  "my region")
%AWS.Client{
  access_key_id: "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  endpoint: nil,
  http_client: {AWS.HTTPClient, []},
  json_module: {AWS.JSON, []},
  port: 443,
  proto: "https",
  region: "us-east-1",
  secret_access_key: "XXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  service: nil,
  session_token: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
  xml_module: {AWS.XML, []}
}

iex(2)>AWS.S3.list_objects(client, "name of bucket")

All this works just fine. I get back the results I want. But then I want to "describe" my AWS IoT Thing by its name, and this is the most frustrating part for me: some things work and some don't, even though it looks like my client is working as it should. My first worry was that it's because I am using AWS SSO, but the token (see above) is accepted and the client works; listing S3 buckets works great.

By default endpoint should be set to "amazonaws.com". Is this something I would need to modify for further queries towards AWS IoT?

AWS.IoT.describe_thing(client, "thing name")

I get back... the error below which I don't really understand.

iex(x)>AWS.IoT.describe_thing(client, "thing name") 
{:error,
 {:unexpected_response,
  %{
    body: "{\"message\":\"The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method. Consult the service documentation for details.\"}",
    headers: [
      {"Date", "Tue, 19 Oct 2021 12:34:06 GMT"},
      {"Content-Type", "application/json"},
      {"Content-Length", "192"},
      {"Connection", "close"},
      {"x-amzn-RequestId", "1f67a902-3ca7-4aa1-XXXX-ceXXXXXXXbf6"},
      {"Access-Control-Allow-Origin", "*"},
      {"x-amzn-ErrorType", "InvalidSignatureException"},
      {"x-amz-apigw-id", "HdIx2FRxFvTg="},
      {"X-Amzn-Trace-Id", "Root=1-616ebb3e-6badXXXXXXXXXXXX4b"}
    ],
    status_code: 403
  }}}

I saw a few somewhat similar issues, but none of them really helps.
I would really appreciate some direction/help here on how to approach this or what I am doing wrong.

Thanks in advance.

Best

S3.get_object with SSE-C signature error

When I try to get an object encrypted with SSE-C using AWS.S3.get_object/22, I get a mismatched signature error.

Code:

def get_object(client, bucket, filename, encryption_key, opts \\ []) do
  sse_customer_algorithm = "AES256"
  sse_customer_key = Base.encode64(encryption_key)
  sse_customer_key_md5 = Crypto.hash(encryption_key, :md5) |> Base.encode64()

  range =
    case opts[:range] do
      nil -> nil
      {from, to} -> "bytes=#{from}-#{to}"
    end

  AWS.S3.get_object(
    client,
    bucket,
    filename,
    _part_number = nil,
    _response_cache_control = nil,
    _response_content_disposition = nil,
    _response_content_encoding = nil,
    _response_content_language = nil,
    _response_content_type = nil,
    _response_expires = nil,
    _version_id = nil,
    _expected_bucket_owner = nil,
    _if_match = nil,
    _if_modified_since = nil,
    _if_none_match = nil,
    _if_unmodified_since = nil,
    range,
    _request_payer = nil,
    sse_customer_algorithm,
    sse_customer_key,
    sse_customer_key_md5,
    _options = []
  )
end

Response:

<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SignatureDoesNotMatch</Code>
<Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
<AWSAccessKeyId>***redacted***</AWSAccessKeyId>
<StringToSign>AWS4-HMAC-SHA256
20210403T192431Z
20210403/ca-central-1/s3/aws4_request
***redacted***</StringToSign>
<SignatureProvided>***redacted***</SignatureProvided>
<StringToSignBytes>***redacted***</StringToSignBytes>
<CanonicalRequest>GET
/my-bucket/09b6f2f2-020f-40d6-b66f-32cdb5561b95.zip

content-type:text/xml
host:s3.ca-central-1.amazonaws.com
x-amz-content-sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
x-amz-date:20210403T192431Z
x-amz-server-side-encryption-customer-algorithm:AES256
x-amz-server-side-encryption-customer-key:***redacted***
x-amz-server-side-encryption-customer-key-md5:***redacted***

content-type;host;x-amz-content-sha256;x-amz-date;x-amz-server-side-encryption-customer-algorithm;x-amz-server-side-encryption-customer-key;x-amz-server-side-encryption-customer-key-md5
e3b0c442***redacted***b855</CanonicalRequest>
<CanonicalRequestBytes>...(truncated)

I have verified that I am using the same key as the upload.

I'm stumped...

AWS.S3.get_object automatically decodes body as xml

I'm trying to download a file from S3 using AWS.S3.get_object(client, bucket, path) but the function fails with the error

** (FunctionClauseError) no function clause matching in :lists.prefix/2    
   
    The following arguments were given to :lists.prefix/2:
   
        # 1
        '<'
   
        # 2
        {:error, [],
         <<137, 80, 78, 71, 13, 10, 26, 10, 0, 0, 0, 13, 73, 72, 68, 82, 0, 0, 9, 225,
           0, 0, 6, 156, 8, 6, 0, 0, 0, 145, 146, 199, 218, 0, 0, 12, 23, 105, 67, 67,
           80, 73, 67, 67, 32, 80, 114, ...>>}
   
    (stdlib 3.13) lists.erl:192: :lists.prefix/2
    (xmerl 1.3.25) xmerl_scan.erl:3910: :xmerl_scan.scan_mandatory/5
    (xmerl 1.3.25) xmerl_scan.erl:572: :xmerl_scan.scan_document/2
    (xmerl 1.3.25) xmerl_scan.erl:291: :xmerl_scan.string/2
    (aws 0.7.0) lib/aws/xml.ex:42: AWS.XML.decode!/2
    (aws 0.7.0) lib/aws/request.ex:104: AWS.Request.request_rest/9

From what I see, the S3 module uses rest-xml as the default protocol [1], which causes the response to be interpreted as XML and decoded [2].
Is this the right way to download files, or should another function be used?

[1] https://github.com/aws-beam/aws-elixir/blob/master/lib/aws/generated/s3.ex#L16
[2] https://github.com/aws-beam/aws-elixir/blob/master/lib/aws/request.ex#L104

Format code using mix format & validate

Currently there is no .formatter.exs. It would be great to add one, format the whole project, and validate in our CI that new additions are formatted (see mix format --check-formatted).
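
For reference, the stock .formatter.exs that mix new generates would be enough to get mix format and a --check-formatted CI step going:

# .formatter.exs
[
  inputs: ["{mix,.formatter}.exs", "{config,lib,test}/**/*.{ex,exs}"]
]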

AWS Rekognition not working as per the docs

Hi team,
I was trying to follow the generated docs for the Rekognition service,
but I keep getting this error:

AWS.Rekognition.detect_text(client, "s3://s3_bucket/public/cococola/c_1001.jpg")
** (FunctionClauseError) no function clause matching in AWS.Request.encode!/3    
    
    The following arguments were given to AWS.Request.encode!/3:
    
        # 1
        #AWS.Client<
          endpoint: nil,
          http_client: {AWS.HTTPClient, []},
          json_module: {AWS.JSON, []},
          port: 443,
          proto: "https",
          region: "us-east-2",
          service: "rekognition",
          session_token: nil,
          xml_module: {AWS.XML, []},
          ...
        >
    
        # 2
        "json"
    
        # 3
        "s3://cookstro/public/cococola/c_1001.jpg"
    
    Attempted function clauses (showing 1 out of 1):
    
        defp encode!(%AWS.Client{} = client, protocol, payload) when protocol === "query" or protocol === "json" or protocol === "rest-json" or protocol === "rest-xml" and is_map(payload)
    
    (aws 0.10.0) lib/aws/request.ex:233: AWS.Request.encode!/3
    (aws 0.10.0) lib/aws/request.ex:42: AWS.Request.request_post/5
We tried with `File.read!()` as well, with no success.
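
Judging by the attempted-clause output above (the input must be a map) and the shape of the Rekognition DetectText API, the call presumably needs an Image/S3Object map rather than an s3:// string. A hedged sketch, where the bucket and key values are placeholders:

# Sketch only: DetectText takes an Image with an S3Object reference.
input = %{
  "Image" => %{
    "S3Object" => %{
      "Bucket" => "s3_bucket",
      "Name" => "public/cococola/c_1001.jpg"
    }
  }
}

AWS.Rekognition.detect_text(client, input)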

Spurious error message causing system to not come up in Mac OS on production build

Tested on multiple versions of macOS, and the same error message below is printed constantly. The strange thing is, it works fine on other OSes. It does not seem to be an issue on a dev build.

(no logger present) unexpected logger message: {log,error,"~s~n",["beam/beam_load.c(1428): Error loading module 'Elixir.AWS.DataPipeline':\n  module name in object code is Elixir.AWS.Datapipeline\n"],#{error_logger=>#{emulator=>true,tag=>error},gl=><0.0.0>,pid=><0.2049.0>,time=>1630404393472978}}

Sign form upload support

Hey,
I'm using the library to implement file upload and trying to figure out how to do form signature. In the example, they are using hand written generator for signature.

Does the library has support for something similar ?
