
AWS logo

Amazonica

A comprehensive Clojure client for the entire Amazon AWS API.


Installation

Leiningen coordinates:

[amazonica "0.3.167"]

For Maven users:

add the following repository definition to your pom.xml:

<repository>
  <id>clojars.org</id>
  <url>https://clojars.org/repo</url>
</repository>

and the following dependency:

<dependency>
  <groupId>amazonica</groupId>
  <artifactId>amazonica</artifactId>
  <version>0.3.167</version>
</dependency>

Supported Services

Documentation

Minimum Viable Snippet:

(ns com.example
  (:use [amazonica.aws.ec2]))

(describe-instances)

(create-snapshot :volume-id   "vol-8a4857fa"
                 :description "my_new_snapshot")

Amazonica reflectively delegates to the Java client library; as such, it supports the complete set of remote service calls implemented by each of the service-specific AWS client classes (e.g. AmazonEC2Client, AmazonS3Client, etc.), the documentation for which can be found in the AWS Javadocs. cljdoc function references are also available.

Reflection is used to create idiomatically named Clojure Vars in the library namespaces corresponding to the AWS service. camelCase Java methods become lower-case, hyphenated Clojure functions. So, for example, if you want to create a snapshot of a running EC2 instance, you'd simply write:

(create-snapshot :volume-id "vol-8a4857fa"
                 :description "my_new_snapshot")

which delegates to the createSnapshot() method of AmazonEC2Client. If the Java method on the Amazon*Client takes a parameter, such as CreateSnapshotRequest in this case, the bean properties exposed via mutators of the form set* can be supplied as key-value pairs passed as arguments to the Clojure function.

All of the AWS Java apis (except S3) follow this pattern, either having a single implementation method which takes an AWS Java bean as its only argument, or being overloaded and having a no-arg implementation. The corresponding Clojure function will either require key-value pairs as arguments, or be variadic and allow a no-arg invocation.

For example, AmazonEC2Client's describeImages() method is overloaded, and can be invoked either with no args, or with a DescribeImagesRequest. So the Clojure invocation would look like

(describe-images)

or

(describe-images :owners ["self"]
                 :image-ids ["ami-f00f9699" "ami-e0d30c89"])

Conversion of Returned Types

java.util.Collections are converted to the corresponding Clojure collection type. java.util.Maps are converted to clojure.lang.IPersistentMaps, java.util.Lists are converted to clojure.lang.IPersistentVectors, etc.

java.util.Dates are automatically converted to Joda Time DateTime instances.

Amazon AWS object types are returned as Clojure maps, with conversion taking place recursively, so, "Clojure data all the way down."

For example, a call to

(describe-instances)

invokes a Java method on AmazonEC2Client which returns a com.amazonaws.services.ec2.model.DescribeInstancesResult. However, this is recursively converted to Clojure data, yielding a map of Reservations, like so:

{:owner-id "676820690883",
   :group-names ["cx"],
   :groups [{:group-name "cx", :group-id "sg-38f45150"}],
   :instances
   [{:instance-type "m1.large",
     :kernel-id "aki-825ea7eb",
     :hypervisor "xen",
     :state {:name "running", :code 16},
     :ebs-optimized false,
     :public-dns-name "ec2-154-73-176-213.compute-1.amazonaws.com",
     :root-device-name "/dev/sda1",
     :virtualization-type "paravirtual",
     :root-device-type "ebs",
     :block-device-mappings
     [{:device-name "/dev/sda1",
       :ebs
       {:status "attached",
        :volume-id "vol-b0e519c3",
        :attach-time #<DateTime 2013-03-21T22:00:56.000-07:00>,
        :delete-on-termination true}}],
     :network-interfaces [],
     :public-ip-address "164.73.176.213",
     :placement
     {:availability-zone "us-east-1a",
      :group-name "",
      :tenancy "default"},
     :private-ip-address "10.116.187.19",
     :security-groups [{:group-name "cx", :group-id "sg-38f45150"}],
     :state-transition-reason "",
     :private-dns-name "ip-10-116-187-19.ec2.internal",
     :instance-id "i-cefbe7a2",
     :key-name "cxci",
     :architecture "x86_64",
     :client-token "",
     :image-id "ami-baba68d3",
     :ami-launch-index 0,
     :monitoring {:state "disabled"},
     :product-codes [],
     :launch-time #<DateTime 2013-03-21T22:00:52.000-07:00>,
     :tags [{:value "CXCI_nightly", :key "Name"}]}],
   :reservation-id "r-8a23d6f7"}

If you look at the Reservation Javadoc you'll see that getGroups() returns a java.util.List of GroupIdentifiers, which is converted to a vector of maps containing the keys :group-name and :group-id, under the :groups key. Ditto for :block-device-mappings and :tags, and so on.

Similar in concept to JSON unwrapping in Jackson, Amazonica supports root unwrapping of the returned data. So calling

; dynamodb
(list-tables)

by default would return

{:table-names ["TableOne" "TableTwo" "TableThree"]}

However, if you call

(set-root-unwrapping! true)

then single keyed top level maps will be "unwrapped" like so:

(list-tables)
=> ["TableOne" "TableTwo" "TableThree"]

The returned data can be "round tripped" as well. So the returned Clojure data structures can be supplied as arguments to function calls which delegate to Java methods taking the same object type as an argument. See the section below for more on this.
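For example, a value plucked from one call's converted result can be passed straight back into another call; a minimal sketch built on the describe-instances data shown above (the volume id is whatever your account returns):

;; round trip: pull a volume id out of one result, feed it to another call
(let [vol-id (-> (describe-instances)
                 :reservations first
                 :instances first
                 :block-device-mappings first
                 :ebs :volume-id)]
  (create-snapshot :volume-id vol-id
                   :description "snapshot from round-tripped data"))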

Argument Coercion

Coercion of any of the java.lang wrapper types happens transparently. So, for example, Clojure's preferred longs are automatically converted to ints where required.

Clojure data structures automatically participate in the Java Collections abstractions, and so no explicit coercion is necessary. Typically when service calls take collections as parameter arguments, as in the case above, the values in the collections are most often instances of the Java wrapper classes.
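For example, RunInstancesRequest's minCount and maxCount are Integers in the Java SDK, but ordinary Clojure longs can be passed (a sketch; the AMI id is illustrative):

(run-instances :image-id "ami-f00f9699"
               :min-count 1   ; longs coerced to Integer automatically
               :max-count 1)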

When complex objects consisting of types outside of those in the java.lang package are required as argument parameters, smart conversions are attempted based on the argument types of the underlying Java method signature. Methods requiring a java.util.Date argument can take Joda Time org.joda.time.base.AbstractInstants, longs, or Strings (default pattern is "yyyy-MM-dd"), with conversion happening automatically.

(set-date-format! "MM-dd-yyyy")

can be used to set the pattern supplied to the underlying java.text.SimpleDateFormat.
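For example, any of the following coerce to the java.util.Date the underlying method expects (a sketch using describe-spot-price-history, assuming the default date pattern):

(describe-spot-price-history :start-time "2013-03-01")              ; String
(describe-spot-price-history :start-time 1362096000000)             ; long, epoch millis
(describe-spot-price-history :start-time (org.joda.time.DateTime. 2013 3 1 0 0)) ; Joda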

In cases where collection arguments contain instances of AWS "model" classes, Clojure maps will be converted to the appropriate AWS Java bean instance. So for example, describeAvailabilityZones() can take a DescribeAvailabilityZonesRequest which itself has a filters property, which is a java.util.List of com.amazonaws.services.ec2.model.Filters. Passing the filters argument would look like:

(describe-availability-zones :filters [{:name   "environment"
                                        :values ["dev" "qa" "staging"]}])

and return the following Clojure collection:

{:availability-zones
 [{:state "available",
   :region-name "us-east-1",
   :zone-name "us-east-1a",
   :messages []}
  {:state "available",
   :region-name "us-east-1",
   :zone-name "us-east-1b",
   :messages []}
  {:state "available",
   :region-name "us-east-1",
   :zone-name "us-east-1c",
   :messages []}
  {:state "available",
   :region-name "us-east-1",
   :zone-name "us-east-1d",
   :messages []}
  {:state "available",
   :region-name "us-east-1",
   :zone-name "us-east-1e",
   :messages []}]}

Extension points

Clojure apis built specifically to wrap a Java client, such as this one, often provide "conveniences" for the user of the api, to remove boilerplate. In Amazonica this is accomplished via the IMarshall protocol, which defines the contract for converting the returned Java result from the AWS service call to Clojure data, and the

(amazonica.core/register-coercions)

function, which takes a map of class/function pairs defining how a value should be coerced to a specific AWS Java bean. You can find a good example of this in the amazonica.aws.dynamodb namespace. Consider the following DynamoDB service call:

(get-item :table-name "MyTable"
          :key "foo")

The GetItemRequest takes a com.amazonaws.services.dynamodb.model.Key which is composed of a hash key of type com.amazonaws.services.dynamodb.model.AttributeValue and optional range key also of type AttributeValue. Without the coercions registered for Key and AttributeValue in amazonica.aws.dynamodb we would need to write:

(get-item :table-name "TestTable"
          :key {:hash-key-element {:s "foo"}})

Note that either form will work. This allows contributors to the library to incrementally evolve the api independently from the core of the library, as well as maintain backward compatibility of existing code written against prior versions of the library which didn't contain the conveniences.

Authentication

The default authentication scheme is to use the chained Provider class from the AWS SDK, whereby authentication is attempted in the following order:

  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
  • Java System Properties - aws.accessKeyId and aws.secretKey
  • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
  • Instance profile credentials delivered through the Amazon EC2 metadata service

Note that in order for the instance profile metadata to be found, you must have launched the instance with an IAM role; the permissions of that role will then apply.

See the AWS docs for reference.
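For example, with no explicit credentials map, keys set as Java system properties (the second link in the chain) will be picked up automatically; a minimal sketch:

(System/setProperty "aws.accessKeyId" "aws-access-key")
(System/setProperty "aws.secretKey" "aws-secret-key")

(describe-instances) ; resolved via the default provider chain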

Additionally, all of the functions may take as their first argument an optional map of credentials:

(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           :endpoint   "us-west-1"})

(describe-instances cred)

The credentials map may contain zero or one of the following: an :access-key/:secret-key pair (as shown above), or a :profile entry naming a profile from the shared credentials file.

In addition, the credentials map may contain an :endpoint entry. If the value of the :endpoint key is a lower-case, hyphenated translation of one of the Regions enums, .withRegion will be used to build the Client, otherwise .withEndpointConfiguration will be used.
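For example (the local endpoint URL is illustrative):

; a lower-case, hyphenated Regions name resolves via .withRegion
(describe-instances {:endpoint "eu-west-1"})

; any other value is treated as an endpoint URL, e.g. a local mock
(describe-instances {:endpoint "http://localhost:4566"})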

Note: The first function called (for each distinct AWS service namespace, e.g. amazonica.aws.ec2) creates an Amazon*Client, which is effectively cached via memoization. Therefore, if you explicitly pass different credentials maps to different functions, you will effectively have different Clients.

For example, to work with ec2 instances in different regions you might do something like:

(ec2/create-image {:endpoint "us-east-1"} :instance-id "i-1b9a9f71")

(ec2/create-image {:endpoint "us-west-2"} :instance-id "i-kj239d7d")

You will have created two AmazonEC2Clients, pointing to the two different regions. Likewise, if you omit the explicit credentials map then the DefaultAWSCredentialsProviderChain will be used. So in the following scenario you will again have two different Amazon*Clients:

(set-s3client-options :path-style-access true)

(create-bucket credentials "foo")

The call to set-s3client-options will use a DefaultAWSCredentialsProviderChain, while the create-bucket call will create a separate AmazonS3Client with BasicAWSCredentials.

As a convenience, users may call (defcredential) before invoking any service functions, passing in their AWS key pair and an optional endpoint:

(defcredential "aws-access-key" "aws-secret-key" "us-west-1")

All subsequent API calls will use the specified credential. If you need to execute a service call with alternate credentials, or against a different region than the one passed to (defcredential), you can wrap these ad-hoc calls in the (with-credential) macro, which takes a vector of key pair credentials and an optional endpoint, like so:

(defcredential "account-1-aws-access-key" "aws-secret-key" "us-west-1")

(describe-instances)
; returns instances visible to account-1

(with-credential ["account-2-aws-access-key" "secret" "us-east-1"]
  (describe-instances))
; returns EC2 instances visible to account-2 running in US-East region

(describe-images :owners ["self"])
; returns images belonging to account-1

Client configuration

You can supply a :client-config entry in the credentials map to configure the ClientConfiguration that the Amazon client uses. This is useful if you need to use a proxy.

(describe-images {:client-config {:proxy-host "proxy.address.com" :proxy-port 8080}})

localstack specific hints

When using localstack (or other AWS mocks) it may be necessary to pass some configuration to the client.

This can no longer be done via (set-s3client-options :path-style-access true), which now fails with a Client is immutable when created with the builder. exception. Instead, pass the configuration in the credentials map.

This is particularly useful for S3, which in a typical localstack scenario needs path-style access set to true.

Here is a working example; note the config keys:

(s3/list-buckets
  {:client-config {
    :path-style-access-enabled true
    :chunked-encoding-disabled false
    :accelerate-mode-enabled false
    :payload-signing-enabled true
    :dualstack-enabled true
    :force-global-bucket-access-enabled true}})

Exception Handling

All functions throw com.amazonaws.AmazonServiceException. If you wish to catch exceptions you can convert the AWS object to a Clojure map like so:

(try
  (create-snapshot :volume-id "vol-ahsg23h"
                   :description "daily backup")
  (catch Exception e
    (log (ex->map e))))

; {:error-code "InvalidParameterValue",
;  :error-type "Unknown",
;  :status-code 400,
;  :request-id "9ba69e16-ed63-41d4-ac02-1f6032cb64de",
;  :service-name "AmazonEC2",
;  :message
;  "Value (vol-ahsg23h) for parameter volumeId is invalid. Expected: 'vol-...'.",
;  :stack-trace "Status Code: 400, AWS Service: AmazonEC2, AWS Request ID: a5b0340a-8f37-4122-941c-ed8d5472b11d, AWS Error Code: InvalidParameterValue, AWS Error Message: Value (vol-ahsg23h) for parameter volumeId is invalid. Expected: 'vol-...'.
;  at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:644)
;   at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:338)
;   at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:190)
;   at com.amazonaws.services.ec2.AmazonEC2Client.invoke(AmazonEC2Client.java:6199)
;   at com.amazonaws.services.ec2.AmazonEC2Client.createSnapshot(AmazonEC2Client.java:1531)
;   .....

For the memory constrained

If you're especially concerned about the size of your uberjar, you can limit the transitive dependencies pulled in by the AWS Java SDK, which currently total 35 MB. You'll need to exclude the entire AWS Java SDK, the Amazon Kinesis Client, and the DynamoDB Streams Kinesis adapter, and then add back only those services you'll be using (although aws-java-sdk-core is always required). For example, if you were only using S3, you could restrict the dependencies to only the required jars like so:

:dependencies [[org.clojure/clojure "1.10.3"]
               [amazonica "0.3.156" :exclusions [com.amazonaws/aws-java-sdk
                                                 com.amazonaws/amazon-kinesis-client
                                                 com.amazonaws/dynamodb-streams-kinesis-adapter]]
               [com.amazonaws/aws-java-sdk-core "1.11.968"]
               [com.amazonaws/aws-java-sdk-s3 "1.11.968"]]

Running the tests

As always, lein test will run all the tests. Note that some of the namespaces require the file ~/.aws/credentials to be present and to be of the same form as required by the official AWS tools:

[default]
aws_access_key_id = AKIAABCDEFGHIEJK
aws_secret_access_key = 6rqzvpAbcd1234++zyx987WUV654sRq

Performance

Amazonica uses reflection extensively, to generate the public Vars, to set the bean properties passed as arguments to those functions, and to invoke the actual service method calls on the underlying AWS Client class. As such, one may wonder if such pervasive use of reflection will result in unacceptable performance. In general, this shouldn't be an issue, as the cost of reflection should be relatively minimal compared to the latency incurred by making a remote call across the network. Furthermore, typical AWS usage is not going to be terribly concerned with performance, except with specific services such as DynamoDB, RDS, SimpleDB, or SQS. But we have done some basic benchmarking against the excellent DynamoDB rotary library, which uses no explicit reflection. Results are shown below. Benchmarking code is available at https://github.com/mcohen01/amazonica-benchmark

Benchmark results

Examples

Autoscaling

(ns com.example
  (:use [amazonica.aws.autoscaling]))

(create-launch-configuration :launch-configuration-name "aws_launch_cfg"
                             :block-device-mappings [
                              {:device-name "/dev/sda1"
                               :virtual-name "vol-b0e519c3"
                               :ebs {:snapshot-id "snap-36295e51"
                                     :volume-size 32}}]
                             :ebs-optimized true
                             :image-id "ami-6fde0d06"
                             :instance-type "m1.large"
                             :spot-price ".10")

(create-auto-scaling-group :auto-scaling-group-name "aws_autoscale_grp"
                           :availability-zones ["us-east-1a" "us-east-1b"]
                           :desired-capacity 3
                           :health-check-grace-period 300
                           :health-check-type "EC2"
                           :launch-configuration-name "aws_launch_cfg"
                           :min-size 3
                           :max-size 3)

(describe-auto-scaling-instances)

Batch

(ns com.example
  (:use [amazonica.aws.batch]))

(submit-job :job-name "my-job"
            :job-definition "my-job-definition"
            :job-queue "my-job-queue"
            :parameters {"example-url" "example.com"})

CloudFormation

(ns com.example
  (:use [amazonica.aws.cloudformation]))

(create-stack :stack-name "my-stack"
              :template-url "abcd1234.s3.amazonaws.com")

(describe-stack-resources :stack-name "my_cloud_stack")

CloudFront

(ns com.example
  (:use [amazonica.aws.cloudfront]))

(create-distribution  :distribution-config {
                      :enabled true
                      :default-root-object "index.html"
                      :origins
                       {:quantity 0
                        :items []}
                      :logging
                       {:enabled false
                        :include-cookies false
                        :bucket "abcd1234.s3.amazonaws.com"
                        :prefix "cflog_"}
                      :caller-reference 12345
                      :aliases
                       {:items ["m.example.com" "www.example.com"]
                        :quantity 2}
                      :cache-behaviors
                       {:quantity 0
                        :items []}
                      :comment "example"
                      :default-cache-behavior
                       {:target-origin-id "MyOrigin"
                        :forwarded-values
                          {:query-string false
                           :cookies
                             {:forward "none"}}
                        :trusted-signers
                          {:enabled false
                           :quantity 0}
                        :viewer-protocol-policy "allow-all"
                        :min-ttl 3600}
                      :price-class "PriceClass_All"})

(list-distributions :max-items 10)

CloudSearch

(ns com.example
  (:use [amazonica.aws.cloudsearch]))

(create-domain :domain-name "my-index")

(index-documents :domain-name "my-index")

CloudSearchV2

(ns com.example
  (:use [amazonica.aws.cloudsearchv2]))

(create-domain :domain-name "my-index")

(index-documents :domain-name "my-index")

(build-suggesters :domain-name "my-index")

(list-domains)

CloudSearchDomain

(require '[amazonica.aws.cloudsearchdomain :as csd]
         '[clojure.java.io :as io])

;; get the document and search service endpoints
(clojure.pprint/pprint
  (amazonica.aws.cloudsearchv2/describe-domains))

(csd/set-endpoint "doc-domain-name-6fihexkq1234567895wm.us-east-1.cloudsearch.amazonaws.com")

(csd/upload-documents
  :content-type "application/json"
  :documents (io/input-stream json-documents))

(csd/set-endpoint "search-domain-name-6fihexkq1234567895wm.us-east-1.cloudsearch.amazonaws.com")

;; http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/cloudsearchdomain/model/SearchRequest.html
(csd/search :query "drumpf")

(csd/suggest :query "{\"query\": \"make donald drumpf\"}"
             :suggester "url_suggester")

CloudWatch

(ns com.example
  (:use [amazonica.aws.cloudwatch]))

(put-metric-alarm :alarm-name "my-alarm"
                  :actions-enabled true
                  :namespace "AWS/EC2"
                  :metric-name "CPUUtilization"
                  :statistic "Average"
                  :comparison-operator "GreaterThanThreshold"
                  :evaluation-periods 5
                  :period 60
                  :threshold 50.0)

To put metric data (see the CloudWatch StandardUnit enum for valid unit types):

(put-metric-data
    {:endpoint "us-west-1"} ;; Defaults to us-east-1
    :namespace "test_namespace"
    :metric-data [{:metric-name "test_metric"
                   :unit "Count"
                   :value 1.0
                   :dimensions [{:name "test_name" :value "test_value"}]}])

To batch-get metric data:

(get-metric-data
 :start-time "2018-06-14T00:00:00Z"
 :end-time   "2018-06-15T00:00:00Z"
 :metric-data-queries [{:id "test"
                        :metric-stat {:metric {:namespace "AWS/DynamoDB"
                                               :metric-name "ProvisionedReadCapacityUnits"
                                               :dimensions [{:name "TableName"
                                                             :value "MyTableName"}]}
                                      :period 86400
                                      :stat "Sum"
                                      :unit "Count"}}])

CloudWatchEvents

(ns com.example
  (:use [amazonica.aws.cloudwatchevents]))

(put-rule
    :name "nightly-backup"
    :description "Backup DB nightly at 10:00 UTC (2 AM or 3 AM Pacific)"
    :schedule-expression "cron(0 10 * * ? *)")

(put-targets
    :rule "nightly-backup"
    :targets [{:id    "backup-lambda"
               :arn   "arn:aws:lambda:us-east-1:123456789012:function:backup-lambda"
               :input (json/write-str {"whatever" "arguments"})}])

CodeDeploy

(ns com.example
  (:use [amazonica.aws.codedeploy]))

(list-applications)

CognitoIdentityProviders

(ns com.example
  (:require [amazonica.aws.cognitoidp :refer :all]))

(list-user-pools {:max-results 2})
=> {:user-pools [{:lambda-config {}, :last-modified-date "2017-06-16T14:16:28.950-03:00", :creation-date "2017-06-15T16:23:04.555-03:00", :name "Amazonica", :id "us-west-1_example"}]}

Comprehend

(ns com.example
  (:require [amazonica.aws.comprehend :refer :all]))

(detect-entities {:language-code "en" :text "Hi my name is Joe Bloggs and I live in Glasgow, Scotland"})
=> {:entities [{:type "PERSON", :text "Joe Bloggs", :score 0.99758613, :begin-offset 14, :end-offset 24} {:type "LOCATION", :text "Glasgow, Scotland", :score 0.93267196, :begin-offset 39, :end-offset 56}]}

DataPipeline

(ns com.example
  (:use [amazonica.aws.datapipeline]))

(create-pipeline :name "my-pipeline"
                 :unique-id "mp")

(put-pipeline-definition  :pipeline-id "df-07746012XJFK4DK1D4QW"
                          :pipeline-objects [{:name "my-pipeline-object"
                                              :id "my-pl-object-id"
                                              :fields [{:key "some-key"
                                                        :string-value "foobar"}]}])

(list-pipelines)

(delete-pipeline :pipeline-id pid)

DynamoDBV2

(ns com.example
  (:use [amazonica.aws.dynamodbv2]))

(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           :endpoint   "http://localhost:8000"})

(create-table cred
              :table-name "TestTable"
              :key-schema
                [{:attribute-name "id"   :key-type "HASH"}
                 {:attribute-name "date" :key-type "RANGE"}]
              :attribute-definitions
                [{:attribute-name "id"      :attribute-type "S"}
                 {:attribute-name "date"    :attribute-type "N"}
                 {:attribute-name "column1" :attribute-type "S"}
                 {:attribute-name "column2" :attribute-type "S"}]
              :local-secondary-indexes
                [{:index-name "column1_idx"
                  :key-schema
                   [{:attribute-name "id"   :key-type "HASH"}
                    {:attribute-name "column1" :key-type "RANGE"}]
                 :projection
                   {:projection-type "INCLUDE"
                    :non-key-attributes ["id" "date" "column1"]}}
                 {:index-name "column2_idx"
                  :key-schema
                   [{:attribute-name "id"   :key-type "HASH"}
                    {:attribute-name "column2" :key-type "RANGE"}]
                 :projection {:projection-type "ALL"}}]
              :provisioned-throughput
                {:read-capacity-units 1
                 :write-capacity-units 1})

(put-item cred
          :table-name "TestTable"
          :return-consumed-capacity "TOTAL"
          :return-item-collection-metrics "SIZE"
          :item {
            :id "foo"
            :date 123456
            :text "barbaz"
            :column1 "first name"
            :column2 "last name"
            :numberSet #{1 2 3}
            :stringSet #{"foo" "bar"}
            :mixedList [1 "foo"]
            :mixedMap {:name "baz" :secret 42}})

(get-item cred
          :table-name "TestTable"
          :key {:id {:s "foo"}
                :date {:n 123456}})

(query cred
       :table-name "TestTable"
       :limit 1
       :index-name "column1_idx"
       :select "ALL_ATTRIBUTES"
       :scan-index-forward true
       :key-conditions
        {:id      {:attribute-value-list ["foo"]      :comparison-operator "EQ"}
         :column1 {:attribute-value-list ["first na"] :comparison-operator "BEGINS_WITH"}})

(batch-write-item
  cred
  :return-consumed-capacity "TOTAL"
  :return-item-collection-metrics "SIZE"
  :request-items
    {"TestTable"
      [{:delete-request
         {:key {:id "foo"
                :date 123456}}}
       {:put-request
         {:item {:id "foobar"
                 :date 3172671
                 :text "bunny"
                 :column1 "funky"}}}]})

;; dynamodb-expressions https://github.com/brabster/dynamodb-expressions
;; exists to make update expressions easier to write for Amazonica.
(update-item
  cred
  :table-name "TestTable"
  :key {:id "foo"}
  :update-expression "ADD #my_foo :x SET bar.baz = :y"
  :expression-attribute-names {"#my_foo" "my-foo"}
  :expression-attribute-values {":x" 1
                                ":y" "barbaz"})

(batch-get-item
  cred
  :return-consumed-capacity "TOTAL"
  :request-items {
  "TestTable" {:keys [{"id"   {:s "foobar"}
                       "date" {:n 3172671}}
                      {"id"   {:s "foo"}
                       "date" {:n 123456}}]
               :consistent-read true
               :attributes-to-get ["id" "text" "column1"]}})

(scan cred :table-name "TestTable")

(describe-table cred :table-name "TestTable")

(delete-table cred :table-name "TestTable")

;; Amazonica depends on `[com.amazonaws/amazon-kinesis-client]`,
;; which has a dependency on `[com.amazonaws/aws-java-sdk-dynamodb]`.
;; The version of this dependency is too old to support TTL,
;; so you'll need to exclude it and explicitly depend on a recent
;; version of `com.amazonaws/aws-java-sdk-dynamodb` to use this feature.

(update-time-to-live
  cred
  :table-name "TestTable"
  :time-to-live-specification {:attribute-name "foo" :enabled true})

EC2

(ns com.example
  (:use [amazonica.aws.ec2]))

(-> (run-instances :image-id "ami-54f71039"
                   :instance-type "c3.large"
                   :min-count 1
                   :max-count 1)
    (get-in [:reservation :instances 0 :instance-id]))

(describe-images :owners ["self"])

(describe-instances :filters [{:name "tag:env" :values ["production"]}])

(create-image :name "my_test_image"
              :instance-id "i-1b9a9f71"
              :description "test image - safe to delete"
              :block-device-mappings [
                {:device-name  "/dev/sda1"
                 :virtual-name "myvirtual"
                 :ebs {
                   :volume-size 8
                   :volume-type "standard"
                   :delete-on-termination true}}])

(create-snapshot :volume-id   "vol-8a4857fa"
                 :description "my_new_snapshot")

EC2InstanceConnect

(ns com.example
  (:require [amazonica.aws.ec2instanceconnect :refer :all]))

(send-ssh-public-key :availability-zone "eu-west-1"
                     :instance-id "i-1b9a9f71a756fe98"
                     :instance-os-user "ec2-user"
                     :ssh-public-key (slurp "/path/to/public/ssh/key"))

ECS

(ns com.example
  (:require [amazonica.aws.ecs :refer :all]))

(register-task-definition
 {:family "grafana2",
  :container-definitions [{:name "grafana2"
                           :image "bbinet/grafana2",
                           :port-mappings [{:container-port 3000, :host-port 3000}]
                           :memory 300
                           :cpu 300
                           }]})
(describe-task-definition :task-definition "grafana2")
(list-task-definitions :family-prefix "grafana2")

;; create cluster
(create-cluster :cluster-name "Amazonica")

(list-clusters)
(describe-clusters)

(create-service :cluster "Amazonica"
                :service-name "grafana2"
                :task-definition "grafana2" :desired-count 1
                ;;:role "ecsServiceRole"
                ;;:load-balancers [...]
                )
(list-services :cluster "Amazonica")
(describe-services :cluster "Amazonica" :services ["grafana2"])

;; add ec2 instances to your cluster

(update-service :cluster "Amazonica" :service "grafana2" :desired-count 0)
(delete-service :cluster "Amazonica" :service "grafana2")
(delete-cluster :cluster "Amazonica")

;; run task (LaunchType and AssignPublicIp are enums from
;; com.amazonaws.services.ecs.model; import them, or pass strings like "FARGATE")
(run-task
  :cluster "Amazonica"
  :launch-type LaunchType/FARGATE
  :task-definition "task-def-name"
  :overrides {:container-overrides [{:name    "container-name"
                                     :command ["java" "-jar" "artifact.jar" "arg1" "arg2"]}]}
  :network-configuration {:aws-vpc-configuration {:assign-public-ip AssignPublicIp/ENABLED
                                                  :subnets          ["subnet-XXXXXXXX"]
                                                  :security-groups  ["sg-XXXXXXXXXXXXXXXX"]}})

ECR

(require '[amazonica.aws.ecr :as ecr])

(ecr/describe-repositories {})

(ecr/create-repository :repository-name "amazonica")

(ecr/get-authorization-token {})

(ecr/list-images :repository-name "amazonica")

(ecr/delete-repository :repository-name "amazonica")

Elasticache

(ns com.example
  (:use [amazonica.aws.elasticache]))

(describe-cache-engine-versions)

(create-cache-cluster :engine "memcached"
                      :engine-version "1.4.14"
                      :num-cache-nodes 1
                      :cache-node-type "cache.t1.micro"
                      :cache-cluster-id "memcached-cluster")

(describe-cache-clusters)

(describe-events)

(delete-cache-cluster :cache-cluster-id "memcached-cluster")

ElasticBeanstalk

(ns com.example
  (:use [amazonica.aws.elasticbeanstalk]))

(describe-applications)

(describe-environments)

(create-environment creds
                    {:application-name "app"
                     :environment-name "env"
                     :version-label "1.0"
                     :solution-stack-name "64bit Amazon Linux 2014.09 v1.0.9 running Docker 1.2.0"
                     :option-settings [{:namespace "aws:elb:loadbalancer"
                                        :option-name "LoadBalancerHTTPSPort"
                                        :value "443"}
                                       {:namespace "aws:elb:loadbalancer"
                                        :option-name "LoadBalancerHTTPPort"
                                        :value "OFF"}]}))

(describe-configuration-settings {:application-name "app" :environment-name "env"})

ElasticLoadBalancing

(ns com.example
  (:use [amazonica.aws.elasticloadbalancing]))

(deregister-instances-from-load-balancer :load-balancer-name "my-ELB"
                                         :instances [{:instance-id "i-1ed40bad"}])

(register-instances-with-load-balancer :load-balancer-name "my-ELB"
                                       :instances [{:instance-id "i-1fa370ea"}])

ElasticMapReduce

(ns com.example
  (:use [amazonica.aws
          elasticmapreduce
          s3]))

(create-bucket :bucket-name "emr-logs"
               :access-control-list {:grant-permission ["LogDelivery" "Write"]})

(set-bucket-logging-configuration :bucket-name "emr-logs"
                                  :logging-configuration
                                    {:log-file-prefix "hadoop-job_"
                                     :destination-bucket-name "emr-logs"})

(run-job-flow :name "my-job-flow"
              :log-uri "s3n://emr-logs/logs"
              :instances
                {:instance-groups [
                   {:instance-type "m1.large"
                    :instance-role "MASTER"
                    :instance-count 1
                    :market "SPOT"
                    :bid-price "0.10"}]}
              :steps [
                {:name "my-step"
                 :hadoop-jar-step
                   {:jar "s3n://beee0534-ad04-4143-9894-8ddb0e4ebd31/hadoop-jobs/bigml"
                    :main-class "bigml.core"
                    :args ["s3n://beee0534-ad04-4143-9894-8ddb0e4ebd31/data" "output"]}}])

(list-clusters)

(describe-cluster :cluster-id "j-38BW9W0NN8YGV")

(list-steps :cluster-id "j-38BW9W0NN8YGV")

(list-bootstrap-actions :cluster-id "j-38BW9W0NN8YGV")

ElasticsearchService

(ns com.example
  (:use [amazonica.aws.elasticsearch]))

(list-domain-names {})

ElasticTranscoder

(ns com.example
  (:use [amazonica.aws.elastictranscoder]))

(list-pipelines)
;; -> {:pipelines []}

(list-presets)
;; -> {:presets [{:description "System preset generic 1080p", ....}]}

;; status can be :Submitted :Progressing :Complete :Canceled :Error
(list-jobs-by-status :status :Complete)
;; -> ...

(def new-pipeline-id (-> (create-pipeline
                           :role "arn:aws:iam::289431957111:role/Elastic_Transcoder_Default_Role",
                           :name "avi-to-mp4",
                           :input-bucket "avi-to-convert",
                           :output-bucket "converted-mp4")
                       :pipeline
                       :id))
;; -> "1111111111111-11aa11"

(create-job :pipeline-id "1111111111111-11aa11"
            :input {:key "my/s3/input/obj/key.avi"}
            :outputs [{:key "my/s3/output/obj/key.avi"
                       :preset-id "1351620000001-000030"}])

Forecast

(require '[amazonica.aws.forecast :as fc])

(fc/create-dataset :dataset-name "hourly_ts"
                   :data-frequency "H"
                   :dataset-type "TARGET_TIME_SERIES"
                   :domain "CUSTOM"
                   :schema {
                     :attributes [
                       {
                         :attribute-name "timestamp"
                         :attribute-type "timestamp"
                       },
                       {
                         :attribute-name "target_value"
                         :attribute-type "float"
                       },
                       {
                         :attribute-name "item_id"
                         :attribute-type "string"
                       }]})

;; {:dataset-arn "arn:aws:forecast:us-east-1:123456789012:dataset/hourly_ts"}

(fc/create-dataset-group :dataset-arns ["arn:aws:forecast:us-east-1:123456789012:dataset/hourly_ts"]
                         :dataset-group-name "hourly_ts"
                         :domain "CUSTOM")

;; {:dataset-group-arn "arn:aws:forecast:us-east-1:123456789012:dataset-group/hourly_ts"}


(require '[amazonica.aws.s3 :as s3])

(s3/put-object "amazonica-forecast"
               "hourly_ts.csv"
               (java.io.File. "/path/to/hourly_ts.csv"))


(fc/create-dataset-import-job :dataset-import-job-name "import_hourly_ts_job"
                              :dataset-arn "arn:aws:forecast:us-east-1:123456789012:dataset/hourly_ts"
                              :data-source {
                                :s3-config {
                                  :path "s3://amazonica-forecast/hourly_ts.csv"
                                  :role-arn "arn:aws:iam::123456789012:role/amazonica"}})

;; {:dataset-import-job-arn "arn:aws:forecast:us-east-1:123456789012:dataset-import-job/hourly_ts/import_hourly_ts_job"}

(fc/create-predictor :input-data-config {
                       :dataset-group-arn "arn:aws:forecast:us-east-1:123456789012:dataset-group/hourly_ts"}
                     :algorithm-arn "arn:aws:forecast:::algorithm/ARIMA"
                     :forecast-horizon 336
                     :featurization-config {
                       :forecast-frequency "H"}
                     :predictor-name "hourly_ts_predictor")

(fc/create-forecast :forecast-name "hourly_ts"
                    :predictor-arn "arn:aws:forecast:us-east-1:123456789012:predictor/hourly_ts_predictor")


(require '[amazonica.aws.forecastquery :as fq])

(fq/query-forecast :forecast-arn "arn:aws:forecast:us-east-1:123456789012:forecast/hourly_ts"
                   :filters {"item_id" "item1"})

(fc/delete-forecast :forecast-arn "arn:aws:forecast:us-east-1:123456789012:forecast/hourly_ts")

(fc/delete-predictor :predictor-arn "arn:aws:forecast:us-east-1:123456789012:predictor/hourly_ts_predictor")

(fc/delete-dataset :dataset-arn "arn:aws:forecast:us-east-1:123456789012:dataset/hourly_ts")

(fc/delete-dataset-group :dataset-group-arn "arn:aws:forecast:us-east-1:123456789012:dataset-group/hourly_ts")

Glacier

(ns com.example
  (:use [amazonica.aws.glacier]))

(create-vault :vault-name "my-vault")

(describe-vault :vault-name "my-vault")

(list-vaults :limit 10)

(upload-archive :vault-name "my-vault"
                :body "upload.txt")

(delete-archive :account-id "-"
                :vault-name "my-vault"
                :archive-id "pgy30P2FTNu_d7buSVrGawDsfKczlrCG7Hy6MQg53ibeIGXNFZjElYMYFm90mHEUgEbqjwHqPLVko24HWy7DU9roCnZ1djEmT-1REvnHKHGPgkuzVlMIYk3bn3XhqxLJ2qS22EYgzg")

(delete-vault :vault-name "my-vault")

IdentityManagement

(ns com.example
  (:use [amazonica.aws.identitymanagement]))

(def policy "{\"Version\": \"2012-10-17\", \"Statement\": [{\"Action\": [\"s3:*\"], \"Effect\": \"Allow\", \"Resource\": [\"arn:aws:s3:::bucket-name/*\"]}]}")

(create-user :user-name "amazonica")
(create-access-key :user-name "amazonica")
(put-user-policy
  :user-name "amazonica"
  :policy-name "s3policy"
  :policy-document policy)

IoT

(ns com.example
  (:require [amazonica.aws.iot :refer :all]))

(list-things {})
;; => {:things [{:thing-name "YourThing"}]}

(create-thing :thing-name "MyThing")
;; => {:thing-name "MyThing" :thing-arn "arn:aws:iot:...thing/MyThing"}

(ns com.example
  (:require [amazonica.aws.iotdata :refer :all]))

(get-thing-shadow :thing-name "MyThing")

Kinesis

(ns com.example
  (:use [amazonica.aws.kinesis]))

(create-stream "my-stream" 1)

(list-streams)

(describe-stream "my-stream")

(merge-shards "my-stream" "shardId-000000000000" "shardId-000000000001")

(split-shard "my-stream" "shard-id" "new-starting-hash-key")


;; write to the stream
;; #'put-record takes the name of the stream, any value as data, and the partition key
(let [data {:name "any data"
            :col  #{"anything" "at" "all"}
            :date (java.util.Date.)}]
  (put-record "my-stream"
              data
              (str (java.util.UUID/randomUUID))))
;; if anything BUT a java.nio.ByteBuffer is supplied as the second
;; argument, then the data will be transparently serialized and compressed
;; using Nippy, and deserialized on calls to (get-records), or via a worker
;; (see below), provided that a deserializer function is NOT supplied. If
;; you do pass a ByteBuffer instance as the data argument, then you'll need
;; to also provide a deserializer function when fetching records.


;; For bulk uploading, we provide a `put-records` function which takes in a sequence of maps
;; that contain the partition-key and data.  As with `put-record` the data will be handled via
;; Nippy if it is not of a `java.nio.ByteBuffer`.
(put-records "my-stream"
             [{:partition-key "x5h2ch" :data ["foo" "bar" "baz"]}
              {:partition-key "x5j3ak" :data ["quux"]}])


;; optional :deserializer function which will be passed the raw
;; java.nio.ByteBuffer representing the data blob of each record
(defn- get-raw-bytes [byte-buffer]
  (let [b (byte-array (.remaining byte-buffer))]
    (.get byte-buffer b)
    b))

;; manually read from a specific shard
;; this is not the preferred way to consume a shard
(get-records :deserializer get-raw-bytes
             :shard-iterator (get-shard-iterator "my-stream"
                                                 shard-id
                                                 "TRIM_HORIZON"))
;; if no :deserializer function is supplied then it will be assumed
;; that the records were put into Kinesis by Amazonica, and hence,
;; the data was serialized and compressed by Nippy (e.g. Snappy)



;; better way to consume a shard....create and run a worker
;; :app :stream and :processor keys are required
;; :credentials, :checkpoint and :dynamodb-adaptor-client? keys are optional

;; if no :checkpoint is provided the worker will automatically checkpoint every 60 seconds
;; alternatively, supply a numeric value for duration in seconds between checkpoints
;; for full checkpoint control, set :checkpoint to false and return true from the
;; :processor function only when you want checkpoint to be called

;; if no :credentials key is provided the default authentication scheme is
;; used (preferable); see the Authentication section above

;; if no :dynamodb-adaptor-client? is provided, then it defaults to not using the
;; DynamoDB Streams Kinesis Adaptor. Set this flag to true when consuming streams
;; from DynamoDB

;; returns the UUID assigned to this worker
(worker! :app "app-name"
         :stream "my-stream"
         :checkpoint false ;; default to disabled checkpointing, can still force
                           ;; a checkpoint by returning true from the processor function
         :processor (fn [records]
                      (doseq [row records]
                        (println (:data row)
                                 (:sequence-number row)
                                 (:partition-key row)))))

(delete-stream "my-stream")

Kinesis Analytics

(ns com.example
  (:require [amazonica.aws.kinesisanalytics :as ka]))

(ka/create-application
  :application-name "my-ka-app"
  :inputs [{:name-prefix "prefix_"
            :input-schema {:record-format {:record-format-type "JSON"}}
            :kinesis-streams-input {:resource-arn "foobar"}}]
  :outputs [...])

KinesisFirehose

(ns com.example
  (:require [amazonica.aws.kinesisfirehose :as fh])
  (:import [java.nio ByteBuffer]))

;; List delivery streams
(fh/list-delivery-streams)
;; => {:delivery-stream-names ("test-firehose" "test-firehose-2"), :has-more-delivery-streams false}

(fh/describe-delivery-stream :delivery-stream-name "my-test-firehose")
;; => {:delivery-stream-description
;;       {:version-id "2", ....}}

(fh/create-delivery-stream :delivery-stream-name "my-test-firehose-2"
                           :s3DestinationConfiguration {:role-arn  "arn:aws:iam::xxxx:role/firehose_delivery_role",
                                                        :bucket-arn "arn:aws:s3:::my-test-bucket"})
;; => {:delivery-stream-arn "arn:aws:firehose:us-west-2:xxxxx:deliverystream/my-test-firehose-2"}

;; Describe delivery stream
(fh/describe-delivery-stream cred :delivery-stream-name stream-name)

;; Update destination
(fh/update-destination cred {:current-delivery-stream-version-id version-id
                             :delivery-stream-name stream-name
                             :destination-id destination-id
                             :s3-destination-update {:BucketARN (str "arn:aws:s3:::" new-bucket-name)
                                                     :BufferingHints {:IntervalInSeconds 300
                                                                      :SizeInMBs 5}
                                                     :CompressionFormat "UNCOMPRESSED"
                                                     :EncryptionConfiguration {:NoEncryptionConfig "NoEncryption"}
                                                     :Prefix "string"
                                                     :RoleARN "arn:aws:iam::123456789012:role/firehose_delivery_role"}})

;; Put a batch of records to the stream. Records are converted to instances of
;; ByteBuffer if possible; sequences are converted to CSV-formatted strings for
;; ingestion into Redshift.
(fh/put-record-batch cred stream-name [[1 2 3 4]
                                       ["test" 2 3 4]
                                       "\"test\",2,3,4"
                                       (ByteBuffer/wrap (.getBytes "test,2,3,4"))])

;; Put individual record to stream.
(fh/put-record stream-name "test")

;; Delete delivery stream
(fh/delete-delivery-stream "stream-name")

KMS

(ns com.example
  (:use [amazonica.aws.kms]))

(create-key)

(list-keys)

(disable-key "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")

Logs

(ns com.example
  (:use [amazonica.aws.logs]))

(describe-log-streams :log-group-name "my-log-group"
                      :order-by "LastEventTime"
                      :descending true)

Lambda

(ns com.example
  (:use [amazonica.aws.lambda]))

(let [role "arn:aws:iam::123456789012:role/some-lambda-role"
      handler "exports.helloWorld = function(event, context) {
                  console.log('value1 = ' + event.key1)
                  console.log('value2 = ' + event.key2)
                  console.log('value3 = ' + event.key3)
                  context.done(null, 'Hello World')
                }"]
  (create-function :role role :function handler))

(invoke :function-name "helloWorld"
        :payload "{\"key1\": 1, \"key2\": 2, \"key3\": 3}")

OpsWorks

(ns com.example
  (:use [amazonica.aws.opsworks]))

(create-stack :name "my-stack"
              :region "us-east-1"
              :default-os "Ubuntu 12.04 LTS"
              :service-role-arn "arn:aws:iam::676820690883:role/aws-opsworks-service-role")

(create-layer :name "webapp-layer"
              :stack-id "dafa328e-c529-41af-89d3-12840a31abad"
              :enable-auto-healing true
              :auto-assign-elastic-ips true
              :volume-configurations [
                {:mount-point "/data"
                 :number-of-disks 1
                 :size 50}])

(create-instance :hostname "node-app-1"
                 :instance-type "m1.large"
                 :stack-id "dafa328e-c529-41af-89d3-12840a31abad"
                 :layer-ids ["660d00da-c533-43d4-8c7f-2df240fd563f"]
                 :availability-zone "us-east-1a"
                 :autoscaling-type "LoadBasedAutoScaling"
                 :os "Ubuntu 12.04 LTS"
                 :ssh-key-name "admin")

(describe-stacks :stack-ids ["dafa328e-c529-41af-89d3-12840a31abad"])

(describe-layers :stack-id "dafa328e-c529-41af-89d3-12840a31abad")

(describe-instances :stack-id "dafa328e-c529-41af-89d3-12840a31abad"
                    :layer-id "660d00da-c533-43d4-8c7f-2df240fd563f"
                    :instance-id "93bc5049-1bd4-49c8-a6ef-e84145807f71")

(start-stack :stack-id "660d00da-c533-43d4-8c7f-2df240fd563f")

(start-instance :instance-id "93bc5049-1bd4-49c8-a6ef-e84145807f71")

Pinpoint

(ns com.example
  (:require [amazonica.aws.pinpoint :as pp]))

(defn app-id []
  (-> (pp/get-apps {})
      :applications-response
      :item
      first
      :id))

(pp/create-segment {:application-id (app-id)})

(pp/create-campaign
  {:application-id (app-id)
   :write-campaign-request
    {:segment-id "a668b484bec94cb1252772032ecdf540"
     :name "my-campaign"
     :schedule
       {:frequency "ONCE"
        :start-time "2017-09-27T20:36:11+00:00"}
     :message-configuration
       {:default-message
         {:body "hello world"}}}})


(pp/send-messages
  {:application-id (app-id)
   :message-request
     {:addresses {"+18132401139" {:channel-type "SMS"}}
      :message-configuration {:default-message {:body "hello world"}}}})

Redshift

(ns com.example
  (:use [amazonica.aws.redshift]))

(create-cluster :availability-zone "us-east-1a"
                :cluster-type "multi-node"
                :db-name "dw"
                :master-username "scott"
                :master-user-password "tiger"
                :number-of-nodes 3)

Route53

(ns com.example
  (:use [amazonica.aws.route53]))

(create-health-check :health-check-config {:port 80,
                                           :type "HTTP",
                                           :ipaddress "127.0.0.1",
                                           :fully-qualified-domain-name "example.com"})

(get-health-check :health-check-id "ce6a4aeb-acf1-4923-a116-cd9ae2c30ee3")

(create-hosted-zone :name "example69.com"
                    :caller-reference (str (java.util.UUID/randomUUID)))

(get-hosted-zone :id "Z3TKY0VR5CH45U")

(list-hosted-zones)

(list-health-checks)

(list-resource-record-sets :hosted-zone-id "ZN8D0HXQLVRRL")

(delete-health-check :health-check-id "99999999-1234-4923-a116-cd9ae2c30ee3")

(delete-hosted-zone :id "my-bogus-hosted-zone")

Route53Domains

(ns com.example
  (:use [amazonica.aws.route53domains]))

(list-domains)

(check-domain-availability :domain-name "amazon.com")

(check-domain-transferability :domain-name "amazon.com")

(get-domain-detail :domain-name "amazon.com")

(let [contact {:first-name "Michael"
               :last-name "Cohen"
               :organization-name "amazonica"
               :address-line1 "375 11th St"
               :city "San Francisco"
               :state "CA"
               :zip-code "94103-2097"
               :country-code "US"
               :email ""
               :phone-number "+1.4158675309"
               :contact-type "PERSON"}]
  (register-domain :domain-name "amazon.com"
                   :duration-in-years 10
                   :auto-renew true
                   :tech-contact contact
                   :admin-contact contact
                   :registrant-contact contact))

S3

(ns com.example
  (:use [amazonica.aws.s3]
        [amazonica.aws.s3transfer]))

(create-bucket "two-peas")

;; put object with server side encryption
(put-object :bucket-name "two-peas"
            :key "foo"
            :metadata {:server-side-encryption "AES256"}
            :file upload-file)

(copy-object bucket1 "key-1" bucket2 "key-2")

(-> (get-object bucket2 "key-2")
    :input-stream
    slurp)
;; (note that the InputStream returned by GetObject should be closed,
;; e.g. via slurp here, or the HTTP connection pool will be exhausted
;; after several objects are retrieved)

(delete-object :bucket-name "two-peas" :key "foo")

(generate-presigned-url bucket1 "key-1" (-> 6 hours from-now)) ; assumes clj-time.core's hours/from-now

(def file "big-file.jar")
(def down-dir (java.io.File. (str "/tmp/" file)))
(def bucket "my-bucket")

;; set S3 Client Options
(s3/list-buckets
  {:client-config {
    :path-style-access-enabled false
    :chunked-encoding-disabled false
    :accelerate-mode-enabled false
    :payload-signing-enabled true
    :dualstack-enabled true
    :force-global-bucket-access-enabled true}})

;; list objects in bucket
(list-objects-v2
  {:bucket-name bucket
   :prefix "keys/start/with/this"  ; optional
   :continuation-token (:next-continuation-token prev-response)})  ; when paging through results


;; assumes (:import [java.security KeyPairGenerator SecureRandom])
(def key-pair
  (let [kg (KeyPairGenerator/getInstance "RSA")]
    (.initialize kg 1024 (SecureRandom.))
    (.generateKeyPair kg)))

;; put object with client side encryption
(put-object :bucket-name bucket1
            :key "foo"
            :encryption {:key-pair key-pair}
            :file upload-file)

;; get object and decrypt
(get-object :bucket-name bucket1
            :encryption {:key-pair key-pair}
            :key "foo")))))

;; get tags for the bucket
(get-bucket-tagging-configuration {:bucket-name bucket})

;; get just object metadata, e.g. content-length without fetching content:
(get-object-metadata :bucket-name bucket1
                     :key "foo")

;; put object from stream
(def some-bytes (.getBytes "Amazonica" "UTF-8"))
(def input-stream (java.io.ByteArrayInputStream. some-bytes))
(put-object :bucket-name bucket1
            :key "stream"
            :input-stream input-stream
            :metadata {:content-length (count some-bytes)})


(let [upl (upload bucket
                  file
                  down-dir)]
  ((:add-progress-listener upl) #(println %)))

(let [dl  (download bucket
                    file
                    down-dir)
      listener #(if (= :completed (:event %))
                    (println ((:object-metadata dl)))
                    (println %))]
  ((:add-progress-listener dl) listener))


;; setup S3 bucket for static website hosting
(create-bucket bucket-name)

(put-object bucket-name
            "index.html"
            (java.io.File. "index.html"))
(let [policy {:Version "2012-10-17"
              :Statement [{:Sid "PublicReadGetObject"
                           :Effect "Allow"
                           :Principal "*"
                           :Action ["s3:GetObject"]
                           :Resource [(str "arn:aws:s3:::" bucket-name "/*")]}]}
      json (cheshire.core/generate-string policy)]
  (set-bucket-policy bucket-name json))

(set-bucket-website-configuration
  :bucket-name bucket-name
  :configuration {
    :index-document-suffix "index.html"})

(s3/set-bucket-notification-configuration
  :bucket-name "my.bucket.name"
  :notification-configuration
    {:configurations
      {:some-config-name
        {:queue "arn:aws:sqs:eu-west-1:123456789012:my-sqs-queue-name"
         :events #{"s3:ObjectCreated:*"}
         ;; list of key-value pairs as maps or nested 2-element lists
         :filter [{"foo" "bar"}
                  {:baz "quux"}
                  ["key" "value"]]}}})


(s3/set-bucket-tagging-configuration
   :bucket-name "my.bucket.name"
   :tagging-configuration
     {:tag-sets [{:Formation "notlive" :foo "bar" :baz "quux"}]})

SimpleDB

(ns com.example
  (:require [amazonica.aws.simpledb :as sdb]))

(sdb/create-domain :domain-name "domain")

(sdb/list-domains)

(sdb/put-attributes :domain-name "domain"
                    :item-name "my-item"
                    :attributes [{:name "foo"
                                  :value "bar"}
                                 {:name "baz"
                                  :value 42}])

(sdb/select :select-expression
            "select * from `test.domain` where baz = '42' ")

(sdb/delete-domain :domain-name "domain")

SimpleEmail

(ns com.example
  (:require [amazonica.aws.simpleemail :as ses]))

(ses/send-email :destination {:to-addresses ["to@example.com"]}
                :source "from@example.com"
                :message {:subject "Test Subject"
                          :body {:html "testing 1-2-3-4"
                                 :text "testing 1-2-3-4"}})

SimpleSystemsManager

(ns com.example
  (:require [amazonica.aws.simplesystemsmanagement :as ssm]))

(ssm/get-parameter :name "my-param-name")

SimpleWorkflow

(ns com.example
  (:use [amazonica.aws.simpleworkflow]))

(def domain "my-wkfl")
(def version "1.0")

(register-domain :name domain
                 :workflow-execution-retention-period-in-days "30")

(register-activity-type :domain domain
                        :name "my-workflow"
                        :version version)

(register-workflow-type :domain domain
                        :name "my-workflow"
                        :version version)


(deprecate-activity-type :domain domain
                         :activity-type {:name "my-workflow"
                                         :version version})

(deprecate-workflow-type :domain domain
                         :workflow-type {:name "my-workflow"
                                         :version version})

(deprecate-domain :name domain)

SNS

(ns com.example
  (:use [amazonica.aws.sns]))

(create-topic :name "my-topic")

(list-topics)

(subscribe :protocol "email"
           :topic-arn "arn:aws:sns:us-east-1:676820690883:my-topic"
           :endpoint "[email protected]")

(subscribe :protocol "lambda"
           :topic-arn "arn:aws:sns:us-east-1:676820690883:my-topic"
           :endpoint "arn:aws:lambda:us-east-1:676820690883:function:my-function")

;; provide endpoint in creds for topics in non-default region
(subscribe {:endpoint "eu-west-1"}
           :protocol "lambda"
           :topic-arn "arn:aws:sns:eu-west-1:676820690883:my-topic"
           :endpoint "arn:aws:lambda:us-east-1:676820690883:function:my-function")

(clojure.pprint/pprint
  (list-subscriptions))

(publish :topic-arn "arn:aws:sns:us-east-1:676820690883:my-topic"
         :subject "test"
         :message (str "Todays is " (java.util.Date.))
         :message-attributes {"attr" "value"})

(unsubscribe :subscription-arn "arn:aws:sns:us-east-1:676820690883:my-topic:33fb2721-b639-419f-9cc3-b4adec0f4eda")

SQS

(ns com.example
  (:use [amazonica.aws.sqs]))

(create-queue :queue-name "my-queue"
              :attributes
                {:VisibilityTimeout 30 ; sec
                 :MaximumMessageSize 65536 ; bytes
                 :MessageRetentionPeriod 1209600 ; sec
                 :ReceiveMessageWaitTimeSeconds 10}) ; sec
;; full list of attributes at
;; http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/sqs/model/GetQueueAttributesRequest.html

(create-queue "DLQ")

(list-queues)

(def queue (find-queue "my-queue"))

(assign-dead-letter-queue
  queue
  (find-queue "DLQ")
  10)

(send-message queue "hello world")

(def msgs (receive-message queue))

(delete-message (-> msgs
                    :messages
                    first
                    (assoc :queue-url queue)))

(receive-message :queue-url queue
                 :wait-time-seconds 6
                 :max-number-of-messages 10
                 :delete true ;; deletes any received messages after receipt
                 :attribute-names ["All"])

(-> "my-queue" find-queue delete-queue)
(-> "DLQ" find-queue delete-queue)

StepFunctions

(ns com.example
  (:use [amazonica.aws.stepfunctions]))

;; This starts the execution; a worker must then poll get-activity-task-result
;; to pick up pending tasks that the state machine schedules for its activities.
(start-state-machine "{\"test\":\"test\"}" "arn:aws:states:us-east-1:xxxxxxxxxx:stateMachine:test-sf")

;; get-activity-task-result blocks until a task from a state machine execution
;; is available, so run it in a loop on the worker side of your app.
(let [tr (get-activity-task-result "arn:aws:states:us-east-1:xxxxxxxxx:activity:test-sf-activity")
      input (:input tr)
      token (:task-token tr)]
  (if (<validate input here....>)
    (mark-task-success "<json stuff to pipe back into the state machine....>" token)
    (mark-task-failure token)))
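
A minimal worker-loop sketch built from the calls above; handle-input is a hypothetical, app-supplied function that returns a JSON output string on success or nil on failure:

(defn run-worker [activity-arn handle-input]
  (loop []
    (let [{:keys [input task-token]} (get-activity-task-result activity-arn)]
      (when task-token
        (if-let [output (handle-input input)]
          (mark-task-success output task-token)
          (mark-task-failure task-token))))
    (recur)))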

Acknowledgements

YourKit is kindly supporting the Amazonica open source project with its full-featured Java Profiler. YourKit, LLC is the creator of innovative and intelligent tools for profiling Java and .NET applications. Take a look at YourKit's leading software products: YourKit Java Profiler and YourKit .NET Profiler.

YourKit logo

License

Copyright (C) 2013 Michael Cohen

Distributed under the Eclipse Public License, the same as Clojure.

amazonica's People

Contributors

andrioni, bonkydog, brabster, calebmacdonaldblack, caryfitzhugh, cemerick, codahale, glangford, harold, irinarenteria, jankronquist, jmglov, joekiller, joelittlejohn, jwhitlark, limess, lischenko, lvh, mcohen01, micrub, neilprosser, noprompt, paulbutcher, r4um, shevchuk, stevensurgnier, tcoupland, trgoodwin, vemv, weavenet


amazonica's Issues

Bug in progress-listener?

I ran into something that I think might be a bug, and am interested to know if you have any ideas: https://gist.github.com/anonymous/d4dad5ff47ae7e92f7a5

The problem is that the pending atom always winds up as -1.

I thought this was odd, so I tested it a bit in the REPL, and it appears that after eval'ing the buffer containing this code in Emacs and running a single upload, the :started event is never seen by the progress-listener. This only happens the first time the upload function is called.

Once a single upload has been processed, all future uploads seem to see the :started event, and inc the pending atom accordingly.

Any thoughts?

Thanks

Best method not found when passing credentials to a function

Using Amazonica 0.1.21.

I'm not sure if this is an issue, or I'm just using the library awkwardly.

If I put my Amazon credentials into environment variable the following works fine:

(use 'amazonica.aws.s3)
(get-object-metadata {:bucket-name "ahjones-test" :key "foo"})

However, if I want to pass in credentials as the first parameter I get a message that says that the best method can't be found.

(def cred {:access-key "key" :access-secret "secret"})
(get-object-metadata cred {:bucket-name "ahjones-test" :key "foo"})

The exception:

IllegalArgumentException Could not determine best method to invoke for get-object-metadata using arguments ({:secret-key "secret", :access-key "key"} {:key "foo", :bucket-name "ahjones-test"})
    amazonica.core/intern-function/fn--1458 (core.clj:705)
    user/eval1649 (form-init5154478138353900686.clj:1)
    clojure.lang.Compiler.eval (Compiler.java:6619)
    clojure.lang.Compiler.eval (Compiler.java:6582)
    clojure.core/eval (core.clj:2852)
    clojure.main/repl/read-eval-print--6588/fn--6591 (main.clj:259)
    clojure.main/repl/read-eval-print--6588 (main.clj:259)
    clojure.main/repl/fn--6597 (main.clj:277)
    clojure.main/repl (main.clj:277)
    clojure.tools.nrepl.middleware.interruptible-eval/evaluate/fn--591 (interruptible_eval.clj:56)
    clojure.core/apply (core.clj:617)
    clojure.core/with-bindings* (core.clj:1788)

However this is OK

(get-object-metadata cred :bucket-name "ahjones-test" :key "foo")

Coercion Error on Kinesis Data

I am attempting to test kinesis via:

(put-record cred "beatport-api-test" new-event event-key)

Both new-event and event-key are java.lang.String. I am getting the following error:

Caused by: java.lang.IllegalArgumentException: No coercion is available to turn {"response":{"status":200,"headers":{"link":"</search?q=hee&group-by=kind&page=1>"},"body":{"list":[],"track":[],"release":[],"mix":[],"genre":[],"account":[],"best-match":null}},"ip":"127.0.0.1","user-agent":"","method":"get","events":"","duration":"369ms","http-server":"org.eclipse.jetty.server.HttpInput@3b702644","function-times":{},"id":"2014-07-16-api-usw1a-001-00000000","action":"Request","time-unix":1405563461,"uri":"/search","user":"newport"} into an object of type class java.nio.ByteBuffer

(Where new-event is the {"response ... newport"} portion, a simulated log event encoded as JSON, which will need to be processed as text by the consumer.)

Hopefully, there is something simple that I am missing. But I have reviewed the Readme, and it seems the next troubleshooting step would be to coerce new-event to java.nio.ByteBuffer myself. Any help is appreciated. Thanks.

Amazonica prints the AWS credentials when creating a client

When running the first SimpleDB select query, the library prints the credentials (secret and all):

user> (select cred :select-expression "select count(*) from `Users`")
#<BasicAWSCredentials com.amazonaws.auth.BasicAWSCredentials@72cb56cd>
{:class com.amazonaws.auth.BasicAWSCredentials, :AWSSecretKey XXXXXXXXXXXXXX :AWSAccessKeyId XXXXXXXXXXXXXX}
{:items [{:name "Domain", :attributes [{:name "Count", :value "257"}]}]}

The middle two lines are printed, while the last is the returned value.

Subsequent requests don't print them.

with-credentials throws exception

Hey guys, I'm getting an exception using with-credentials:

clojure.lang.ArityException: Wrong number of args (2) passed to: core$amazon-client-STAR-

Here's the code:

(ns paddleguru.util.aws
  (:require [environ.core :refer [env]]
            [amazonica.core :refer [with-credential]]
            [amazonica.aws.s3 :as s3]
            [amazonica.aws.s3transfer :as s3t]))

(with-credential [(:aws-access-key-id env)
                  (:aws-access-key-secret env)
                  "us-west-1"]
  (s3/list-buckets))

The same code works great without the wrapping with-credential.

Java 1.6 Clojar release

Hi,

I would like to run Amazonica against Java 1.6.
The latest Clojars release has been compiled with Java 1.7.
I'm not sure whether this is because the Amazon Java bindings enforce it.

If it is possible, could we get a release that works with Java 1.6?

Thanks,

Getting "Unable to execute HTTP request: peer not authenticated" exception

Any idea why I'm getting this exception? :-)

Jul 29, 2013 3:02:06 PM com.amazonaws.http.AmazonHttpClient executeHelper
INFO: Unable to execute HTTP request: peer not authenticated
javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated
at sun.security.ssl.SSLSessionImpl.getPeerCertificates(SSLSessionImpl.java:397)
at org.apache.http.conn.ssl.AbstractVerifier.verify(AbstractVerifier.java:128)
at org.apache.http.conn.ssl.SSLSocketFactory.connectSocket(SSLSocketFactory.java:572)
at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:180)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:294)
at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:641)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:480)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:784)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:315)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:199)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:2994)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:800)
at com.amazonaws.services.s3.AmazonS3Client.getObjectMetadata(AmazonS3Client.java:780)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:613)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:613)
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
at amazonica.core$fn_call$fn__7047.invoke(core.clj:589)
at amazonica.core$intern_function$fn__7059.doInvoke(core.clj:629)
at clojure.lang.RestFn.invoke(RestFn.java:436)
...
SSLPeerUnverifiedException peer not authenticated sun.security.ssl.SSLSessionImpl.getPeerCertificates

no matching method on transferManager

One more for you:

(s3t/upload (bucket-name)
              (UUID/randomUUID)
              (file "/Users/sritchie/Desktop/20131009-TWITTER-ENGINEERS-016edit-660x824.jpg"))

;; CompilerException java.lang.IllegalArgumentException: No matching method found: setRegion for class com.amazonaws.services.s3.transfer.TransferManager, compiling:(form-init6963743079950413614.clj:2:3) 

with-credential destructuring problem

hi,

I came across this problem with the macro with-credential

(defmacro with-credential
  "Per invocation binding of credentials for ad-hoc
  service calls using alternate user/password combos
  (and endpoints)."
  [[a b c] & body]
  `(binding [*credentials* ~(keys->cred a b c)]
    (do ~@body)))

In this macro you destructure the [a b c] as a vector,
and this works fine when used as

(with-credential ["a" "b" "c"]
     (comment "foo"))

however if you try to use in this context

(defn get-my-credentials []
   ["a" "b" "c"])

(with-credential (get-my-credentials)
     (comment "foo"))

it is going to fail because the macro will try to destructure (get-my-credentials)
as a list with one element instead of evaluating the function call.

This is the macroexpand result.

(clojure.core/binding [amazonica.core/*credentials* {:access-key
                                                     get-my-credentials,
                                                     :secret-key nil}]
  (do (comment "foo")))

To fix this I suggest removing the destructuring of the vector in the macro
and calling keys->cred on the unquote (~) of the credential argument.

best regards
Bruno
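
A minimal sketch of the suggested fix, assuming keys->cred can be applied at runtime to the evaluated credential form:

(defmacro with-credential
  "Per invocation binding of credentials, evaluating the credential
   form at runtime rather than destructuring it at macroexpansion time."
  [cred & body]
  `(binding [*credentials* (apply keys->cred ~cred)]
     (do ~@body)))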

Premature end of file while uploading to S3

I'm calling:

(amazonica.aws.s3/put-object bucket-name key-name (io/input-stream a-file) {:some :metadata} )

com.amazonaws.AmazonClientException: Unable to unmarshall error response (Premature end of file.). Response Code: 400, Response Text: Bad Request
    at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:792)
    at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3566)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1434)
    at com.amazonaws.services.s3.AmazonS3Client.putObject(AmazonS3Client.java:1275)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93)
    at clojure.lang.Reflector.invokeInstanceMethod(Reflector.java:28)
    at amazonica.core$fn_call$fn__1614.invoke(core.clj:718)
    at amazonica.core$intern_function$fn__1633.doInvoke(core.clj:769)
    at clojure.lang.RestFn.invoke(RestFn.java:457)

I'm pretty sure this is caused by the S3Client being GC'd before the file is finished uploading, in amazonica.core/fn-call.

References:

Docs and examples

I think this great library needs some more documentation and examples. For example, I've been trying (and failing) to write a correct amazonica.aws.dynamodbv2/batch-write-item request for an hour or so now. And the fact that the API is automatically generated from the AWS API makes it harder (at least for me) to understand how to do it.

I'd be happy to help with the stuff I'm using (mainly dynamodb and simpledb) but first I have to understand it myself :)

Use Amazonica through an HTTP proxy

I'd like to be able to use Amazonica through an HTTP proxy. The Amazon*Clients support using a proxy by passing a suitably configured ClientConfiguration object to their constructors.

I'm not sure what the best interface would be. Perhaps it'd be easiest to read system properties and environment variables to find http proxy configuration, but I'm not sure that's the best solution.

An alternative would be to create a with-configuration macro that sets a dynamic var with config options.

Is this something that you'd consider adding to Amazonica? I'd be happy to come up with a pull request if you are interested.
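
For what it's worth, the credential map already accepts a :client-config entry (see the DynamoDB region issue below for an example), which may already cover the proxy case:

(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           ;; proxy settings passed through to ClientConfiguration
           :client-config {:proxy-host "my-proxy"
                           :proxy-port 8080}})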

IAM and STS

Does amazonica support IAM and STS services as well?

Trouble deleting a dynamodb item

Hello!

I'm having some issues when deleting a DynamoDB item using amazonica.aws.dynamodbv2.

An example:

(delete-item :table-name "my-table" :key {:id {:s "12345"}})

This fails with a java.lang.IllegalArgumentException: null exception.

Any ideas here? I am wracking my brain trying to understand what I am doing wrong. Thanks!

Enhancement idea - Kinesis and core.async

Background

There would be a number of benefits for some Clojure apps if a Kinesis shard could be presented as a Clojure core.async channel. Delivering shards as channels would create new options for Clojure stream consumers, beyond the limited Kinesis notion of worker and record processor and the bandwidth and other limits applied to shards.

Here is an idea for one method of doing this in Amazonica, in case it is useful.

(Disclaimer - This is a rough sketch based on an inexpert read of the AWS documentation and what I understand so far of core.async.)

Step 1 - Basic Kinesis shard->core.async channel

  • A Kinesis consumer app using core.async creates a buffered shard channel and passes it to Amazonica, along with the usual starting parameters (similar to calling worker in kinesis.clj).
  • a modified record processor (similar to processRecords in processor-factory) performs blocking writes to the channel for each Kinesis record (rather than calling a processor function for each record)
  • the core.async consumer app reads from the channel to process the shard data

This of course is insufficient on its own - by simply dumping shard records on a channel, we have lost the ability to know when each record is "done". We don't know when the records will be read from the channel, or when they will be processed. So a new mechanism is required to restore the ability to checkpoint sequence numbers in the shard.

Step 2 - Checkpointing

Only the core.async app knows when a record is really done; one method to communicate "doneness" back to Amazonica is by using another channel. The app can write to this channel to send completed sequence numbers back to Amazonica.

  • Amazonica creates a checkpoint channel, for example (chan (sliding-buffer 1)). A sliding buffer channel will drop oldest values; in this case only the latest put survives.
  • whenever the data for a sequence number is fully processed (or whenever it chooses), the app puts the sequence number on the checkpoint channel; it should never block, and Amazonica is only ever interested in the latest value

Step 3 - Complete version with checkpointing

To combine the previous steps, a variant of processor-factory does the following:

  1. startup - app creates buffered shard channel and passes to Amazonica. Amazonica creates and returns checkpoint channel (which app can use as it sees fit, or ignore)
  2. receive records pushed from Kinesis (as usual) and do a blocking write for each record to the shard channel
  3. if a sufficient time interval has passed since the last checkpoint, do a zero timeout get from the checkpoint channel; if a value is available, call the Kinesis checkpoint(String sequenceNumber) variant of IRecordProcessorCheckpointer
  4. repeat from step 2

For step 3: note that checkpoint() (no arguments) currently used in kinesis.clj checkpoints the progress at the last record that was delivered to the record processor; with channels, we want to checkpoint a specific sequence number. This capability is added in the Kinesis Client Library version 1.1.

One way (not sure if this is idiomatic) to get the latest value from the checkpoint channel, without waiting if nothing is available:

(alts!! [checkpoint-channel (timeout 0)] :priority true)

If not nil, the returned value is checkpointed.
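
Putting steps 2 and 3 together, the modified processor body might look roughly like this (a sketch with illustrative names, not actual Amazonica code):

(require '[clojure.core.async :refer [>!! alts!! timeout]])

(defn make-processor [shard-chan checkpoint-chan]
  (fn [records checkpointer]
    ;; blocking write of each Kinesis record to the shard channel
    (doseq [r records]
      (>!! shard-chan r))
    ;; zero-timeout poll for the latest app-confirmed sequence number
    (let [[seq-num _] (alts!! [checkpoint-chan (timeout 0)] :priority true)]
      (when seq-num
        ;; checkpoint(String) is available as of Kinesis Client Library 1.1
        (.checkpoint checkpointer seq-num)))))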

See also:

https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/IRecordProcessorCheckpointer.java

https://forums.aws.amazon.com/message.jspa?messageID=531052

Kinesis checkpointing and KinesisClientLibDependencyException (Kinesis Client 1.1.0)

One of the possible exceptions thrown by IRecordProcessorCheckpointer checkpoint() is KinesisClientLibDependencyException according to

https://github.com/awslabs/amazon-kinesis-client/blob/master/src/main/java/com/amazonaws/services/kinesis/clientlibrary/interfaces/IRecordProcessorCheckpointer.java

(NB. This is version 1.1.0 of the Kinesis client library - so possibly the exception is new).

This exception is not currently handled in kinesis.clj mark-checkpoint. The Amazon comments suggest "...the application can backoff and retry."

Should KinesisClientLibDependencyException be handled similar to ThrottlingException?

project.clj says
[com.amazonaws/amazon-kinesis-client "1.0.0"]

So perhaps this doesn't apply just yet.

DynamoDB client always talks to "dynamodb.us-east-1.amazonaws.com"

Is there any way to change the region endpoint for the DynamoDB client? The following code creates table "TestTable" in the US East region, even though I set :endpoint to "eu-west-1":

(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           :endpoint "eu-west-1"
           :client-config {:proxy-host "my-proxy"
                           :proxy-port 8080}})

(create-table cred :table-name "TestTable"
  ; ....
  )

amazonica 0.1.22
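
A possible workaround, assuming clients of that vintage only honor a full endpoint hostname rather than a bare region name:

;; assumption: older clients require the full regional hostname
(def cred {:access-key "aws-access-key"
           :secret-key "aws-secret-key"
           :endpoint "https://dynamodb.eu-west-1.amazonaws.com"
           :client-config {:proxy-host "my-proxy"
                           :proxy-port 8080}})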

Default checkpoint interval

worker in kinesis.clj has the code

:or {checkpoint 60000 ...

I'm not clear on exactly what's happening with opts, but is there a mismatched-units bug here, since processor-factory multiplies checkpoint by 1000 (presumably to convert from seconds to milliseconds)?

(reset! next-check (+' (System/currentTimeMillis) (*' 1000 checkpoint))))

Authenticating with IAM Roles?

I'd like to be able to use amazonica in applications running on EC2 instances. Instead of storing the credentials in the code or a config file, I'd like to use an IAM role on the instance to authenticate to the API.

The Java SDK developer guide describes how the SDK will do this:

If your application software constructs a client object for an AWS service using an overload of the constructor that does not take any parameters, the constructor searches the "credentials provider chain." The credentials provider chain is the set of places where the constructor attempts to find credentials if they are not specified explicitly as parameters. For Java, the credentials provider chain is:

  • Environment Variables: AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
  • Java System Properties: aws.accessKeyId and aws.secretKey
  • Instance Metadata Service, which provides the credentials associated with the IAM role for the EC2 instance

I did some experimenting with this yesterday and it wasn't obvious to me. Is it possible to use IAM roles with amazonica?

Thanks,
Dave
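
Since the chain described above is consulted whenever no credentials are supplied explicitly, a bare call with no credential argument should authenticate via the instance's IAM role on EC2 (a sketch, not verified here):

(require '[amazonica.aws.ec2 :as ec2])

;; no credential argument: the SDK chain tries env vars, system
;; properties, then the EC2 instance metadata service (IAM role)
(ec2/describe-instances)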

Kinesis get-records Testing Issue

(Thanks for resolving the put-record issue.)

I am now having an issue consuming the records written to Kinesis. This command:

(kinesis/get-records
  cred
  :shard-iterator shard-iterator
  :limit batch-limit)

Results in this error:

Exception in thread "main" java.lang.IllegalArgumentException: No value supplied for key: 2, compiling:(/private/var/folders/g8/1b2_h6yx7t7csbtr7g4x9qvh0000gn/T/form-init4194254269444512279.clj:1:142)

Where batch-limit is set to "2". When I removed :limit, the same error cited the shard-iterator value as the key, so it seems like an argument counting/position issue. Thanks.

How to correctly invoke withRange from the S3 getObject call?

I want to use the range feature of the AWS SDK, but I'm not sure if this is supported by Amazonica as it is an option that requires two arguments instead of one.

I tried the following:

(s3/get-object :bucket-name "my-bucket" :key "my-key" :range 1000)
; IllegalArgumentException wrong number of arguments
; sun.reflect.NativeMethodAccessorImpl.invoke0 (NativeMethodAccessorImpl.java:-2)
; sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:39)
; sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25)
; java.lang.reflect.Method.invoke (Method.java:597)
; sun.reflect.GeneratedMethodAccessor15.invoke (:-1)
; sun.reflect.DelegatingMethodAccessorImpl.invoke (DelegatingMethodAccessorImpl.java:25)
; java.lang.reflect.Method.invoke (Method.java:597)
; clojure.lang.Reflector.invokeMatchingMethod (Reflector.java:93)
; clojure.lang.Reflector.invokeInstanceMethod (Reflector.java:28)
; amazonica.core/invoke (core.clj:444)

(s3/get-object :bucket-name "my-bucket" :key "my-key" :range [0 1000])
;; Same error

S3 client set-object-acl(String, String, CannedAccessControlList) does not work

I'm trying to invoke the following AmazonS3Client method :

http://amzn.to/1hDfA05

public void setObjectAcl(String bucketName, String key, CannedAccessControlList acl)

Here is how I'm invoking it in my code (where creds is a map with my aws credentials and client config):

(require '[amazonica.aws.s3 :as amazonica-s3])
(import com.amazonaws.services.s3.model.CannedAccessControlList)
(amazonica-s3/set-object-acl creds "com.test.bucket" "test/key" CannedAccessControlList/PublicRead)

IllegalArgumentException Don't know how to create ISeq from: com.amazonaws.services.s3.model.CannedAccessControlList  clojure.lang.RT.seqFrom (RT.java:494)

I have a feeling that its trying to invoke a similar method in the s3 client with the following signature instead:

public void setObjectAcl(String bucketName, String key, AccessControlList acl)

Does the Amazonica API match the underlying client API by the types of the args or just by their count?

PoolingClientConnectionManager class def not found

I'm cleaning up a lein project's deps, and after shuffling around some namespaces and updating the amazonica project.clj dependency to use 0.1.22, I'm getting a new error when calling s3/get-object:

java.lang.NoClassDefFoundError: org/apache/http/impl/conn/PoolingClientConnectionManager
 at com.amazonaws.http.ConnectionManagerFactory.createPoolingClientConnManager (ConnectionManagerFactory.java:26)
    com.amazonaws.http.HttpClientFactory.createHttpClient (HttpClientFactory.java:87)
    com.amazonaws.http.AmazonHttpClient.<init> (AmazonHttpClient.java:121)
    com.amazonaws.AmazonWebServiceClient.<init> (AmazonWebServiceClient.java:66)
    com.amazonaws.services.s3.AmazonS3Client.<init> (AmazonS3Client.java:304)
    com.amazonaws.services.s3.AmazonS3Client.<init> (AmazonS3Client.java:286)
    sun.reflect.NativeConstructorAccessorImpl.newInstance0 (NativeConstructorAccessorImpl.java:-2)
    sun.reflect.NativeConstructorAccessorImpl.newInstance (NativeConstructorAccessorImpl.java:57)
    sun.reflect.DelegatingConstructorAccessorImpl.newInstance (DelegatingConstructorAccessorImpl.java:45)
    java.lang.reflect.Constructor.newInstance (Constructor.java:526)
    clojure.lang.Reflector.invokeConstructor (Reflector.java:180)
    amazonica.core$create_client.invoke (core.clj:139)
    amazonica.core$amazon_client_STAR_.invoke (core.clj:187)
    clojure.lang.AFn.applyToHelper (AFn.java:167)
    clojure.lang.AFn.applyTo (AFn.java:151)
    clojure.core$apply.invoke (core.clj:617)
    clojure.core$memoize$fn__5049.doInvoke (core.clj:5735)
    clojure.lang.RestFn.invoke (RestFn.java:436)
    amazonica.core$candidate_client.invoke (core.clj:661)
    amazonica.core$fn_call$fn__1670.invoke (core.clj:671)
    clojure.lang.Delay.deref (Delay.java:33)
    clojure.core$deref.invoke (core.clj:2128)
    amazonica.core$fn_call$fn__1672.invoke (core.clj:674)
    amazonica.core$intern_function$fn__1687.doInvoke (core.clj:718)
.
.
.

I tried resetting my amazonica dependency back to the previous version 0.1.15, but I'm still getting the error. It looks like the aws-sdk jar updated the ConnectionManagerFactory code when it bumped to version 1.5.0, which amazonica was including a bit before version 0.1.15 if I'm not mistaken.

Any idea why this is happening now? As part of my project refactoring, I removed some dependencies from the project. Is there any reason why amazonica would be dependent on any other libraries being present in the project?

No method in multimethod 'fmap' for dispatch value: class clojure.lang.PersistentVector$ChunkedSeq

I'm getting this error when attempting to create a launch configuration. You can recreate the error by doing the following:

(use 'amazonica.aws.autoscaling)
(create-launch-configuration :security-groups (seq ["hello" "world"]))

... results in...

IllegalArgumentException No method in multimethod 'fmap' for dispatch value: class clojure.lang.PersistentVector$ChunkedSeq  clojure.lang.MultiFn.getFn (MultiFn.java:160)

In my case the contents of my :security-groups param have been generated by map. This also happens when using create-auto-scaling-group, so I have a feeling it's likely to happen in a lot of places.

I'm using Clojure 1.6.0 but have tried with 1.5.1 as well. The stack trace points to the error occurring in org.clojure/algo.generic so I tried updating that from 0.1.0 to 0.1.2 but that didn't help.

I can eliminate the problem by wrapping my sequence in vec but that's not an ideal solution. Am I doing something wrong here or is this something Amazonica could help with?

Credentials chain should include ~/.aws/credentials

Amazonica's section on auth says:

The default authentication scheme is to use the chained Provider class from the AWS SDK, whereby authentication is attempted in the following order:

  • Environment Variables - AWS_ACCESS_KEY_ID and AWS_SECRET_KEY
  • Java System Properties - aws.accessKeyId and aws.secretKey
  • Instance profile credentials delivered through the Amazon EC2 metadata service

Only listing 3 options, but the AWS Default chain (http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/auth/DefaultAWSCredentialsProviderChain.html) actually lists 4. The missing one is:

  • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI

This isn't just a documentation discrepancy; it appears that Amazonica really doesn't look for this file in its chain.
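
In the meantime, if Amazonica accepts an AWSCredentialsProvider in place of the credential map (an assumption, not verified here), the profile file could be used explicitly:

(import 'com.amazonaws.auth.profile.ProfileCredentialsProvider)

;; assumption: a provider instance is accepted wherever a cred map is
(amazonica.aws.s3/list-buckets (ProfileCredentialsProvider.))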

Error messages aren't helpful

As an example, see #60.

I don't have experience with reflection in Java, but it seems plausible to me that unhelpful error messages would just come with the territory. If so, feel free to just close this issue as "Won't fix" or something.

P.S. I mean this more as an FYI than as a complaint

How to set region when using IAM profile credentials?

It's not at all obvious to me from the docs or code how to accomplish this. Using ENV Vars I do this:

(def aws_access_key_id
  (.getAWSAccessKeyId
    (.getCredentials (amazonica.core/get-credentials :cred))))
(def aws_secret_key
  (.getAWSSecretKey
    (.getCredentials (amazonica.core/get-credentials :cred))))

(defcredential aws_access_key_id aws_secret_key (:region options))

However, if this happens while on an instance with IAM profile, an error is thrown about a missing security token. Is there some simple alternative that I'm missing?
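
One thing worth trying (an assumption, not verified: a credential map containing only :endpoint may leave key resolution to the provider chain while still setting the region):

;; assumption: omitting the keys defers to the default provider chain
(describe-instances {:endpoint "eu-west-1"})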

The tests don't run

I've tried to run the tests, but I get an exception parse-args already refers to: #'amazonica.aws.glacier/parse-args in namespace: amazonica.test.core

I ran the tests using lein test.

The full stack trace is:

Exception in thread "main" java.lang.IllegalStateException: parse-args already refers to: #'amazonica.aws.glacier/parse-args in namespace: amazonica.test.core
        at clojure.lang.Namespace.warnOrFailOnReplace(Namespace.java:88)
        at clojure.lang.Namespace.reference(Namespace.java:110)
        at clojure.lang.Namespace.refer(Namespace.java:168)
        at clojure.core$refer.doInvoke(core.clj:3850)
        at clojure.lang.RestFn.invoke(RestFn.java:410)
        at clojure.lang.AFn.applyToHelper(AFn.java:161)
        at clojure.lang.RestFn.applyTo(RestFn.java:132)
        at clojure.core$apply.invoke(core.clj:619)
        at clojure.core$load_lib.doInvoke(core.clj:5394)
        at clojure.lang.RestFn.applyTo(RestFn.java:142)
        at clojure.core$apply.invoke(core.clj:619)
        at clojure.core$load_libs.doInvoke(core.clj:5417)
        at clojure.lang.RestFn.applyTo(RestFn.java:137)
        at clojure.core$apply.invoke(core.clj:621)
        at clojure.core$use.doInvoke(core.clj:5507)
        at clojure.lang.RestFn.invoke(RestFn.java:1789)
        at amazonica.test.core$eval496$loading__4910__auto____497.invoke(core.clj:1)
        at amazonica.test.core$eval496.invoke(core.clj:1)
        at clojure.lang.Compiler.eval(Compiler.java:6619)
        at clojure.lang.Compiler.eval(Compiler.java:6608)
        at clojure.lang.Compiler.load(Compiler.java:7064)
        at clojure.lang.RT.loadResourceScript(RT.java:370)
        at clojure.lang.RT.loadResourceScript(RT.java:361)
        at clojure.lang.RT.load(RT.java:440)
        at clojure.lang.RT.load(RT.java:411)
        at clojure.core$load$fn__5018.invoke(core.clj:5530)
        at clojure.core$load.doInvoke(core.clj:5529)
        at clojure.lang.RestFn.invoke(RestFn.java:408)
        at clojure.core$load_one.invoke(core.clj:5336)
        at clojure.core$load_lib$fn__4967.invoke(core.clj:5375)
        at clojure.core$load_lib.doInvoke(core.clj:5374)
        at clojure.lang.RestFn.applyTo(RestFn.java:142)
        at clojure.core$apply.invoke(core.clj:619)
        at clojure.core$load_libs.doInvoke(core.clj:5413)
        at clojure.lang.RestFn.applyTo(RestFn.java:137)
        at clojure.core$apply.invoke(core.clj:619)
        at clojure.core$require.doInvoke(core.clj:5496)
        at clojure.lang.RestFn.applyTo(RestFn.java:137)
        at clojure.core$apply.invoke(core.clj:619)
        at user$eval85.invoke(form-init8712192415606825579.clj:1)
        at clojure.lang.Compiler.eval(Compiler.java:6619)
        at clojure.lang.Compiler.eval(Compiler.java:6609)
        at clojure.lang.Compiler.load(Compiler.java:7064)
        at clojure.lang.Compiler.loadFile(Compiler.java:7020)
        at clojure.main$load_script.invoke(main.clj:294)
        at clojure.main$init_opt.invoke(main.clj:299)
        at clojure.main$initialize.invoke(main.clj:327)
        at clojure.main$null_opt.invoke(main.clj:362)
        at clojure.main$main.doInvoke(main.clj:440)
        at clojure.lang.RestFn.invoke(RestFn.java:421)
        at clojure.lang.Var.invoke(Var.java:419)
        at clojure.lang.AFn.applyToHelper(AFn.java:163)
        at clojure.lang.Var.applyTo(Var.java:532)
        at clojure.main.main(main.java:37)
Tests failed.

Can't get S3 deleteObjects to work

I'm having trouble getting delete-objects to work. For delete-object (singular), the following seems to work:

(delete-object {:bucket-name "my-bucket" :key "key1"})

So by analogy between DeleteObjectRequest and DeleteObjectsRequest, I expected this to work:

(delete-objects {:bucket-name "my-bucket" :keys ["key1" "key2"]})

but instead I get:

UnsupportedOperationException: nth not supported on this type: Character
    at clojure.lang.RT.nthFrom(RT.java:857)

Am I wrong to expect delete-objects to work this way?

Add CloudSearchDomain

Only the administrative functions for CloudSearchV2 are exposed. I would like to use the search functions in CloudSearchDomain to actually build and execute a query. Is it possible to add these?

SimpleDB functionality doesn't work

I know it's probably not a surprise that SimpleDB doesn't work (given that you don't include an MVS in the readme) but I figured it might be useful for you to track the task in an issue anyway.

(require '[amazonica.core :refer [defcredential]])
(defcredential "AccessKeyID" "SecretKey")
(require '[amazonica.aws.simpledb :as sdb])
(sdb/put-attributes "domain"
                    "devapiTue Aug 05 23:24:36 UTC 2014"
                    [{:name "args", :value ["Test message"]}
                     {:name "instant", :value #inst "2014-08-05T23:24:36.701-00:00"}
                     {:name "ns", :value "some-api.main"}
                     {:name "file", :value "/tmp/form-init6934682238825582339.clj"}
                     {:name "hostname", :value "devapi"}
                     {:name "output", :value "2014-Aug-05 23:24:36 +0000 devapi WARN [some-api.main] - Test message"}
                     {:name "prefix", :value "2014-Aug-05 23:24:36 +0000 devapi WARN [some-api.main]"}
                     {:name "level", :value :warn}
                     {:name "line", :value nil}
                     {:name "ap-config", :value {}}
                     {:name "error?", :value false}
                     {:name "throwable", :value nil}
                     {:name "timestamp", :value "2014-Aug-05 23:24:36 +0000"}
                     {:name "message", :value "Test message"}])

And the error is:

IllegalArgumentException Could not determine best method to invoke for put-attributes using arguments ("spike_for_logging" "devapiTue Aug 05 23:25:28 UTC 2014" ({:name "args", :value "[\"Test message\"]"} {:name "instant", :value "Tue Aug 05 23:25:28 UTC 2014"} {:name "ns", :value "some-api.main"} {:name "file", :value "/tmp/form-init6934682238825582339.clj"} {:name "hostname", :value "devapi"} {:name "output", :value "2014-Aug-05 23:25:28 +0000 devapi WARN [some-api.main] - Test message"} {:name "prefix", :value "2014-Aug-05 23:25:28 +0000 devapi WARN [some-api.main]"} {:name "level", :value ":warn"} {:name "line", :value ""} {:name "ap-config", :value "{}"} {:name "error?", :value "false"} {:name "throwable", :value ""} {:name "timestamp", :value "2014-Aug-05 23:25:28 +0000"} {:name "message", :value "Test message"}))  amazonica.core/intern-function/fn--11248 (core.clj:780)

s3 user-metadata is inconsistent

So Amazonica is a really slick library, particularly the bean-to-Clojure map mapping. I'm writing an s3sync utility that I hope will be more flexible/functional than the commonly used s3sync.rb. Being able to express things as Clojure maps makes things work really well, but user-metadata requires a workaround if you're updating metadata that you pulled from S3.

In particular, :user-metadata is a keyword -> String mapping when downloaded, but must be a String -> String mapping when uploaded.

Example below

user=> (amazonica.aws.s3/get-object-metadata my-credentials my-website "404.html")
{:content-length 6452, :last-modified #<DateTime 2013-04-04T13:21:30.000-05:00>, :content-type "text/html", :raw-metadata {:Content-Type "text/html", :Accept-Ranges "bytes", :Last-Modified #<DateTime 2013-04-04T13:21:30.000-05:00>, :Content-Length 6452, :ETag "2d477e36be6f149b4c559591a6201774"}, :etag "2d477e36be6f149b4c559591a6201774", :user-metadata {:foo "bar"}}

user=> (amazonica.aws.s3/copy-object my-credentials :source-bucket-name my-website :destination-bucket-name my-website :source-key "404.html" :destination-key "404.html" :new-object-metadata {:content-type "text/html" :user-metadata {:foo "bar"}})
ClassCastException clojure.lang.Keyword cannot be cast to java.lang.String  com.amazonaws.services.s3.AmazonS3Client.populateRequestMetadata (AmazonS3Client.java:2634)

user=> (amazonica.aws.s3/copy-object my-credentials :source-bucket-name my-website :destination-bucket-name my-website :source-key "404.html" :destination-key "404.html" :new-object-metadata {:content-type "text/html" :user-metadata {"foo" "bar"}})
{:etag "2d477e36be6f149b4c559591a6201774", :last-modified-date #<DateTime 2013-04-04T13:21:58.000-05:00>}

S3: issues with put-object and an input stream

I'm attempting to do this, which seems to fit the Java SDK spec:

(let [bytes (.getBytes "foo")]
  (put-object
   :bucket-name "my-bucket"
   :key "foo"
   :input (ByteArrayInputStream. bytes)
   :metadata {:content-length (count bytes)
              :conten-type "text/plan"}))

However I receive a cryptic error and cannot figure out how to debug it:

java.lang.NullPointerException: null
         AmazonS3Client.java:1130 com.amazonaws.services.s3.AmazonS3Client.putObject
                 (Unknown Source) sun.reflect.GeneratedMethodAccessor25.invoke
DelegatingMethodAccessorImpl.java:43 sun.reflect.DelegatingMethodAccessorImpl.invoke
                  Method.java:601 java.lang.reflect.Method.invoke
                 (Unknown Source) sun.reflect.GeneratedMethodAccessor8.invoke
DelegatingMethodAccessorImpl.java:43 sun.reflect.DelegatingMethodAccessorImpl.invoke
                  Method.java:601 java.lang.reflect.Method.invoke
                Reflector.java:93 clojure.lang.Reflector.invokeMatchingMethod
                Reflector.java:28 clojure.lang.Reflector.invokeInstanceMethod
                     core.clj:589 amazonica.core/fn-call[fn]
                     core.clj:629 amazonica.core/intern-function[fn]
                  RestFn.java:619 clojure.lang.RestFn.invoke
               NO_SOURCE_FILE:114 canary.sensor/eval5799
               Compiler.java:6619 clojure.lang.Compiler.eval
               Compiler.java:6582 clojure.lang.Compiler.eval
                    core.clj:2852 clojure.core/eval9 lighttable.hub.clj.eval/->result
                     AFn.java:163 clojure.lang.AFn.applyToHelper
                     AFn.java:151 clojure.lang.AFn.applyTo
                     core.clj:619 clojure.core/apply
                    core.clj:2396 clojure.core/partial[fn]
                  RestFn.java:408 clojure.lang.RestFn.invoke
                    core.clj:2485 clojure.core/map[fn]
                  LazySeq.java:42 clojure.lang.LazySeq.sval
                  LazySeq.java:60 clojure.lang.LazySeq.seq
                      RT.java:484 clojure.lang.RT.seq
                     core.clj:133 clojure.core/seq
                    core.clj:2523 clojure.core/filter[fn]
                  LazySeq.java:42 clojure.lang.LazySeq.sval
                  LazySeq.java:60 clojure.lang.LazySeq.seq
                      RT.java:484 clojure.lang.RT.seq
                     core.clj:133 clojure.core/seq
                    core.clj:2780 clojure.core/dorun
                    core.clj:2796 clojure.core/doall
                     eval.clj:150 lighttable.hub.clj.eval/eval-clj[fn]
                    core.clj:1836 clojure.core/binding-conveyor-fn[fn]
                      AFn.java:18 clojure.lang.AFn.call
              FutureTask.java:334 java.util.concurrent.FutureTask$Sync.innerRun
              FutureTask.java:166 java.util.concurrent.FutureTask.run
     ThreadPoolExecutor.java:1145 java.util.concurrent.ThreadPoolExecutor.runWorker
      ThreadPoolExecutor.java:615 java.util.concurrent.ThreadPoolExecutor$Worker.run
                  Thread.java:722 java.lang.Thread.run

I notice that the :file approach to uploading seems to work well, but I need to upload an input stream. I'm mostly trying to understand if this is on my end or something deeper.

Any pointers?
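
Two details stand out in the snippet (guesses, not a confirmed diagnosis): the key should be :input-stream rather than :input, and :conten-type is a typo. Compare the working put-object example earlier in this README:

(let [some-bytes (.getBytes "foo")]
  (put-object :bucket-name "my-bucket"
              :key "foo"
              :input-stream (java.io.ByteArrayInputStream. some-bytes)
              :metadata {:content-length (count some-bytes)
                         :content-type "text/plain"}))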

Example using cloudformation

Since I couldn't figure out where else to make this request, I'm using this medium. Do you have any examples of using CloudFormation from Amazonica? I'm especially having difficulty mapping the example from the Amazon website to Amazonica. I can't find how to specify a template via Amazonica's CloudFormation API, and I can't figure out the same for the Amazon client either.

I'll appreciate any help with this.

SQS: receive-message throws IllegalArgumentException

When I call (receive-message :queue-url "https://path.to.queue") I get an exception:

IllegalArgumentException No value supplied for key: https://path.to.queue
    clojure.lang.PersistentHashMap.create (PersistentHashMap.java:77)
    clojure.core/hash-map (core.clj:365)
    clojure.core/apply (core.clj:617)
    amazonica.aws.sqs/delete-on-receive (sqs.clj:34)
    clojure.lang.Var.invoke (Var.java:423)
    clojure.lang.Var.applyTo (Var.java:532)
    clojure.core/apply (core.clj:619)
    robert.hooke/compose-hooks/fn--1482 (hooke.clj:40)
    clojure.core/apply (core.clj:617)
    robert.hooke/run-hooks (hooke.clj:46)
    robert.hooke/prepare-for-hooks/fn--1487/fn--1488 (hooke.clj:54)
    clojure.lang.AFunction$1.doInvoke (AFunction.java:29)

Line 34 of delete-on-receive is a hook that deletes messages when they've been received if the :delete option is present.

It's possible to work around the bug with the following:

(use 'robert.hooke)
(with-hooks-disabled receive-message
    (receive-message :queue-url "https://path.to.queue"))

but of course you can't use the delete-on-receive functionality if you do this.

S3 `get-object` call doesn't coerce correctly

With amazonica 0.1.3 running

(get-object db-creds
            :bucket-name "db-bucket"
            :key "db/foo.txt")

doesn't work (invalid arity exception) but

(get-object db-creds
             "db-bucket"
             "db/foo.txt")

works.
Looks like the two-arg (String, String) Java method is getting called in favor of the one-arg GetObjectRequest method.
It's a bit surprising, so either the keyword example should work out of the box, or the second example should be put in the README.

[kinesis] updating checkpoint can overflow

The docs suggest that in order to do explicit checkpointing from your Kinesis record processor you should set the checkpoint to Long/MAX_VALUE and return true from process-records. However, later on in processor-factory, checkpoint and System/currentTimeMillis are added together, causing an integer overflow error.

https://github.com/mcohen01/amazonica/blob/master/src/amazonica/aws/kinesis.clj#L94

I suspect the quick answer is to update the docs (possibly describing the role that the checkpoint value plays). But I would add that there are two follow-up issues. First, it's not particularly clear what role :checkpoint plays as a time variable. Second, if it's going to be overloaded like this, it might make sense to be able to pass in an explicit value for "no automatic checkpointing", like {:checkpoint :no}, to make the distinction a little clearer.

Thanks!

Support asynchronous clients

Almost all clients have an asynchronous counterpart as listed below. It'd be useful to have them supported by amazonica.

Here's an example of using one of them in clojure.

As far as I can see, as a general rule, the async client implements two methods of the form:

Future<Void> methodNameFromTheSyncClientAsync(RequestClass aRequest);
Future<Void> methodNameFromTheSyncClientAsync(RequestClass aRequest, \
  AsyncHandler<RequestClass,ResultClass> asyncHandler);

for each method of the sync version of the client.

coerce exception while adding a rule to the security group

Hi,

I am trying to add a new rule to a security group I created.

(http://docs.aws.amazon.com/AWSSdkDocsJava/latest/DeveloperGuide/authorize-ingress.html)

(ns aws-infra.security-groups
  (:require [amazonica.core :as aws-core :refer [defcredential]]
            [amazonica.aws.ec2 :as aws-ec2]))

(aws-ec2/authorize-security-group-ingress
  :group-name "test-group"
  :ip-permissions [{:cidr-ip "21.21.22.23/32"
                    :ip-protocol "tcp"
                    :from-port "22"
                    :to-port "22"}])

I get the below exception, can you please help ?

java.lang.ClassCastException: java.lang.String cannot be cast to java.lang.Character
RT.java:1087 clojure.lang.RT.intCast
core.clj:846 clojure.core/int
core.clj:314 amazonica.core/coerce-value
core.clj:508 amazonica.core/invoke-method
AFn.java:160 clojure.lang.AFn.applyToHelper
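
The ClassCastException suggests the string port values are being coerced character-by-character; passing the ports as numbers may avoid it (a guess based on the stack trace):

(aws-ec2/authorize-security-group-ingress
  :group-name "test-group"
  :ip-permissions [{:cidr-ip "21.21.22.23/32"
                    :ip-protocol "tcp"
                    :from-port 22   ; numbers, not strings
                    :to-port 22}])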

Trouble creating access keys for a user; no-arg method matching before one-arg method?

This looks like a really awesome library---I like that you're using reflection to solve everything at once rather than try to wrap one service at a time.

I'm just diving in but seem to have hit a snag.
Here's a minimal example:

(ns scratch
  (:require amazonica.core)
  (:use amazonica.aws.identitymanagement))

(def creds {:access-key "root-access-key"
            :secret-key "root-secret-key"})

(create-user creds :user-name "db")
;;This works fine; user is created

(create-access-key creds :user-name "db")
;;This returns a new access key, but it's for the root account (i.e., the same account as creds), not for the new "db" account.

The problem is that the :user-name isn't being taken into account, so access keys are created for the same user that owns the creds used to make the request, not the specified new IAM user.

The docs for the underlying CreateAccessKeyRequest object seem to match the get/set method model that you're reflecting against, so I have no idea why it doesn't work.
It seems like the no-argument CreateAccessKeyRequest is matching first.

Kinesis record reading: thawing non-nippy data

I have Kinesis records containing Snappy-encoded blocks; decoding each block yields a sequence of strings like this:

"{:foo 42} "
"{:bar 3.14} "
...
"{:qaz [1 2 3 4]} "

So each string is an EDN value (a basic Clojure literal).

Amazonica seems to assume that data must be nippy-serialized. So when I try:

(get-records :shard-iterator (get-shard-iterator "my-stream"
                                                 "shardId-000000000003"
                                                 "TRIM_HORIZON"))

I get:

CompilerException java.lang.Exception: Thaw failed: Uncompressed data?, compiling:(form-init1883914083661306631.clj:1:9)

Perhaps I am missing it, but I don't see a simple way to tell Amazonica not to do that and instead give me raw bytes, which I could decompress and decode however I like.
So, is there a way to do that?

So far I see that you have an unwrap function that is used directly by get-records and indirectly by processor-factory. It would be wonderful if I could supply my own version of unwrap instead of the nippy-thawing one.
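
If such an escape hatch exists, it would presumably look something like this (a hypothetical :deserializer option; check the current kinesis namespace before relying on it):

;; hypothetical: supply a custom deserializer that returns raw bytes
(get-records :deserializer (fn [^java.nio.ByteBuffer bb]
                             (let [bs (byte-array (.remaining bb))]
                               (.get bb bs)
                               bs))
             :shard-iterator (get-shard-iterator "my-stream"
                                                 "shardId-000000000003"
                                                 "TRIM_HORIZON"))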

S3 list-objects throws org.xml.sax.SAXParseException

I'm trying to list the files in an S3 bucket using [amazonica "0.2.24"] and the following code:

(s3/list-objects :bucket-name "mybucket")

I get the exception listed below.

Caused by org.xml.sax.SAXParseException
Premature end of file.

  ErrorHandlerWrapper.java:  203  com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper/createSAXParseException
  ErrorHandlerWrapper.java:  177  com.sun.org.apache.xerces.internal.util.ErrorHandlerWrapper/fatalError
     XMLErrorReporter.java:  441  com.sun.org.apache.xerces.internal.impl.XMLErrorReporter/reportError
     XMLErrorReporter.java:  368  com.sun.org.apache.xerces.internal.impl.XMLErrorReporter/reportError
           XMLScanner.java: 1436  com.sun.org.apache.xerces.internal.impl.XMLScanner/reportFatalError

XMLDocumentScannerImpl.java: 1019 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl$PrologDriver/next
XMLDocumentScannerImpl.java: 606 com.sun.org.apache.xerces.internal.impl.XMLDocumentScannerImpl/next
XMLNSDocumentScannerImpl.java: 117 com.sun.org.apache.xerces.internal.impl.XMLNSDocumentScannerImpl/next
XMLDocumentFragmentScannerImpl.java: 510 com.sun.org.apache.xerces.internal.impl.XMLDocumentFragmentScannerImpl/scanDocument
XML11Configuration.java: 848 com.sun.org.apache.xerces.internal.parsers.XML11Configuration/parse
XML11Configuration.java: 777 com.sun.org.apache.xerces.internal.parsers.XML11Configuration/parse
XMLParser.java: 141 com.sun.org.apache.xerces.internal.parsers.XMLParser/parse
AbstractSAXParser.java: 1213 com.sun.org.apache.xerces.internal.parsers.AbstractSAXParser/parse
XmlResponsesSaxParser.java: 145 com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser/parseXmlInputStream
XmlResponsesSaxParser.java: 293 com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser/parseListBucketObjectsResponse
Unmarshallers.java: 76 com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller/unmarshall
Unmarshallers.java: 73 com.amazonaws.services.s3.model.transform.Unmarshallers$ListObjectsUnmarshaller/unmarshall
S3XmlResponseHandler.java: 62 com.amazonaws.services.s3.internal.S3XmlResponseHandler/handle
S3XmlResponseHandler.java: 31 com.amazonaws.services.s3.internal.S3XmlResponseHandler/handle
AmazonHttpClient.java: 795 com.amazonaws.http.AmazonHttpClient/handleResponse
AmazonHttpClient.java: 463 com.amazonaws.http.AmazonHttpClient/executeHelper
AmazonHttpClient.java: 257 com.amazonaws.http.AmazonHttpClient/execute
AmazonS3Client.java: 3623 com.amazonaws.services.s3.AmazonS3Client/invoke
AmazonS3Client.java: 3575 com.amazonaws.services.s3.AmazonS3Client/invoke
AmazonS3Client.java: 620 com.amazonaws.services.s3.AmazonS3Client/listObjects
NativeMethodAccessorImpl.java: -2 sun.reflect.NativeMethodAccessorImpl/invoke0
NativeMethodAccessorImpl.java: 62 sun.reflect.NativeMethodAccessorImpl/invoke
DelegatingMethodAccessorImpl.java: 43 sun.reflect.DelegatingMethodAccessorImpl/invoke
Method.java: 483 java.lang.reflect.Method/invoke
nil: -1 sun.reflect.GeneratedMethodAccessor54/invoke
DelegatingMethodAccessorImpl.java: 43 sun.reflect.DelegatingMethodAccessorImpl/invoke
Method.java: 483 java.lang.reflect.Method/invoke
Reflector.java: 93 clojure.lang.Reflector/invokeMatchingMethod
Reflector.java: 28 clojure.lang.Reflector/invokeInstanceMethod
core.clj: 726 amazonica.core/fn-call/fn
core.clj: 777 amazonica.core/intern-function/fn
RestFn.java: 421 clojure.lang.RestFn/invoke
replutils.clj: 74 unpacker.examples.replutils/unparsable-keys
REPL: 1 unpacker.examples.replutils/eval15366

creating security groups and adding rules

Hi,

I want to create security groups and assign rules to them, and also create a VPC and subnets. I do not see any API exposed for that. Can you help me figure out how I should proceed?

Thanks,
Murtaza
