
kappa's Introduction

kappa

https://travis-ci.org/garnaat/kappa.svg?branch=develop

Kappa is a command line tool that (hopefully) makes it easier to deploy, update, and test functions for AWS Lambda.

There are quite a few steps involved in developing a Lambda function. You have to:

  • Write the function itself
  • Create the IAM role required by the Lambda function itself (the executing role) to allow it access to any resources it needs to do its job
  • Add additional permissions to the Lambda function if it is going to be used in a Push model (e.g. S3, SNS) rather than a Pull model.
  • Zip the function and any dependencies and upload it to AWS Lambda
  • Test the function with mock data
  • Retrieve the output of the function from CloudWatch Logs
  • Add an event source to the function
  • View the output of the live function

Kappa tries to help you with some of this. It creates all IAM policies for you, based on the resources you have told it you need to access. It creates the IAM execution role for you and associates the policy with it. Kappa will zip up the function and any dependencies and upload them to AWS Lambda. It also sends test data to the uploaded function, finds the related CloudWatch log stream, and displays the log events. Finally, it will add the event source to turn your function on.

If you need to make changes, kappa will allow you to easily update your Lambda function with new code or update your event sources as needed.

Installation

The quickest way to get kappa is to install the latest stable version via pip:

pip install kappa

Or for the development version:

pip install git+https://github.com/garnaat/kappa.git

Quick Start

To get a feel for how kappa works, let's take a look at a very simple example contained in the samples/simple directory of the kappa distribution. This example is so simple, in fact, that it doesn't really do anything. It's just a small Lambda function (written in Python) that accepts some JSON input, logs that input to CloudWatch logs, and returns a JSON document back.
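
For reference, the handler in simple.py presumably looks something like this (a minimal sketch reconstructed from the invoke output shown later, not necessarily the exact sample code):

import logging

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)


def handler(event, context):
    # log the incoming event so it appears in the CloudWatch log stream
    logger.debug(event)
    # return a hard-coded, JSON-serializable response
    return {'status': 'success'}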

The structure of the directory is:

simple/
├── _src
│   ├── README.md
│   ├── requirements.txt
│   ├── setup.cfg
│   └── simple.py
├── _tests
│   └── test_one.json
└── kappa.yml.sample

Within the directory we see:

  • kappa.yml.sample which is a sample YAML configuration file for the project
  • _src which is a directory containing the source code for the Lambda function
  • _tests which is a directory containing some test data

The first step is to make a copy of the sample configuration file:

cd simple
cp kappa.yml.sample kappa.yml

Now you will need to edit kappa.yml slightly for your use. The file looks like this:

---
name: kappa-simple
environments:
  dev:
    profile: <your profile here>
    region: <your region here>
    environment_variables:
      <key 1>: <value 1>
      <key 2>: <value 2>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
            - "*"
  prod:
    profile: <your profile here>
    region: <your region here>
    policy:
      resources:
        - arn: arn:aws:logs:*:*:*
          actions:
          - "*"
lambda:
  description: A very simple Kappa example
  handler: simple.handler
  runtime: python2.7
  memory_size: 128
  timeout: 3
  log_retention_policy: 7

The name at the top is used both for the Lambda function and for the other things we create that are related to it (e.g. roles, policies, etc.).

The environments section is where we define the different environments into which we wish to deploy this Lambda function. Each environment is identified by a profile (as used in the AWS CLI and other AWS tools) and a region. You can define as many environments as you wish but each invocation of kappa will deal with a single environment. An environment can optionally contain environment variables as key-value pairs. Each environment section also includes a policy section. This is where we tell kappa about AWS resources that our Lambda function needs access to and what kind of access it requires. For example, your Lambda function may need to read from an SNS topic or write to a DynamoDB table and this is where you would provide the ARN (Amazon Resource Name) that identifies those resources. Since this is a very simple example, the only resource listed here is for CloudWatch logs so that our Lambda function is able to write to the CloudWatch log group that will be created for it automatically by AWS Lambda.
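
For example, if your function also needed to write to a DynamoDB table, you might add a second resource entry like this (a sketch; the table ARN and actions are placeholders):

policy:
  resources:
    - arn: arn:aws:logs:*:*:*
      actions:
        - "*"
    - arn: arn:aws:dynamodb:us-west-2:123456789012:table/MyTable
      actions:
        - "dynamodb:PutItem"
        - "dynamodb:Query"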

The lambda section contains the configuration information about our Lambda function. These values are passed to Lambda when we create the function and can be updated at any time afterwards. log_retention_policy is an optional parameter; when supplied, it defines the number of days our Lambda function's CloudWatch logs are kept. By default, these logs are never removed.

To modify this for your own use, you just need to put in the right values for profile and region in one of the environment sections. You can also change the names of the environments to be whatever you like but the name dev is the default value used by kappa so it's kind of handy to avoid typing.

Once you have made the necessary modifications, you should be ready to deploy your Lambda function to the AWS Lambda service. To do so, just do this:

kappa deploy

This assumes you want to deploy the default environment called dev and that you have named your config file kappa.yml. If, instead, you called your environment test and named your config file foo.yml, you would do this:

kappa --env test --config foo.yml deploy

In either case, you should see output that looks something like this:

kappa deploy
# deploying
# ...deploying policy kappa-simple-dev
# ...creating function kappa-simple-dev
# done

So, what kappa has done is it has created a new Managed Policy called kappa-simple-dev that grants access to the CloudWatch Logs service. It has also created an IAM role called kappa-simple-dev that uses that policy. And finally it has zipped up our Python code and created a function in AWS Lambda called kappa-simple-dev.

To test this out, try this:

kappa invoke _tests/test_one.json
# invoking
# START RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f Version: $LATEST
# [DEBUG]   2015-12-08T22:00:15.363Z        0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f    {u'foo': u'bar', u'fie': u'baz'}
# END RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f
# REPORT RequestId: 0f2f9ecf-9df7-11e5-ae87-858fbfb8e85f    Duration: 0.40 ms       Billed Duration: 100 ms         Memory Size: 256 MB     Max Memory Used: 23 MB
#
# Response:
# {"status": "success"}
# done

We have just called our Lambda function, passing in the contents of the file _tests/test_one.json as input to our function. We can see the output of the CloudWatch logs for the call and we can see the logging call in the Python function that prints out the event (the data) passed to the function. And finally, we can see the Response from the function which, for now, is just a hard-coded data structure returned by the function.
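
Judging from the logged event above, _tests/test_one.json presumably contains something like:

{
  "foo": "bar",
  "fie": "baz"
}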

Need to make a change in your function, your list of resources, or your function configuration? Just go ahead and make the change and then re-run the deploy command:

kappa deploy

Kappa will figure out what has changed and make the necessary updates for you.

That gives you a quick overview of kappa. To learn more about it, I recommend you check out the tutorial.

Policies

Hands up who loves writing IAM policies. Yeah, that's what I thought. With Kappa, there is a simplified way of writing policies and granting your Lambda function the permissions it needs.

The simplified version allows you to specify, in your kappa.yml file, the ARN of the resource you want to access, and then a list of the API methods you want to allow. For example:

policy:
  resources:
    - arn: arn:aws:logs:*:*:*
      actions:
        - "*"

To express this using the official IAM policy format, you can instead use a statement:

policy:
  statements:
    - Effect: Allow
      Resource: "*"
      Action:
        - "logs:*"

Both of these do the same thing.
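
For reference, the simplified resources form above presumably expands to a standard IAM policy document along these lines (a sketch of the generated JSON, not kappa's verbatim output):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["*"],
      "Resource": "arn:aws:logs:*:*:*"
    }
  ]
}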

kappa's People

Contributors

astewart-twist, bruno-carrier-lookout, christophermanning, coopernurse, garnaat, gumuz, iserko, josegonzalez, laiso, pas256, rodrigosaito, ryansb, samuel-soubeyran, wvidana


kappa's Issues

question re log group

Hi,
On running the step "Run kappa --config tail to view the function's output in CloudWatch logs", I'm getting an exception:

console excerpt

stimpy ::‹develop*› /home/dacaba/repos/kappa/samples/kinesis
» kappa --debug --config config.yml tail
2015-01-16 10:14:21,763 - kappa.function - DEBUG - tailing function: KinesisSample
2015-01-16 10:14:21,767 - kappa.log - DEBUG - tailing log group: /aws/lambda/KinesisSample
2015-01-16 10:14:21,767 - kappa.log - DEBUG - getting streams for log group: /aws/lambda/KinesisSample
Traceback (most recent call last):
File "/usr/local/bin/kappa", line 5, in
pkg_resources.run_script('kappa==0.1.0', 'kappa')
...

stuff elided

...
raise ClientError(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (ResourceNotFoundException) when calling the DescribeLogStreams operation: The specified log group does not exist.

end console excerpt

For sake of this experiment, I created a kinesis stream, and used its arn to update the config.yml file. Should I also have created the /aws/lambda/KinesisSample Cloudwatch log group and log stream, or is kappa supposed to be handling that?

Thank you,
drc

S3 Event Source fails on update

The S3 event source causes any "update_event_sources" command to fail because it's missing a method.

Traceback (most recent call last):
  File "/home/ryansb/.pyenv/versions/hugo-lambda/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/home/ryansb/.pyenv/versions/hugo-lambda/bin/kappa", line 150, in update_event_sources
    context.update_event_sources()
  File "/home/ryansb/.pyenv/versions/hugo-lambda/lib/python2.7/site-packages/kappa/context.py", line 123, in update_event_sources
    event_source.update(self.function)
AttributeError: 'S3EventSource' object has no attribute 'update'
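
A minimal fix might simply delegate update to add, mirroring what the CloudWatch event source is shown doing in a later issue on this page (a sketch, not the project's actual code):

def update(self, function):
    # S3 bucket notification configuration is replaced wholesale,
    # so re-adding the event source effectively updates it
    self.add(function)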

Kappa with Read-Only IAM Permissions

Hey Mitch!

First of all, thanks for writing Kappa. Just what I needed!

Unfortunately, for the project I'm currently working on, I am just a lowly developer with read only access to IAM - write access is guarded by the high and mighty guys in operations!

They've very kindly created an AWSLambdaExecute policy for me, but it looks like Kappa requires my own account to have IAM write access to call the 'create' command for the first deployment, and update_code won't work until there is something there to update.

Any suggestions for running Kappa without being an administrator?

Even having the documentation include a minimum viable set of roles would be helpful - but even better would be a way to deploy the initial code without having to call the permissions stuff, so either have the initial workflow be:

create_policy_and_roles
deploy_code
update_code

so that I could just skip the create_policy_and_roles step, or, perhaps more simply, just have

update_code

call create_function in the event that there is no currently deployed code.

What do you think?

Thanks!
R

S3 Event Source status function not working as expected

The S3 Event Source status function is not working as expected. This is tangentially related to issues #88 and #89. The code uses the deprecated get-bucket-notification, which only returns the first notification. This may or may not be the expected notification.

Also, the function returns the function ARN as the EventSourceArn; this should be the bucket ARN.

Hard-coded excluded_dirs

function.py hard-codes 8 directories that will always be excluded, but this breaks deployment of any project that happens to use those packages. I discovered this when the "jmespath" dependency in my project mysteriously refused to be included in the deployed package. Some code-spelunking revealed the hard-coded list in function.py.

Any chance of just removing that list entirely? As far as I can tell kappa is not caching any of those dependencies in my src folder anyway, so I don't think it should need to exclude them.
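
One possible approach is to keep the current list only as a default and let the config override it; a minimal sketch (the excluded_dirs config key and the list contents shown are illustrative, not kappa's actual code):

# illustrative; the real hard-coded list has eight entries,
# including jmespath as reported above
DEFAULT_EXCLUDED_DIRS = ['boto3', 'botocore', 'jmespath']


def excluded_dirs(config):
    # let kappa.yml override the exclusions via a hypothetical
    # 'excluded_dirs' key; fall back to the defaults otherwise
    return config.get('excluded_dirs', DEFAULT_EXCLUDED_DIRS)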

use aws-keychain for credentials

I don't usually write out my credentials into a credential file, but store them using aws-keychain. Would it be possible to have an option to make profile optional?

Include python dependencies

Hi All,

Two feature suggestions...

Pull in python dependencies
Is there any appetite to have a command line option that pulls in Python dependencies during the deploy command?

Separate source and build directories
Executing pip install -r requirements.txt -t /path/to/source/ creates a lot of noise in the source directory. It would be great if we could create a separate build directory to create the lambda zip package.

I've got some time to spend on this if people think it could be useful.

Cheers,

Anthony.

IAM caching: better to query against actual state instead?

Would it make more sense to, when a deploy happens, query the role and policy in AWS and compare their contents against the local info? This would help overwrite changes made directly to those policies and roles if, for example, extra permissions were given for testing.
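
A sketch of what that comparison might look like with boto3 (get_policy and get_policy_version are real IAM APIs; how kappa produces the local document is assumed here):

import boto3


def deployed_policy_document(policy_arn):
    # fetch the default (active) version of the deployed managed policy;
    # recent boto3 versions return Document as an already-decoded dict
    iam = boto3.client('iam')
    policy = iam.get_policy(PolicyArn=policy_arn)['Policy']
    version = iam.get_policy_version(
        PolicyArn=policy_arn,
        VersionId=policy['DefaultVersionId'])
    return version['PolicyVersion']['Document']


def policy_changed(policy_arn, local_document):
    # compare live state against what kappa would generate locally,
    # instead of trusting a cached copy
    return deployed_policy_document(policy_arn) != local_document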

Support Python3

Running the unit tests under Python 3 fails due to an MD5 comparison check in context.py.

Exception if resources is missing

If statements is used instead of resources, kappa throws a KeyError.

Stacktrace:

deploying
...deploying policy ses_send_mail_dev
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin//kappa", line 9, in <module>
    load_entry_point('kappa==0.3.1', 'console_scripts', 'kappa')()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 716, in __call__
    return self.main(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 696, in main
    rv = self.invoke(ctx)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 1060, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 889, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/decorators.py", line 64, in new_func
    return ctx.invoke(f, obj, *args[1:], **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/click/core.py", line 534, in invoke
    return callback(*args, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kappa/scripts/cli.py", line 58, in deploy
    ctx.deploy()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kappa/context.py", line 222, in deploy
    self.policy.deploy()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kappa/policy.py", line 138, in deploy
    document = self.document()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kappa/policy.py", line 52, in document
    for resource in self.config['policy']['resources']:
KeyError: 'resources'

Kappa config:

 dev:
    profile: default
    region: eu-west-1
    policy:
        statements:
            - Effect: Allow
              Resource: "*"
              Action:
                - "logs:*"
            - Effect: Allow
              Resource: "*"
              Action:
                - "ses:Send*"

starting troubles

First off, I wanted to mention how awesome this project idea is and thank you for implementing it. I am just starting off with kappa and ran into the following issue when trying to run it for the first time. The README seems to be out of date with the current command-line options, and the command-line help doesn't really say what it expects for the "config" placeholder. I tried passing in the file name "kappa.yml" for the config option and ended up with the following error. Any thoughts/suggestions on what could be going wrong here?

(lambdaStats) osboxes@osboxes:~/projects/lambdaStats2> kappa --debug kappa.yml status
Traceback (most recent call last):
  File "/home/osboxes/envs/lambdaStats/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/home/osboxes/envs/lambdaStats/bin/kappa", line 98, in status
    status = context.status()
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/kappa/context.py", line 173, in status
    status['function'] = self.function.status()
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/kappa/function.py", line 192, in status
    LOG.debug('getting status for function %s', self.name)
  File "/home/osboxes/envs/lambdaStats/local/lib/python2.7/site-packages/kappa/function.py", line 38, in name
    return self._config['name']
KeyError: 'name'

Error Running Current Pip Version

Hello,

When I try and run any kappa command after installing via pip (v0.3.0), I get the following error:

$ kappa config.yml status
Traceback (most recent call last):
  File "/Users/me/venv_aws/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/Users/me/venv_aws/bin/kappa", line 97, in status
    context = Context(ctx.obj['config'], ctx.obj['debug'])
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/context.py", line 39, in __init__
    self, self.config['iam']['policy'])
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/policy.py", line 26, in __init__
    aws = kappa.aws.get_aws(context)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/aws.py", line 37, in get_aws
    __Singleton_AWS = __AWS(context.profile, context.region)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/aws.py", line 22, in __init__
    region_name=region_name, profile_name=profile_name)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/boto3/session.py", line 76, in __init__
    self._setup_loader()
  File "/Users/me/venv_aws/lib/python2.7/site-packages/boto3/session.py", line 96, in _setup_loader
    [self._loader.data_path,
AttributeError: 'Loader' object has no attribute 'data_path'

If I upgrade boto3 to 0.0.18 or higher (from the 0.0.16 installed by setup.py) with pip (I notice 0.0.21 is out now), most features seem to work.

Error: pkg_resources._vendor.packaging.requirements.InvalidRequirement

Hi,
I installed the latest development branch with

pip install git+https://github.com/garnaat/kappa.git

and then when I run kappa, I get an error:

console> kappa
Traceback (most recent call last):
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/bin/kappa", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2912, in <module>
    @_call_aside
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2898, in _call_aside
    f(*args, **kwargs)
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2925, in _initialize_master_working_set
    working_set = WorkingSet._build_master()
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 642, in _build_master
    ws.require(__requires__)
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 943, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 838, in resolve
    new_requirements = dist.requires(req.extras)[::-1]
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2462, in requires
    dm = self._dep_map
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2686, in _dep_map
    self.__dep_map = self._compute_dependencies()
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/__init__.py", line 2696, in _compute_dependencies
    current_req = packaging.requirements.Requirement(req)
  File "/Users/rabraham/Documents/dev/python/virtual_envs/tracker_env/lib/python2.7/site-packages/pkg_resources/_vendor/packaging/requirements.py", line 94, in __init__
    requirement_string[e.loc:e.loc + 8]))
pkg_resources._vendor.packaging.requirements.InvalidRequirement: Invalid requirement, parse error at "'and plat'"

When I install kappa with pip install kappa it works, but I am trying to get the latest version, which reflects the GitHub docs.

SNS Topic subscriptions not working(?)

I've configured some SNS topics as event sources for my Lambda functions, but it doesn't work correctly:

  1. The subscription is created correctly.
  2. The Lambda function is not triggered when I publish a message to the subscribed topic.
  3. The subscription does not appear in the "Triggers" tab in the AWS Console.

To make it work as expected, I need to run the command in item 4 of the AWS Lambda documentation below:

http://docs.aws.amazon.com/lambda/latest/dg/with-sns-create-x-account-permissions.html

This step adds the SNS topic as a trigger (event source) of the Lambda function.

Cannot create Event Sources

Hi,

I love kappa and just started using it. It's very well written.

But, for some reason I cannot get "Event Sources" to connect. Here is the error:

$ kappa ./config.yml add_event_sources 
adding event sources...
        Unable to add S3 event source
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/kappa/event_source.py", line 142, in add
    NotificationConfiguration=notification_spec)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/client.py", line 258, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/botocore/client.py", line 312, in _make_api_call
    raise ClientError(parsed_response, operation_name)
ClientError: An error occurred (InvalidArgument) when calling the PutBucketNotification operation: Unable to validate the following destination configurations
...done

Here is my config file:


---
# Change the profile and region to suit your application
region: us-east-1
iam:
  # In this case, we are using an existing managed policy so we just
  # need to put the name of that policy here.
  policy:
    name: AWSLambdaExecute
  # The name of the IAM role used for executing the Lambda function.
  # The policy listed above will be attached to this role once it is created.
  role:
    name: tennis-lambda-s3-execution-role
lambda:
  name: cronitor_handler
  zipfile_name: cronitor_handler.zip
  description: Monitor s3 updates for Cronitor
  path:  cronitor_handler.py
  handler: lambda_function.cronitor_handler
  runtime: python2.7
  memory_size: 128
  timeout: 10
  mode: event
  test_data: event.json
  event_sources:
    - arn: arn:aws:s3:::test-ngd-db-backups
      events:
        - s3:ObjectCreated:*

Any ideas what I may be doing wrong?

Thanks
-T

Support S3 code (avoid creating zip)

First off, thanks for creating this tool. I like the simplicity.

On my project I'm using Java, so I already have build tooling in place via mvn package. This produces a single JAR that contains several lambda functions.

Consequently I'd like to be able to do something like this:

#!/bin/bash
set -e

# build JAR and upload to S3
mvn clean package
aws s3 cp target/myproj-with-deps.jar s3://mybucket/myproj.jar

# use kappa to register lambdas
kappa --config lambda/func1.yml deploy
kappa --config lambda/func2.yml deploy

I have a quick and dirty version of the above working on my fork. I augmented the lambda section of the config to support an optional code block. If this block is present, the S3 keys are set in the create_function and update_function_code calls. For example:

---
name: hello
lambda:
  # new optional section - if present, no ZIP is created and S3 code is used
  code:
    bucket: mybucket
    key: myproj.jar    
  description: Hello Cats
  handler: com.bitmechanic.foo.LambdaHello::handleRequest
  runtime: java8
  memory_size: 128
  timeout: 10

One consequence of my current implementation is that the config file is not generated in the S3 case, as this is done as part of the ZIP bundling. This is what I'd expect, as I'm not relying on kappa to bundle my code for me, so I wouldn't expect configuration manifests to be injected into my JAR.

Would you entertain a PR with this work?

I'm open to feedback on the YAML changes. I also implemented this by factoring out all the ZIP stuff into a separate class in function.py, such that there are S3Code and ZipCode classes. Function.update and Function.create delegate to these classes.
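
For a rough shape of that refactor (a sketch; the class split is the commenter's proposal, and these method names are hypothetical, not kappa's actual code):

class ZipCode(object):
    def __init__(self, function):
        self.function = function

    def code_spec(self):
        # build the ZIP locally and pass its bytes directly to Lambda
        return {'ZipFile': self.function.zip_bytes()}  # hypothetical helper


class S3Code(object):
    def __init__(self, bucket, key):
        self.bucket = bucket
        self.key = key

    def code_spec(self):
        # point Lambda at an artifact that already lives in S3
        return {'S3Bucket': self.bucket, 'S3Key': self.key}

Function.create and Function.update would then pass code_spec() through to create_function and update_function_code.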

Before I submit a PR I need to clean things up, write docs, etc. But before I got too far I wanted to open up the conversation here and get preliminary feedback on the enhancement.

Thanks!

Updating cloudwatch event source causes trigger to show multiple times in AWS console

When updating a CloudWatch event source, the add function is called:

def update(self, function):
    self.add(function)

The add function does 3 things:

  • It calls events put-rule, which seems fine: it first creates the rule and then updates the same rule on later calls.
  • It calls events put-targets, which seems fine: it first creates the target and then updates the same target on later calls.
  • It calls lambda add-permission. This seems to be the issue: each call uses a unique "statement-id", which results in multiple statements being added. This can be seen by calling get-policy:

aws lambda get-policy --function-name <function_name>

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:events:us-east-1:<account>:rule/<rule_name>"
        }
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:<account>:function:<function_name>",
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Sid": "df9ee7d2-2fd9-4058-9637-97b4c35a7d2a"
    },
    {
      "Condition": {
        "ArnLike": {
          "AWS:SourceArn": "arn:aws:events:us-east-1:<account>:rule/rule_name"
        }
      },
      "Action": "lambda:InvokeFunction",
      "Resource": "arn:aws:lambda:us-east-1:<account>:function:<function_name>",
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Sid": "e3867f15-1b94-4d4c-a3f1-205865b9a5a7"
    }
  ],
  "Id": "<id>"
}

This results in the Rule showing up multiple times under "Triggers" for the lambda function in the AWS console.

Perhaps when calling update the old statement id should be used? Or perhaps "add_permission" should not be called on update? Or is this an issue that should be raised with AWS?
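
One way to make the call idempotent (a sketch, not kappa's actual code) is to derive the StatementId from the rule instead of generating a fresh UUID, and to tolerate the conflict on re-runs:

import boto3
import botocore


def add_invoke_permission(function_name, rule_name, rule_arn):
    client = boto3.client('lambda')
    try:
        client.add_permission(
            FunctionName=function_name,
            # deterministic id: the same rule always maps to the
            # same statement, so updates cannot pile up duplicates
            StatementId='%s-invoke' % rule_name,
            Action='lambda:InvokeFunction',
            Principal='events.amazonaws.com',
            SourceArn=rule_arn)
    except botocore.exceptions.ClientError as e:
        # on update the statement already exists; that's fine
        if e.response['Error']['Code'] != 'ResourceConflictException':
            raise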

S3 Event Source delete function not working as expected

The S3 Event Source delete function is not working as expected. This is tangentially related to issue #88. The delete function expects that only one notification has been set up. It checks the first entry in "CloudFunctionConfiguration" to see if the function ARN matches. If it does, it removes the whole "CloudFunctionConfiguration" block.

This is incorrect for two reasons:

  • If a different notification has been set up and appears before the expected notification, the expected notification will not be deleted.
  • If a different notification has been set up and appears after the expected notification, the expected notification and the other notification will both be deleted.

EDIT:
It's not that the code checks the first entry in "CloudFunctionConfiguration"; it's that it uses the deprecated get-bucket-notification, which only returns the first notification. The result is still the same.
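
A sketch of a fix using the non-deprecated API (get_bucket_notification_configuration and put_bucket_notification_configuration are real boto3 S3 methods; the surrounding logic is illustrative):

import boto3


def remove_function_notification(bucket, function_arn):
    s3 = boto3.client('s3')
    config = s3.get_bucket_notification_configuration(Bucket=bucket)
    lambda_configs = config.get('LambdaFunctionConfigurations', [])
    # drop only the entries that target our function; keep the rest
    kept = [c for c in lambda_configs
            if c['LambdaFunctionArn'] != function_arn]
    # note: a complete fix would also carry over any Topic/Queue
    # configurations, since this call replaces the whole configuration
    s3.put_bucket_notification_configuration(
        Bucket=bucket,
        NotificationConfiguration={'LambdaFunctionConfigurations': kept})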

Make new release to match readme

Hi! When I run kappa deploy, it doesn't actually run as the latest release is from June, while the package has since been updated significantly (50 or so commits). Would be great to have a released version with all the great new stuff out :)

Python support

As introduced at re:Invent 2015, AWS Lambda now supports Python as a runtime language.

Existing role configuration

Does kappa support using existing IAM roles? (I'm assuming it does based on the kinesis sample)
I see the kinesis sample with the "iam" block but it doesn't seem to include the "environments" block.

I just get an error saying "Invalid environment dev specified"

Cache for function data could live in S3 metadata

If S3 is being used for the code, the cache contents related to the function code could live in the metadata on the S3 object. This would reduce the amount of state maintained locally outside of version control.
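
A sketch of what reading that cache might look like (head_object is a real S3 API; the metadata key is hypothetical):

import boto3


def read_code_cache(bucket, key):
    s3 = boto3.client('s3')
    head = s3.head_object(Bucket=bucket, Key=key)
    # user-defined metadata comes back lower-cased, without the
    # x-amz-meta- prefix; 'kappa-code-sha' is a hypothetical key
    return head['Metadata'].get('kappa-code-sha')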

Minor Documentation Error ('add-event-sources')

Small documentation error: in README.md, 'add_event_sources' is referred to as both 'add-event-sources' and 'add_event_source', neither of which is correct. A simple find and replace should fix it!

Tx
R

Use IAM Role Granted to EC2 Instance Kappa is Run From

Hello -

Is there any way to get kappa to pick up the role permissions that a particular EC2 instance has, instead of relying on a profile name?

The profile name works well on my local machine, but when I have to build libraries in EC2 and test before deployment, I'm having a hard time leveraging kappa for automatic deployment, as it's looking for AWS config parameters to match those in kappa.yml.

I can use boto within Python just fine and it picks up the permissions from the host system, but somehow I think kappa is trying to force a specific profile name when setting up boto.
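
For reference, boto3 falls back to its normal credential chain (which includes the EC2 instance metadata provider) when no profile is given, so a fix might be as simple as only passing profile_name when it is actually set (a sketch, not kappa's actual code):

import boto3


def make_session(profile_name=None, region_name=None):
    if profile_name:
        return boto3.session.Session(profile_name=profile_name,
                                     region_name=region_name)
    # no explicit profile: boto3 walks env vars, config files, and
    # finally the EC2 instance metadata (IAM role) credentials
    return boto3.session.Session(region_name=region_name)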

KeyError 'EventSourceArn' when running status

Hi, I am very new to Kappa, and was using it to check the status of an existing lambda task we are using, and I get a key error for 'EventSourceArn':

  File "/Users/ses/w/lemonade-tasks/venv/bin/kappa", line 125, in status
    event_source['EventSourceArn'], event_source['State'])
KeyError: 'EventSourceArn'

(I suppressed the rest of the stack trace, it was just a few frames of click internals).

Here is the event_source dict (the only event source I have for this lambda function in fact):

{u'Endpoint': 'arn:aws:lambda:eu-west-1:575.....:function:sendEmailVerificationEmail',
 u'Owner': '575.....',
 u'Protocol': 'lambda',
 u'SubscriptionArn': 'arn:aws:sns:eu-west-1:575.....:accounts-email_verify_started:89abcdef-1234-1234-1234-456789abcdef',
 u'TopicArn': 'arn:aws:sns:eu-west-1:575.....:accounts-email_verify_started'}

I suppressed our actual IDs and stuff.

I wouldn't mind fixing this issue myself, but as I am so new to lambda I'm not sure why it is happening or which of these fields should replace the EventSourceArn in this case, or if this is a case of me having an incorrect boto version or something (boto3==1.1.0, kappa==0.3.1 FYI).
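
Given that the dict above is an SNS subscription rather than a Kinesis-style event source mapping, a defensive fix might fall back to the keys that are present (a sketch, not the project's actual fix):

# SNS-backed sources expose TopicArn instead of EventSourceArn
arn = event_source.get('EventSourceArn',
                       event_source.get('TopicArn', 'unknown'))
state = event_source.get('State', 'unknown')
print('%s  %s' % (arn, state))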

Deleting a function should delete its log group

If you don't delete the log group associated with a function and then you recreate the function, you can start getting InvalidAccessKey errors when you run your Lambda function. Deleting the log group will prevent this error.

Following symlinks

I'm sharing some code between several lambda functions. Instead of creating a module and installing it in each _src directory I am finding it easier to symlink to the common code like so:

func1/_src/common -> ../../common
func2/_src/common -> ../../common

In order to make this work for the zipfile creation I need to change os.walk(lambda_dir) to os.walk(lambda_dir, followlinks=True) in function.py.

Would you be open to a PR to make this configurable or is there a better way to share code between lambda functions?
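
A configurable version of that change might look like this (a sketch; the follow_symlinks option is hypothetical and would come from kappa.yml):

import os


def iter_source_files(lambda_dir, follow_symlinks=False):
    # defaulting to False preserves today's behaviour; setting it to
    # True makes os.walk descend into symlinked directories
    for root, dirs, files in os.walk(lambda_dir,
                                     followlinks=follow_symlinks):
        for name in files:
            yield os.path.join(root, name)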

KeyError: 'lastEventTimestamp'

Happened while tailing..

tailing logs...
Traceback (most recent call last):
  File "/Projects/lambdas/python/env/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/Projects/lambdas/python/env/bin/kappa", line 89, in tail
    for e in context.tail()[-10:]:
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/kappa/context.py", line 150, in tail
    return self.function.tail()
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/kappa/function.py", line 97, in tail
    return self.log.tail()
  File "/Projects/lambdas/python/env/lib/python2.7/site-packages/kappa/log.py", line 57, in tail
    elif stream['lastEventTimestamp'] > latest_stream['lastEventTimestamp']:
KeyError: 'lastEventTimestamp'

Normally works fine, not sure why it happened those few times. Cleared up shortly with no other changes. Just thought you should be aware.
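
For context, a log stream that has not yet received any events lacks the lastEventTimestamp key, so a guard like this would avoid the crash (a sketch):

# a stream with no events yet has no 'lastEventTimestamp';
# treat it as older than everything else
if stream.get('lastEventTimestamp', 0) > latest_stream.get('lastEventTimestamp', 0):
    latest_stream = stream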

Error if you are trying to use an existing role

>> kappa config.yml create
creating...
    Error creating Role
Traceback (most recent call last):
  File "/opt/boxen/homebrew/lib/python2.7/site-packages/kappa/role.py", line 80, in create
    AssumeRolePolicyDocument=AssumeRolePolicyDocument)
  File "/opt/boxen/homebrew/lib/python2.7/site-packages/botocore/client.py", line 197, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/opt/boxen/homebrew/lib/python2.7/site-packages/botocore/client.py", line 252, in _make_api_call
    raise ClientError(parsed_response, operation_name)
ClientError: An error occurred (EntityAlreadyExists) when calling the CreateRole operation: Role with name lambda_exec_role already exists.
...done
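
A tolerant create (a sketch, not kappa's actual code) could catch the conflict and fall back to the existing role:

import boto3
import botocore


def ensure_role(role_name, assume_role_policy_document):
    iam = boto3.client('iam')
    try:
        return iam.create_role(
            RoleName=role_name,
            AssumeRolePolicyDocument=assume_role_policy_document)['Role']
    except botocore.exceptions.ClientError as e:
        if e.response['Error']['Code'] != 'EntityAlreadyExists':
            raise
        # the role was created out of band; reuse it
        return iam.get_role(RoleName=role_name)['Role']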

Please release Kappa 0.6.1

Would be nice if we could get a small version bump after the recent S3-related merge, I'd like to do a new Zappa release today or tomorrow that depends on that commit.

unable to import module 'index' error....

I'm getting this error after uploading and invoking the function from a mobile application:

2015-06-29T15:45:37: Unable to import module 'index': Error
    at Function.Module._resolveFilename (module.js:338:15)
    at Function.Module._load (module.js:280:25)
    at Module.require (module.js:364:17)
    at require (module.js:380:17)
    at Object.<anonymous> (/var/task/index.js:2:1)
    at Module._compile (module.js:456:26)
    at Object.Module._extensions..js (module.js:474:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:364:17)

My directory structure is:

  • index.js
  • node-modules/
  • config.yml
  • event.json
  • package.json

Make Kappa More Usable as a Library

I'm now using Kappa as a dependency in Zappa, but this requires some fairly ugly hacking: https://github.com/Miserlou/Zappa/blob/master/zappa/util.py#L97 This may be too much to ask for, but there are a few places where it'd be really great if Kappa acted a bit more like a library and less like a client.

Specifically, the ability to pass in our own Id and LambdaArns rather than having them generated, and returning responses in addition to logging where possible. That - and comments in the code, please!

Anyway, this might be a stretch but I figure I'd ask anyway. Thanks for all the great work!

Log Group Has Not Been Created

Not sure what to make of this one.

$ kappa MyEvent.yaml tail
tailing logs...
    log group /aws/lambda/MyEvent has not been created yet
...done

Shouldn't this have been done during the create step?

I have other Lambda functions for this bucket that work fine. I can also execute this event in the console just fine and I see the log there.

What's going on here?

Problem adding s3 event source

When trying to add_event_sources that are S3 (having already successfully run kappa config.yml create and kappa config.yml invoke; the following is reproduced based on the s3 sample in the repo, albeit in eu-west-1), I get the following error:

$ kappa config.yml add_event_sources
adding event sources...
Traceback (most recent call last):
  File "/Users/me/venv_aws/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/Users/me/venv_aws/bin/kappa", line 142, in add_event_sources
    context.add_event_sources()
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/context.py", line 119, in add_event_sources
    event_source.add(self.function)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/kappa/event_source.py", line 139, in add
    'InvocationRole': self._context.invoke_role_arn}}
AttributeError: 'Context' object has no attribute 'invoke_role_arn'

I've found the invoke_role_arn function was removed in commit 2bbf5fa.

Additionally if I manually add the event source, I get the following error when trying to get the status (this is having upgraded boto3 to 0.0.18 to make the package work at all):

$ kappa config.yml status
Policy
    TestLambdaPolicy201506151108 (arn:aws:iam::505016xxxxxx:policy/kappa/TestLambdaPolicy201506151108)
Role
    TestLambdaRole201506151108 (arn:aws:iam::505016xxxxxx:role/kappa/TestLambdaRole201506151108)
Function
    S3Sample201506151108 (arn:aws:lambda:eu-west-1:505016xxxxxx:function:S3Sample201506151108)
Event Sources
Traceback (most recent call last):
  File "/Users/me/venv_aws/bin/kappa", line 155, in <module>
    cli(obj={})
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 664, in __call__
    return self.main(*args, **kwargs)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 644, in main
    rv = self.invoke(ctx)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 991, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 837, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/me/venv_aws/lib/python2.7/site-packages/click/core.py", line 464, in invoke
    return callback(*args, **kwargs)
  File "/Users/me/venv_aws/bin/kappa", line 124, in status
    event_source['EventSourceArn'], event_source['State'])
KeyError: 'EventSourceArn'

S3 Event Source should add permission for S3 bucket to invoke function

Currently an S3 event source does not work out of the box as you would expect:

    event_sources:
      - arn: arn:aws:s3:::<bucket>
        events:
          - s3:ObjectCreated:*
        key_filters:
          - type: prefix
            value: <prefix>/
          - type: suffix
            value: <suffix>

The issue is that the S3 bucket does not have permissions to invoke the function. The workaround is to add the permission in the "permissions" section under "lambda":

lambda:
  description: <description>
  handler: <function>.lambda_handler
  runtime: python2.7
  memory_size: 128
  timeout: 300
  permissions:
    - action: lambda:invokeFunction
      principal: s3.amazonaws.com
      source_arn: arn:aws:s3:::<bucket>

This permission should automatically be added in the S3 event source add function, similar to how the CloudWatch event source add function handles adding the relevant permission:

e.g. something like

response = self._lambda.call(
    'add_permission',
    FunctionName=function.name,
    StatementId=str(uuid.uuid4()),
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn=self._get_bucket_name())

Feature suggestions

I've implemented a few changes in my fork. They are, I think, mostly orthogonal to the python-refactor branch changes. Let me know if you'd be interested in a pull request for any of them.

  1. Make the config file an option, defaulting to looking for kappa.yml or kappa.yaml in any parent folder.
  2. Allow multiple policies. Automatically add an inline policy for CloudWatch logs to the role. Allow policy documents to be specified inline.
  3. Default to using 'src/' if lambda.path is not provided in the config file.
  4. Use the name of the directory containing the kappa.ya?ml file as the default name for the function and role if they are not provided.
  5. Command-line input for invoke.

Error turning sample into Python. Python AWS Lambda sample?

I'm trying to adapt the s3 sample based on node.js to Python. I tried modifying the lambda section of the config.yml:

lambda:
  name: S3PythonSample
  zipfile_name: S3PythonSample.zip
  description: Testing S3 Lambda Python handler
  path: examplefolder/
  handler: hello_lambda.lambda_handler
  runtime: python

However, I get this error after running kappa ./config.yml create:

$ kappa ./config.yml create 
creating...
/Library/Python/2.7/site-packages/botocore/vendored/requests/packages/urllib3/connection.py:251: SecurityWarning: Certificate has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
/Library/Python/2.7/site-packages/botocore/vendored/requests/packages/urllib3/connection.py:251: SecurityWarning: Certificate has no `subjectAltName`, falling back to check for a `commonName` for now. This feature is being removed by major browsers and deprecated by RFC 2818. (See https://github.com/shazow/urllib3/issues/497 for details.)
  SecurityWarning
        Unable to upload zip file
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/kappa/function.py", line 162, in create
    MemorySize=self.memory_size)
  File "/Library/Python/2.7/site-packages/botocore/client.py", line 258, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Python/2.7/site-packages/botocore/client.py", line 312, in _make_api_call
    raise ClientError(parsed_response, operation_name)
ClientError: An error occurred (ValidationException) when calling the CreateFunction operation: 1 validation error detected: Value 'python' at 'runtime' failed to satisfy constraint: Member must satisfy enum value set: [nodejs]
        Unable to add permission
Traceback (most recent call last):
  File "/Library/Python/2.7/site-packages/kappa/function.py", line 141, in add_permissions
    response = self._lambda_svc.add_permission(**kwargs)
  File "/Library/Python/2.7/site-packages/botocore/client.py", line 258, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/Library/Python/2.7/site-packages/botocore/client.py", line 312, in _make_api_call
    raise ClientError(parsed_response, operation_name)

Zip is created uncompressed

I'm happy to write a fix, but I'm not sure which direction you'd like to go

  1. compress the zipfile by default
  2. compress the zipfile if a flag is passed (e.g. --compress)
  3. compress by default, allow a --no-compress flag
  4. automatically decide based on the size of the zip

Thoughts?
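
For context, Python's zipfile module defaults to ZIP_STORED (no compression), so option 1 is essentially a one-argument change (a sketch):

import zipfile

# ZIP_DEFLATED enables standard deflate compression; the ZIP_STORED
# default stores members uncompressed, which is what this issue describes
zf = zipfile.ZipFile('lambda.zip', 'w', compression=zipfile.ZIP_DEFLATED)
zf.write('simple.py')
zf.close()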

Support for CloudWatch Events as event source

I am using this as an event source; it is a scheduled event on CloudWatch:

event_sources:
  - arn: arn:aws:events:us-east-1:XXXX:rule/some-rule

And getting this error:

ValueError: Unknown event source: arn:aws:events:us-east-1:XXXX:rule/some-rule

I checked the code, and CloudWatch Events does not seem to be implemented as a type of event source.

I can help with a PR if you are able to give me the directions on what I need to do to implement it.
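
From the error, the dispatch presumably keys off the ARN's service prefix, so support might start with something like this (a sketch; S3EventSource appears earlier on this page, while SNSEventSource and CloudWatchEventSource are assumed class names):

def create_event_source(context, config):
    arn = config['arn']
    if arn.startswith('arn:aws:s3:'):
        return S3EventSource(context, config)
    if arn.startswith('arn:aws:sns:'):
        return SNSEventSource(context, config)
    # new: scheduled rules live in the 'events' service
    if arn.startswith('arn:aws:events:'):
        return CloudWatchEventSource(context, config)
    raise ValueError('Unknown event source: %s' % arn)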
