
Serverless Python

Home Page: https://zappa.ws/zappa

License: MIT License



Zappa - Serverless Python


About


In a hurry? Click to see (now slightly outdated) slides from Serverless SF!

Zappa makes it super easy to build and deploy server-less, event-driven Python applications (including, but not limited to, WSGI web apps) on AWS Lambda + API Gateway. Think of it as "serverless" web hosting for your Python apps. That means infinite scaling, zero downtime, zero maintenance - and at a fraction of the cost of your current deployments!

If you've got a Python web app (including Django and Flask apps), it's as easy as:

$ pip install zappa
$ zappa init
$ zappa deploy

and now you're server-less! Wow!

What do you mean "serverless"?

Okay, so there still is a server - but it only has a 40 millisecond life cycle! Serverless in this case means "without any permanent infrastructure."

With a traditional HTTP server, the server is online 24/7, processing requests one by one as they come in. If the queue of incoming requests grows too large, some requests will time out. With Zappa, each request is given its own virtual HTTP "server" by Amazon API Gateway. AWS handles the horizontal scaling automatically, so no requests ever time out. Each request then calls your application from a memory cache in AWS Lambda and returns the response via Python's WSGI interface. After your app returns, the "server" dies.

Better still, with Zappa you only pay for the milliseconds of server time that you use, so it's many orders of magnitude cheaper than VPS/PaaS hosts like Linode or Heroku - and in most cases, it's completely free. Plus, there's no need to worry about load balancing or keeping servers online ever again.

It's great for deploying serverless microservices with frameworks like Flask and Bottle, and for hosting larger web apps and CMSes with Django. Or, you can use any WSGI-compatible app you like! You probably don't need to change your existing applications to use it, and you're not locked into using it.

Zappa also lets you build hybrid event-driven applications that can scale to trillions of events a year with no additional effort on your part! You also get free SSL certificates, global app deployment, API access management, automatic security policy generation, precompiled C-extensions, auto keep-warms, oversized Lambda packages, and many other exclusive features!

And finally, Zappa is super easy to use. You can deploy your application with a single command out of the box!

Awesome!


Installation and Configuration

Before you begin, make sure you are running Python 3.8/3.9/3.10/3.11/3.12, that you have a valid AWS account, and that your AWS credentials file is properly installed.

Zappa can easily be installed through pip, like so:

$ pip install zappa

Please note that Zappa must be installed into your project's virtual environment. The virtual environment name should not be the same as the Zappa project name, as this may cause errors.

(If you use pyenv and love to manage virtualenvs with pyenv-virtualenv, you just have to call pyenv local [your_venv_name] and it's ready. Conda users should comment here.)

Next, you'll need to define your local and server-side settings.

Running the Initial Setup / Settings

Zappa can automatically set up your deployment settings for you with the init command:

$ zappa init

This will automatically detect your application type (Flask/Django - Pyramid users see here) and help you define your deployment configuration settings. Once you finish initialization, you'll have a file named zappa_settings.json in your project directory defining your basic deployment settings. It will probably look something like this for most WSGI apps:

{
    // The name of your stage
    "dev": {
        // The name of your S3 bucket
        "s3_bucket": "lambda",

        // The modular python path to your WSGI application function.
        // In Flask and Bottle, this is your 'app' object.
        // Flask (your_module.py):
        // app = Flask()
        // Bottle (your_module.py):
        // app = bottle.default_app()
        "app_function": "your_module.app"
    }
}

or for Django:

{
    "dev": { // The name of your stage
       "s3_bucket": "lambda", // The name of your S3 bucket
       "django_settings": "your_project.settings" // The python path to your Django settings.
    }
}
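For reference, app_function can point at any WSGI callable - Flask and Bottle apps expose one through their app objects. A minimal, dependency-free sketch of a your_module.py that would satisfy the "app_function": "your_module.app" setting above:

```python
# your_module.py - a minimal WSGI application. With
# "app_function": "your_module.app" in zappa_settings.json,
# this callable handles every incoming request.
def app(environ, start_response):
    body = b"Hello from Zappa!"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```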

Psst: If you're deploying a Django application with Zappa for the first time, you might want to read Edgar Roman's Django Zappa Guide.

You can define as many stages as you like - we recommend having dev, staging, and production.

Now, you're ready to deploy!

Basic Usage

Initial Deployments

Once your settings are configured, you can package and deploy your application to a stage called "production" with a single command:

$ zappa deploy production
Deploying..
Your application is now live at: https://7k6anj0k99.execute-api.us-east-1.amazonaws.com/production

And now your app is live! How cool is that?!

To explain what's going on: when you call deploy, Zappa will automatically package up your application and local virtual environment into a Lambda-compatible archive, replace any dependencies with Lambda-compatible wheels, set up the function handler and necessary WSGI middleware, upload the archive to S3, create and manage the necessary AWS IAM policies and roles, register it as a new Lambda function, create a new API Gateway resource, create WSGI-compatible routes for it, link it to the new Lambda function, and finally delete the archive from your S3 bucket. Handy!

Be aware that the default IAM role and policy created for executing Lambda applies a liberal set of permissions. These are most likely not appropriate for production deployment of important applications. See the section Custom AWS IAM Roles and Policies for Execution for more detail.

Updates

If your application has already been deployed and you only need to upload new Python code, but not touch the underlying routes, you can simply:

$ zappa update production
Updating..
Your application is now live at: https://7k6anj0k99.execute-api.us-east-1.amazonaws.com/production

This creates a new archive, uploads it to S3 and updates the Lambda function to use the new code, but doesn't touch the API Gateway routes.

Docker Workflows

In version 0.53.0, support was added to deploy & update Lambda functions using Docker.

You can specify an ECR image using the --docker-image-uri option to the zappa command on deploy and update. Zappa expects that the image is built and pushed to an Amazon ECR repository.

Deploy Example:

$ zappa deploy --docker-image-uri {AWS ACCOUNT ID}.dkr.ecr.{REGION}.amazonaws.com/{REPOSITORY NAME}:latest

Update Example:

$ zappa update --docker-image-uri {AWS ACCOUNT ID}.dkr.ecr.{REGION}.amazonaws.com/{REPOSITORY NAME}:latest

Refer to the blog post for more details about how to leverage this functionality, and when you may want to.

If you are using a custom Docker image for your Lambda runtime (e.g. if you want to use a newer version of Python that is not yet supported by Lambda out of the box) and you would like to bypass the Python version check, you can set an environment variable to do so:

$ export ZAPPA_RUNNING_IN_DOCKER=True

You can also add this to your Dockerfile like this:

ENV ZAPPA_RUNNING_IN_DOCKER=True

Rollback

You can also roll back the deployed code to a previous version by supplying the number of revisions to return to. For instance, to roll back to the version deployed 3 versions ago:

$ zappa rollback production -n 3

Scheduling

Zappa can be used to easily schedule functions to occur on regular intervals. This provides a much nicer, maintenance-free alternative to Celery! These functions will be packaged and deployed along with your app_function and called from the handler automatically. Just list your functions and the expression to schedule them using cron or rate syntax in your zappa_settings.json file:

{
    "production": {
       ...
       "events": [{
           "function": "your_module.your_function", // The function to execute
           "expression": "rate(1 minute)" // When to execute it (in cron or rate format)
       }],
       ...
    }
}

And then:

$ zappa schedule production

And now your function will execute every minute!
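Scheduled functions are plain Python callables that accept the standard Lambda event and context arguments. A minimal sketch, using the your_module.your_function name assumed in the settings above:

```python
# your_module.py - a scheduled task. Zappa's handler calls it with the
# standard Lambda (event, context) arguments on every trigger.
def your_function(event, context):
    # 'event' carries the CloudWatch trigger payload (plus any "kwargs"
    # configured in zappa_settings); the return value is logged, not served.
    print("Scheduled run triggered:", event.get("detail-type", "schedule"))
    return {"ok": True}
```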

If you want to cancel these, you can simply use the unschedule command:

$ zappa unschedule production

And now your scheduled event rules are deleted.

See the example for more details.

Advanced Scheduling

Multiple Expressions

Sometimes a function needs multiple expressions to describe its schedule. To set multiple expressions, simply list your functions, and the list of expressions to schedule them using cron or rate syntax in your zappa_settings.json file:

{
    "production": {
       ...
       "events": [{
           "function": "your_module.your_function", // The function to execute
           "expressions": ["cron(0 20-23 ? * SUN-THU *)", "cron(0 0-8 ? * MON-FRI *)"] // When to execute it (in cron or rate format)
       }],
       ...
    }
}

This can be used to deal with issues arising from the UTC timezone crossing midnight during business hours in your local timezone.

Note that overlapping expressions will not raise a warning; check for overlaps yourself to prevent duplicate triggering of functions.

Disabled Event

Sometimes an event should be scheduled, yet disabled. For example, perhaps an event should only run in your production environment, but not sandbox. You may still want to deploy it to sandbox to ensure there is no issue with your expression(s) before deploying to production.

In this case, you can disable it from running by setting enabled to false in the event definition:

{
    "sandbox": {
       ...
       "events": [{
           "function": "your_module.your_function", // The function to execute
           "expression": "rate(1 minute)", // When to execute it (in cron or rate format)
           "enabled": false
       }],
       ...
    }
}

Undeploy

If you need to remove the API Gateway and Lambda function that you have previously published, you can simply:

$ zappa undeploy production

You will be asked for confirmation before it executes.

If you enabled CloudWatch Logs for your API Gateway service and you don't want to keep those logs, you can specify the --remove-logs argument to purge the logs for your API Gateway and your Lambda function:

$ zappa undeploy production --remove-logs

Package

If you want to build your application package without actually uploading and registering it as a Lambda function, you can use the package command:

$ zappa package production

If you have a zip callback in your callbacks setting, this will also be invoked.

{
    "production": { // The name of your stage
        "callbacks": {
            "zip": "my_app.zip_callback"// After creating the package
        }
    }
}

You can also specify the output filename of the package with -o:

$ zappa package production -o my_awesome_package.zip

How Zappa Makes Packages

Zappa will automatically package your active virtual environment into an archive which runs smoothly on AWS Lambda.

During this process, it will replace any local dependencies with AWS Lambda compatible versions. Dependencies are included in this order:

  • Lambda-compatible manylinux wheels from a local cache
  • Lambda-compatible manylinux wheels from PyPI
  • Packages from the active virtual environment
  • Packages from the local project directory

It also skips certain unnecessary files, and ignores any .py files if .pyc files are available.

In addition, Zappa will also automatically set the correct execution permissions, configure package settings, and create a unique, auditable package manifest file.

To further reduce the final package file size, you can:

Template

Similarly to package, if you only want the API Gateway CloudFormation template, use the template command:

$ zappa template production -l your-lambda-arn -r your-role-arn

Note that you must supply your own Lambda ARN and Role ARN in this case, as they may not have been created for you.

You can get the JSON output directly with --json, and specify the output file with --output.

Status

If you need to see the status of your deployment and event schedules, simply use the status command.

$ zappa status production

Tailing Logs

You can watch the logs of a deployment by calling the tail management command.

$ zappa tail production

By default, this will show all log items. In addition to HTTP and other events, anything printed to stdout or stderr will be shown in the logs.

You can use the argument --http to filter for HTTP requests, which will be in the Apache Common Log Format.

$ zappa tail production --http
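If you want to post-process the tailed output, a Common Log Format line can be parsed with a short regex. This is a hypothetical helper, not part of Zappa:

```python
import re

# Hypothetical helper: parse one Apache Common Log Format line of the
# kind `zappa tail --http` emits.
CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) (?P<proto>[^"]+)" '
    r'(?P<status>\d{3}) (?P<size>\S+)'
)

def parse_clf(line):
    """Return a dict of fields, or None if the line doesn't match."""
    match = CLF.match(line)
    return match.groupdict() if match else None
```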

Similarly, you can do the inverse and only show non-HTTP events and log messages:

$ zappa tail production --non-http

If you don't like the default log colors, you can turn them off with --no-color.

You can also limit the length of the tail with --since, which accepts a simple duration string:

$ zappa tail production --since 4h # 4 hours
$ zappa tail production --since 1m # 1 minute
$ zappa tail production --since 1mm # 1 month

You can filter out the contents of the logs with --filter, like so:

$ zappa tail production --http --filter "POST" # Only show POST HTTP requests

Note that this uses the CloudWatch Logs filter syntax.

To tail logs without following (to exit immediately after displaying the end of the requested logs), pass --disable-keep-open:

$ zappa tail production --since 1h --disable-keep-open

Remote Function Invocation

You can execute any function in your application directly at any time by using the invoke command.

For instance, suppose you have a basic application in a file called "my_app.py", and you want to invoke a function in it called "my_function". Once your application is deployed, you can invoke that function at any time by calling:

$ zappa invoke production my_app.my_function

Any remote print statements made and the value the function returned will then be printed to your local console. Nifty!

You can also invoke interpretable Python 3.8/3.9/3.10/3.11/3.12 strings directly by using --raw, like so:

$ zappa invoke production "print(1 + 2 + 3)" --raw

For instance, it can come in handy if you want to create your first superuser on an RDS database running in a VPC (like Serverless Aurora):

$ zappa invoke staging "from django.contrib.auth import get_user_model; User = get_user_model(); User.objects.create_superuser('username', 'email', 'password')" --raw

Django Management Commands

As a convenience, Zappa can also invoke remote Django 'manage.py' commands with the manage command. For instance, to perform the basic Django status check:

$ zappa manage production showmigrations admin

Obviously, this only works for Django projects which have their settings properly defined.

For commands which have their own arguments, you can also pass the command in as a string, like so:

$ zappa manage production "shell --version"

Commands which require direct user input, such as createsuperuser, should be replaced by commands which use zappa invoke <env> --raw.

For more Django integration, take a look at the zappa-django-utils project.

SSL Certification

Zappa can be deployed to custom domain names and subdomains with custom SSL certificates, Let's Encrypt certificates, and AWS Certificate Manager (ACM) certificates.

Currently, the easiest of these to use are the AWS Certificate Manager certificates, as they are free, self-renewing, and require the least amount of work.

Once configured as described below, all of these methods use the same command:

$ zappa certify

When deploying from a CI/CD system, you can use:

$ zappa certify --yes

to skip the confirmation prompt.

Deploying to a Domain With AWS Certificate Manager

Amazon provides their own free alternative to Let's Encrypt called AWS Certificate Manager (ACM). To use this service with Zappa:

  1. Verify your domain in the AWS Certificate Manager console.
  2. In the console, select the N. Virginia (us-east-1) region and request a certificate for your domain or subdomain (sub.yourdomain.tld), or request a wildcard domain (*.yourdomain.tld).
  3. Copy the entire ARN of that certificate and place it in the Zappa setting certificate_arn.
  4. Set your desired domain in the domain setting.
  5. Call $ zappa certify to create and associate the API Gateway distribution using that certificate.

Deploying to a Domain With a Let's Encrypt Certificate (DNS Auth)

If you want to use Zappa on a domain with a free Let's Encrypt certificate using automatic Route 53 based DNS Authentication, you can follow this handy guide.

Deploying to a Domain With a Let's Encrypt Certificate (HTTP Auth)

If you want to use Zappa on a domain with a free Let's Encrypt certificate using HTTP Authentication, you can follow this guide.

However, it's now far easier to use Route 53-based DNS authentication, which will allow you to use a Let's Encrypt certificate with a single $ zappa certify command.

Deploying to a Domain With Your Own SSL Certs

  1. The first step is to create a custom domain and obtain your SSL cert / key / bundle.
  2. Ensure you have set the domain setting within your Zappa settings JSON - this will avoid problems with the Base Path mapping between the Custom Domain and the API invoke URL, which gets the Stage Name appended in the URI
  3. Add the paths to your SSL cert / key / bundle to the certificate, certificate_key, and certificate_chain settings, respectively, in your Zappa settings JSON
  4. Set route53_enabled to false if you plan on using your own DNS provider, and not an AWS Route53 Hosted zone.
  5. Deploy or update your app using Zappa
  6. Run $ zappa certify to upload your certificates and register the custom domain name with your API gateway.

Executing in Response to AWS Events

Similarly, you can have your functions execute in response to events that happen in the AWS ecosystem, such as S3 uploads, DynamoDB entries, Kinesis streams, SNS messages, and SQS queues.

In your zappa_settings.json file, define your event sources and the function you wish to execute. For instance, this will execute your_module.process_upload_function in response to new objects in your my-bucket S3 bucket. Note that process_upload_function must accept event and context parameters.

{
    "production": {
       ...
       "events": [{
            "function": "your_module.process_upload_function",
            "event_source": {
                  "arn":  "arn:aws:s3:::my-bucket",
                  "events": [
                    "s3:ObjectCreated:*" // Supported event types: http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html#supported-notification-event-types
                  ],
                  "key_filters": [{ // optional
                    "type": "suffix",
                    "value": "yourfile.json"
                  },
                  {
                    "type": "prefix",
                    "value": "prefix/for/your/object"
                  }]
               }
            }],
       ...
    }
}

And then:

$ zappa schedule production

And now your function will execute every time a new upload appears in your bucket!

To access the key's information in your application context, you'll want process_upload_function to look something like this:

import boto3
from urllib.parse import unquote_plus

s3_client = boto3.client('s3')

def process_upload_function(event, context):
    """
    Process a file upload.
    """

    # Get the uploaded file's information
    record = event['Records'][0]['s3']
    bucket = record['bucket']['name'] # Will be `my-bucket`
    key = unquote_plus(record['object']['key']) # The path of the uploaded file (keys arrive URL-encoded)

    # Get the bytes from S3
    local_path = '/tmp/' + key.replace('/', '_') # Download to Lambda's writable tmp space
    s3_client.download_file(bucket, key, local_path)
    with open(local_path, 'rb') as f:
        file_bytes = f.read()

Similarly, for a Simple Notification Service event:

        "events": [
            {
                "function": "your_module.your_function",
                "event_source": {
                    "arn":  "arn:aws:sns:::your-event-topic-arn",
                    "events": [
                        "sns:Publish"
                    ]
                }
            }
        ]

Optionally you can add SNS message filters:

        "events": [
            {
                "function": "your_module.your_function",
                "event_source": {
                    "arn":  "arn:aws:sns:::your-event-topic-arn",
                    "filters": {
                        "interests": ["python", "aws", "zappa"],
                        "version": ["1.0"]
                    },
                    ...
                }
            }
        ]

DynamoDB and Kinesis are slightly different, as they are not event-based but instead poll records from a stream:

       "events": [
           {
               "function": "replication.replicate_records",
               "event_source": {
                    "arn":  "arn:aws:dynamodb:us-east-1:1234554:table/YourTable/stream/2016-05-11T00:00:00.000",
                    "starting_position": "TRIM_HORIZON", // Supported values: TRIM_HORIZON, LATEST
                    "batch_size": 50, // Max: 1000
                    "enabled": true // Default is false
               }
           }
       ]

SQS also works by polling: Lambda pulls messages from the queue rather than receiving push events. At this time, only "Standard" queues can trigger Lambda functions, not "FIFO" queues. Read the AWS documentation carefully, since Lambda calls the SQS DeleteMessage API on your behalf once your function completes successfully.

       "events": [
           {
               "function": "your_module.process_messages",
               "event_source": {
                    "arn":  "arn:aws:sqs:us-east-1:12341234:your-queue-name-arn",
                    "batch_size": 10, // Max: 10. Use 1 to trigger immediate processing
                    "enabled": true // Default is false
               }
           }
       ]
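The handler for an SQS event source is an ordinary (event, context) function that iterates the delivered batch. A sketch, assuming the process_messages name from the settings above and JSON message bodies:

```python
import json

# Sketch of an SQS-triggered function: Lambda delivers up to batch_size
# messages per invocation under event["Records"].
def process_messages(event, context):
    processed = []
    for record in event.get("Records", []):
        payload = json.loads(record["body"])  # SQS message body is a string
        processed.append(payload)
    # Returning normally signals success, letting Lambda delete the batch.
    return processed
```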

For configuring Lex Bot's intent triggered events:

    "bot_events": [
        {
            "function": "lexbot.handlers.book_appointment.handler",
            "event_source": {
                "arn": "arn:aws:lex:us-east-1:01234123123:intent:TestLexEventNames:$LATEST", // optional. In future it will be used to configure the intent
                "intent":"intentName", // name of the bot event configured
                "invocation_source":"DialogCodeHook", // either FulfillmentCodeHook or DialogCodeHook
            }
        }
    ]

Events can also take keyword arguments:

       "events": [
            {
                "function": "your_module.your_recurring_function", // The function to execute
                "kwargs": {"key": "val", "key2": "val2"},  // Keyword arguments to pass. These are available in the event
                "expression": "rate(1 minute)" // When to execute it (in cron or rate format)
            }
       ]

To get the keyword arguments you will need to look inside the event dictionary:

def your_recurring_function(event, context):
    my_kwargs = event.get("kwargs")  # dict of kwargs given in zappa_settings file

You can find more example event sources here.

Asynchronous Task Execution

Zappa also now offers the ability to seamlessly execute functions asynchronously in a completely separate AWS Lambda instance!

For example, if you have a Flask API for ordering a pie, you can call your bake function seamlessly in a completely separate Lambda instance by using the zappa.asynchronous.task decorator like so:

from flask import Flask
from zappa.asynchronous import task
app = Flask(__name__)

@task
def make_pie():
    """ This takes a long time! """
    ingredients = get_ingredients()
    pie = bake(ingredients)
    deliver(pie)

@app.route('/api/order/pie')
def order_pie():
    """ This returns immediately! """
    make_pie()
    return "Your pie is being made!"

And that's it! Your API response will return immediately, while the make_pie function executes in a completely different Lambda instance.

When calls to @task decorated functions or the zappa.asynchronous.run command occur outside of Lambda, such as your local dev environment, the functions will execute immediately and locally. The zappa asynchronous functionality only works when in the Lambda environment or when specifying Remote Invocations.

Catching Exceptions

Putting a try..except block on an asynchronous task like this:

@task
def make_pie():
    try:
        ingredients = get_ingredients()
        pie = bake(ingredients)
        deliver(pie)
    except Fault as error:
        """send an email"""
    ...
    return Response('Web services down', status=503)

will cause an email to be sent twice for the same error. See asynchronous retries at AWS. To work around this side-effect, and have the fault handler execute only once, change the return value to:

@task
def make_pie():
    try:
        """code block"""
    except Fault as error:
        """send an email"""
    ...
    return {} #or return True

Task Sources

By default, this feature uses direct AWS Lambda invocation. You can instead use AWS Simple Notification Service as the task event source by using the task_sns decorator, like so:

from zappa.asynchronous import task_sns
@task_sns

Using SNS also requires setting the following settings in your zappa_settings:

{
  "dev": {
    ..
      "async_source": "sns", // Source of async tasks. Defaults to "lambda"
      "async_resources": true, // Create the SNS topic to use. Defaults to true.
    ..
    }
}

This will automatically create and subscribe to the SNS topic the code will use when you call the zappa schedule command.

Using SNS will also return a message ID in case you need to track your invocations.

Direct Invocation

You can also use this functionality without a decorator by passing your function to zappa.asynchronous.run, like so:

from zappa.asynchronous import run

run(your_function, args, kwargs) # Using Lambda
run(your_function, args, kwargs, service='sns') # Using SNS

Remote Invocations

By default, Zappa will use the current Lambda function's name and AWS region. If you wish to invoke a Lambda with a different function name or region, or invoke your Lambda from outside of Lambda, you must specify the remote_aws_lambda_function_name and remote_aws_region arguments so that the application knows which function and region to use. For example, if some part of our pizza-making application had to live on an EC2 instance, but we wished to call the make_pie() function on its own Lambda instance, we would do it as follows:

@task(remote_aws_lambda_function_name='pizza-pie-prod', remote_aws_region='us-east-1')
def make_pie():
   """ This takes a long time! """
   ingredients = get_ingredients()
   pie = bake(ingredients)
   deliver(pie)

If those task() parameters were not used, then EC2 would execute the function locally. These same remote_aws_lambda_function_name and remote_aws_region arguments can be used on the zappa.asynchronous.run() function as well.

Restrictions

The following restrictions to this feature apply:

  • Functions must have a clean import path -- i.e. no closures, lambdas, or methods.
  • args and kwargs must be JSON-serializable.
  • The JSON-serialized arguments must be within the size limits for Lambda (256K) or SNS (256K) events.
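As a rough pre-flight check (an assumption for illustration, not a Zappa API), you can verify that serialized arguments fit under the 256K event limit before dispatching:

```python
import json

# Hypothetical helper: check that task arguments will fit within the
# 256 KB Lambda/SNS event size limit once JSON-serialized.
def fits_async_limit(args, kwargs, limit=256 * 1024):
    payload = json.dumps({"args": args, "kwargs": kwargs})
    return len(payload.encode("utf-8")) <= limit
```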

All of this code is still backwards-compatible with non-Lambda environments - it simply executes in a blocking fashion and returns the result.

Running Tasks in a VPC

If you're running Zappa in a Virtual Private Cloud (VPC), you'll need to configure your subnets to allow your lambda to communicate with services inside your VPC as well as the public Internet. A minimal setup requires two subnets.

In subnet-a:

  • Create a NAT
  • Create an Internet gateway
  • In the route table, create a route pointing the Internet gateway to 0.0.0.0/0.

In subnet-b:

  • Place your lambda function
  • In the route table, create a route pointing the NAT that belongs to subnet-a to 0.0.0.0/0.

You can place your lambda in multiple subnets that are configured the same way as subnet-b for high availability.

Some helpful resources are this tutorial, this other tutorial and this AWS doc page.

Responses

It is possible to capture the responses of asynchronous tasks.

Zappa uses DynamoDB as the backend for these.

To capture responses, you must configure an async_response_table in zappa_settings. This is the DynamoDB table name. Then, when decorating with @task, pass capture_response=True.

Async responses are assigned a response_id. This is returned as a property of the LambdaAsyncResponse (or SnsAsyncResponse) object that is returned by the @task decorator.

Example:

from zappa.asynchronous import task, get_async_response
from flask import Flask, make_response, abort, url_for, redirect, request, jsonify
from time import sleep

app = Flask(__name__)

@app.route('/payload')
def payload():
    delay = request.args.get('delay', 60)
    x = longrunner(delay)
    return redirect(url_for('response', response_id=x.response_id))

@app.route('/async-response/<response_id>')
def response(response_id):
    response = get_async_response(response_id)
    if response is None:
        abort(404)

    if response['status'] == 'complete':
        return jsonify(response['response'])

    sleep(5)

    return "Not yet ready. Redirecting.", 302, {
        'Content-Type': 'text/plain; charset=utf-8',
        'Location': url_for('response', response_id=response_id, backoff=5),
        'X-redirect-reason': "Not yet ready.",
    }

@task(capture_response=True)
def longrunner(delay):
    sleep(float(delay))
    return {'MESSAGE': "It took {} seconds to generate this.".format(delay)}

Advanced Settings

There are other settings that you can define in your local settings to change Zappa's behavior. Use these at your own risk!

 {
    "dev": {
        "additional_text_mimetypes": [], // allows you to provide additional mimetypes to be handled as text when binary_support is true.
        "alb_enabled": false, // enable provisioning of application load balancing resources. If set to true, you _must_ fill out the alb_vpc_config option as well.
        "alb_vpc_config": {
            "CertificateArn": "your_acm_certificate_arn", // ACM certificate ARN for ALB
            "SubnetIds": [], // list of subnets for ALB
            "SecurityGroupIds": [] // list of security groups for ALB
        },
        "api_key_required": false, // enable securing API Gateway endpoints with x-api-key header (default False)
        "api_key": "your_api_key_id", // optional, use an existing API key. The option "api_key_required" must be true to apply
        "apigateway_enabled": true, // Set to false if you don't want to create an API Gateway resource. Default true.
        "apigateway_description": "My funky application!", // Define a custom description for the API Gateway console. Default None.
        "assume_policy": "my_assume_policy.json", // optional, IAM assume policy JSON file
        "attach_policy": "my_attach_policy.json", // optional, IAM attach policy JSON file
        "apigateway_policy": "my_apigateway_policy.json", // optional, API Gateway resource policy JSON file
        "async_source": "sns", // Source of async tasks. Defaults to "lambda"
        "async_resources": true, // Create the SNS topic and DynamoDB table to use. Defaults to true.
        "async_response_table": "your_dynamodb_table_name",  // the DynamoDB table name to use for captured async responses; defaults to None (can't capture)
        "async_response_table_read_capacity": 1,  // DynamoDB table read capacity; defaults to 1
        "async_response_table_write_capacity": 1,  // DynamoDB table write capacity; defaults to 1
        "aws_endpoint_urls": { "aws_service_name": "endpoint_url" }, // a dictionary of endpoint_urls that emulate the appropriate service.  Usually used for testing, for instance with `localstack`.
        "aws_environment_variables" : {"your_key": "your_value"}, // A dictionary of environment variables that will be available to your deployed app via AWS Lambdas native environment variables. See also "environment_variables" and "remote_env" . Default {}.
        "aws_kms_key_arn": "your_aws_kms_key_arn", // Your AWS KMS Key ARN
        "aws_region": "aws-region-name", // optional, uses region set in profile or environment variables if not set here,
        "binary_support": true, // Enable automatic MIME-type based response encoding through API Gateway. Default true.
        "callbacks": { // Call custom functions during the local Zappa deployment/update process
            "settings": "my_app.settings_callback", // After loading the settings
            "zip": "my_app.zip_callback", // After creating the package
            "post": "my_app.post_callback", // After command has executed
        },
        "cache_cluster_enabled": false, // Use APIGW cache cluster (default False)
        "cache_cluster_size": 0.5, // APIGW Cache Cluster size (default 0.5)
        "cache_cluster_ttl": 300, // APIGW Cache Cluster time-to-live (default 300)
        "cache_cluster_encrypted": false, // Whether or not APIGW Cache Cluster encrypts data (default False)
        "certificate": "my_cert.crt", // SSL certificate file location. Used to manually certify a custom domain
        "certificate_key": "my_key.key", // SSL key file location. Used to manually certify a custom domain
        "certificate_chain": "my_cert_chain.pem", // SSL certificate chain file location. Used to manually certify a custom domain
        "certificate_arn": "arn:aws:acm:us-east-1:1234512345:certificate/aaaa-bbb-cccc-dddd", // ACM certificate ARN (needs to be in us-east-1 region).
        "cloudwatch_log_level": "OFF", // Enables/configures a level of logging for the given staging. Available options: "OFF", "INFO", "ERROR", default "OFF".
        "cloudwatch_data_trace": false, // Logs all data about received events. Default false.
        "cloudwatch_metrics_enabled": false, // Additional metrics for the API Gateway. Default false.
        "cognito": { // for Cognito event triggers
            "user_pool": "user-pool-id", // User pool ID from AWS Cognito
            "triggers": [{
                "source": "PreSignUp_SignUp", // triggerSource from http://docs.aws.amazon.com/cognito/latest/developerguide/cognito-user-identity-pools-working-with-aws-lambda-triggers.html#cognito-user-pools-lambda-trigger-syntax-pre-signup
                "function": "my_app.pre_signup_function"
            }]
        },
        "context_header_mappings": { "HTTP_header_name": "API_Gateway_context_variable" }, // A dictionary mapping HTTP header names to API Gateway context variables
        "cors": false, // Enable Cross-Origin Resource Sharing. Default false. If true, simulates the "Enable CORS" button on the API Gateway console. Can also be a dictionary specifying lists of "allowed_headers", "allowed_methods", and string of "allowed_origin"
        "dead_letter_arn": "arn:aws:<sns/sqs>:::my-topic/queue", // Optional Dead Letter configuration for when Lambda async invoke fails thrice
        "debug": true, // Print Zappa configuration errors tracebacks in the 500. Default true.
        "delete_local_zip": true, // Delete the local zip archive after code updates. Default true.
        "delete_s3_zip": true, // Delete the s3 zip archive. Default true.
        "django_settings": "your_project.production_settings", // The modular path to your Django project's settings. For Django projects only.
        "domain": "yourapp.yourdomain.com", // Required if you're using a domain
        "base_path": "your-base-path", // Optional base path for API gateway custom domain base path mapping. Default None. Not supported for use with Application Load Balancer event sources.
        "environment_variables": {"your_key": "your_value"}, // A dictionary of environment variables that will be available to your deployed app. See also "remote_env" and "aws_environment_variables". Default {}.
        "events": [
            {   // Recurring events
                "function": "your_module.your_recurring_function", // The function to execute (Pattern: [._A-Za-z0-9]+).
                "expression": "rate(1 minute)" // When to execute it (in cron or rate format)
            },
            {   // AWS Reactive events
                "function": "your_module.your_reactive_function", // The function to execute (Pattern: [._A-Za-z0-9]+).
                "event_source": {
                    "arn":  "arn:aws:s3:::my-bucket", // The ARN of this event source
                    "events": [
                        "s3:ObjectCreated:*" // The specific event to execute in response to.
                    ]
                }
            }
        ],
        "endpoint_configuration": ["EDGE", "REGIONAL", "PRIVATE"],  // Specify APIGateway endpoint None (default) or list `EDGE`, `REGION`, `PRIVATE`
        "exception_handler": "your_module.report_exception", // function that will be invoked in case Zappa sees an unhandled exception raised from your code
        "exclude": ["file.gz", "tests"], // A list of filename patterns to exclude from the archive (see `fnmatch` module for patterns).
        "exclude_glob": ["*.gz", "*.rar", "tests/**/*"], // A list of glob patterns to exclude from the archive. To exclude boto3 and botocore (available in an older version on Lambda), add "boto3*" and "botocore*".
        "extends": "stage_name", // Duplicate and extend another stage's settings. For example, `dev-asia` could extend from `dev-common` with a different `s3_bucket` value.
        "extra_permissions": [{ // Attach any extra permissions to this policy. Default None
            "Effect": "Allow",
            "Action": ["rekognition:*"], // AWS Service ARN
            "Resource": "*"
        }],
        "iam_authorization": false, // optional, use IAM to require request signing. Default false. Note that enabling this will override the authorizer configuration.
        "include": ["your_special_library_to_load_at_handler_init"], // load special libraries into PYTHONPATH at handler init that certain modules cannot find on path
        "authorizer": {
            "function": "your_module.your_auth_function", // Local function to run for token validation. For more information about the function see below.
            "arn": "arn:aws:lambda:<region>:<account_id>:function:<function_name>", // Existing Lambda function to run for token validation.
            "result_ttl": 300, // Optional. Default 300. The time-to-live (TTL) period, in seconds, that specifies how long API Gateway caches authorizer results. Currently, the maximum TTL value is 3600 seconds.
            "token_header": "Authorization", // Optional. Default 'Authorization'. The name of a custom authorization header containing the token that clients submit as part of their requests.
            "validation_expression": "^Bearer \\w+$", // Optional. A validation expression for the incoming token, specify a regular expression.
        },
        "keep_warm": true, // Create CloudWatch events to keep the server warm. Default true. To remove, set to false and then `unschedule`.
        "keep_warm_expression": "rate(4 minutes)", // How often to execute the keep-warm, in cron and rate format. Default 4 minutes.
        "lambda_description": "Your Description", // However you want to describe your project for the AWS console. Default "Zappa Deployment".
        "lambda_handler": "your_custom_handler", // The name of Lambda handler. Default: handler.lambda_handler
        "layers": ["arn:aws:lambda:<region>:<account_id>:layer:<layer_name>:<layer_version>"], // optional lambda layers
        "lambda_concurrency": 10, // Sets the maximum number of simultaneous executions for a function, and reserves capacity for that concurrency level. Default is None.
        "lets_encrypt_key": "s3://your-bucket/account.key", // Let's Encrypt account key path. Can either be an S3 path or a local file path.
        "log_level": "DEBUG", // Set the Zappa log level. Can be one of CRITICAL, ERROR, WARNING, INFO and DEBUG. Default: DEBUG
        "manage_roles": true, // Have Zappa automatically create and define IAM execution roles and policies. Default true. If false, you must define your own IAM Role and role_name setting.
        "memory_size": 512, // Lambda function memory in MB. Default 512.
        "ephemeral_storage": { "Size": 512 }, // Lambda function ephemeral_storage size in MB, Default 512, Max 10240
        "num_retained_versions":null, // Indicates the number of old versions to retain for the lambda. If absent, keeps all the versions of the function.
        "payload_compression": true, // Whether or not to enable API gateway payload compression (default: true)
        "payload_minimum_compression_size": 0, // The threshold size (in bytes) below which payload compression will not be applied (default: 0)
        "prebuild_script": "your_module.your_function", // Function to execute before uploading code
        "profile_name": "your-profile-name", // AWS profile credentials to use. Default 'default'. Removing this setting will use the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables instead.
        "project_name": "MyProject", // The name of the project as it appears on AWS. Defaults to a slugified `pwd`.
        "remote_env": "s3://my-project-config-files/filename.json", // optional file in s3 bucket containing a flat json object which will be used to set custom environment variables.
        "role_name": "MyLambdaRole", // Name of Zappa execution role. Default <project_name>-<env>-ZappaExecutionRole. To use a different, pre-existing policy, you must also set manage_roles to false.
        "role_arn": "arn:aws:iam::12345:role/app-ZappaLambdaExecutionRole", // ARN of Zappa execution role. Default to None. To use a different, pre-existing policy, you must also set manage_roles to false. This overrides role_name. Use with temporary credentials via GetFederationToken.
        "route53_enabled": true, // Have Zappa update your Route53 Hosted Zones when certifying with a custom domain. Default true.
        "runtime": "python3.12", // Python runtime to use on Lambda. Can be one of: "python3.8", "python3.9", "python3.10", "python3.11", or "python3.12". Defaults to whatever the current Python being used is.
        "s3_bucket": "dev-bucket", // Zappa zip bucket,
        "slim_handler": false, // Useful if project >50M. Set true to just upload a small handler to Lambda and load actual project from S3 at runtime. Default false.
        "settings_file": "~/Projects/MyApp/settings/dev_settings.py", // Server side settings file location,
        "tags": { // Attach additional tags to AWS Resources
            "Key": "Value",  // Example Key and value
            "Key2": "Value2",
            },
        "timeout_seconds": 30, // Maximum lifespan for the Lambda function (default 30, max 900.)
        "touch": true, // GET the production URL upon initial deployment (default True)
        "touch_path": "/", // The endpoint path to GET when checking the initial deployment (default "/")
        "use_precompiled_packages": true, // If possible, use C-extension packages which have been pre-compiled for AWS Lambda. Default true.
        "vpc_config": { // Optional Virtual Private Cloud (VPC) configuration for Lambda function
            "SubnetIds": [ "subnet-12345678" ], // Note: not all availability zones support Lambda!
            "SecurityGroupIds": [ "sg-12345678" ]
        },
        "xray_tracing": false // Optional, enable AWS X-Ray tracing on your lambda function.
    }
}

YAML Settings

If you prefer YAML over JSON, you can also use a zappa_settings.yml, like so:

---
dev:
  app_function: your_module.your_app
  s3_bucket: your-code-bucket
  events:
  - function: your_module.your_function
    event_source:
      arn: arn:aws:s3:::your-event-bucket
      events:
      - s3:ObjectCreated:*

You can also supply a custom settings file at any time with the -s argument, ex:

$ zappa deploy dev -s my-custom-settings.yml

Similarly, you can supply a zappa_settings.toml file:

[dev]
  app_function = "your_module.your_app"
  s3_bucket = "your-code-bucket"

Advanced Usage

Keeping The Server Warm

Zappa will automatically set up a regularly occurring execution of your application in order to keep the Lambda function warm. This can be disabled via the keep_warm setting.

Serving Static Files / Binary Uploads

Zappa is now able to serve and receive binary files, as detected by their MIME-type.

However, generally Zappa is designed for running your application code, not for serving static web assets. If you plan on serving custom static assets in your web application (CSS/JavaScript/images/etc.,), you'll likely want to use a combination of AWS S3 and AWS CloudFront.

Your web application framework will likely be able to handle this for you automatically. For Flask, there is Flask-S3, and for Django, there is Django-Storages.

Similarly, you may want to design your application so that static binary uploads go directly to S3, which then triggers an event response defined in your events setting! That's thinking serverlessly!
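As a sketch of that pattern, a reactive function wired up via the events setting receives the standard S3 notification payload (the function name and bucket below are hypothetical):

```python
# A minimal reactive handler sketch; point an "events" entry with an
# s3:ObjectCreated:* event source at this function.
def process_upload(event, context):
    # S3 delivers one or more records per invocation.
    keys = []
    for record in event.get('Records', []):
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']
        keys.append(f"s3://{bucket}/{key}")
    return keys
```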

Enabling CORS

The simplest way to enable CORS (Cross-Origin Resource Sharing) for your Zappa application is to set cors to true in your Zappa settings file and update, which is the equivalent of pushing the "Enable CORS" button in the AWS API Gateway console. This is disabled by default, but you may wish to enable it for APIs which are accessed from other domains, etc.

You can also simply handle CORS directly in your application. Your web framework will probably have an extension to do this, such as django-cors-headers or Flask-CORS. Using these will make your code more portable.
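If you prefer not to pull in an extension, a minimal hand-rolled sketch for Flask might look like this (the origin is a placeholder; extensions like Flask-CORS also handle preflight OPTIONS requests for you):

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def add_cors_headers(response):
    # Minimal hand-rolled CORS, assuming a single trusted origin.
    response.headers['Access-Control-Allow-Origin'] = 'https://example.com'
    response.headers['Access-Control-Allow-Headers'] = 'Content-Type'
    return response

@app.route('/hello')
def hello():
    return 'Hello'
```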

Large Projects

AWS currently limits Lambda zip sizes to 50 megabytes. If your project is larger than that, set slim_handler: true in your zappa_settings.json. In this case, your fat application package will be replaced with a small handler-only package. The handler file then pulls the rest of the large project down from S3 at run time! The initial load of the large project may add to startup overhead, but the difference should be minimal on a warm Lambda function. Note that this will also eat into the storage space of your application function. AWS supports a custom /tmp directory storage size in the range of 512 - 10240 MB; use ephemeral_storage in zappa_settings.json to adjust it if your project is larger than the default 512 MB.
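For instance, a stage that needs both the slim handler and extra /tmp space might look like the following (the 1024 MB size is illustrative):

```json
{
    "dev": {
        "slim_handler": true,
        "ephemeral_storage": { "Size": 1024 }
    }
}
```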

Enabling Bash Completion

Bash completion can be enabled by adding the following to your .bashrc:

  eval "$(register-python-argcomplete zappa)"

register-python-argcomplete is provided by the argcomplete Python package. If this package was installed in a virtualenv then the command must be run there. Alternatively you can execute:

activate-global-python-argcomplete --dest=- > file

The file's contents should then be sourced in e.g. ~/.bashrc.

Enabling Secure Endpoints on API Gateway

API Key

You can use the api_key_required setting to generate an API key and require it for all the routes of your API Gateway. The process is as follows:

  1. Deploy/redeploy (update won't work) and write down the id for the key that has been created
  2. Go to AWS console > Amazon API Gateway and
    • select "API Keys" and find the key value (for example key_value)
    • select "Usage Plans", create a new usage plan and link the API Key and the API that Zappa has created for you
  3. Send a request where you pass the key value as a header called x-api-key to access the restricted endpoints (for example with curl: curl --header "x-api-key: key_value"). Note that without the x-api-key header, you will receive a 403.

IAM Policy

You can enable IAM-based (v4 signing) authorization on an API by setting the iam_authorization setting to true. Your API will then require signed requests and access can be controlled via IAM policy. Unsigned requests will receive a 403 response, as will requesters who are not authorized to access the API. Enabling this will override the Authorizer configuration (see below).

API Gateway Lambda Authorizers

If you deploy an API endpoint with Zappa, you can take advantage of API Gateway Lambda Authorizers to implement token-based authentication - all you need to do is provide a function that creates the required output; Zappa takes care of the rest. A good start for the function is the AWS Labs blueprint example.

If you are wondering what you might use an Authorizer for, here are some potential use cases:

  1. Call out to OAuth provider
  2. Decode a JWT token inline
  3. Lookup in a self-managed DB (for example DynamoDB)

Zappa can be configured to call a function inside your code to do the authorization, or to call some other existing lambda function (which lets you share the authorizer between multiple lambdas). You control the behavior by specifying either the arn or function_name values in the authorizer settings block.
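The shape of such a function can be sketched as follows, loosely following the AWS Labs blueprint: API Gateway passes the raw token in event['authorizationToken'] and the method ARN in event['methodArn'], and the function must return an IAM policy document. The token check below is a placeholder; validate a real JWT or OAuth token there.

```python
def token_auth(event, context):
    # Placeholder check; replace with real token validation.
    token = event.get('authorizationToken', '')
    effect = 'Allow' if token == 'Bearer my-secret-token' else 'Deny'
    return {
        'principalId': 'user',
        'policyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Action': 'execute-api:Invoke',
                'Effect': effect,
                'Resource': event['methodArn'],
            }],
        },
        # Keys here surface as $context.authorizer.* context variables.
        'context': {'user_id': 'user'},
    }
```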

For example, to get the Cognito identity, add this to a zappa_settings.yaml:

  context_header_mappings:
    user_id: authorizer.user_id

Which can now be accessed in Flask like this:

from flask import Flask, request

app = Flask(__name__)

@app.route('/hello')
def hello_world():
    print(request.headers.get('user_id'))
    return 'Hello'

Cognito User Pool Authorizer

You can also use AWS Cognito User Pool Authorizer by adding:

{
    "authorizer": {
        "type": "COGNITO_USER_POOLS",
        "provider_arns": [
            "arn:aws:cognito-idp:{region}:{account_id}:userpool/{user_pool_id}"
        ]
    }
}

API Gateway Resource Policy

You can also use API Gateway Resource Policies. Example of IP Whitelisting:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": [
                        "1.2.3.4/32"
                    ]
                }
            }
        }
    ]
}

Setting Environment Variables

Local Environment Variables

If you want to set local environment variables for a deployment stage, you can simply set them in your zappa_settings.json:

{
    "dev": {
        ...
        "environment_variables": {
            "your_key": "your_value"
        }
    },
    ...
}

You can then access these inside your application with:

import os
your_value = os.environ.get('your_key')

If your project needs to be aware of the type of environment you're deployed to, you'll also be able to get SERVERTYPE (AWS Lambda), FRAMEWORK (Zappa), PROJECT (your project name) and STAGE (dev, production, etc.) variables at any time.
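For example, these can drive stage-dependent configuration; the 'local' fallback below is an assumption to cover development runs outside Lambda:

```python
import os

# Zappa injects STAGE and SERVERTYPE on Lambda; fall back for local runs.
STAGE = os.environ.get('STAGE', 'local')
ON_LAMBDA = os.environ.get('SERVERTYPE') == 'AWS Lambda'

DEBUG = STAGE != 'production'
```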

Remote AWS Environment Variables

If you want to use native AWS Lambda environment variables you can use the aws_environment_variables configuration setting. These are useful as you can easily change them via the AWS Lambda console or cli at runtime. They are also useful for storing sensitive credentials and to take advantage of KMS encryption of environment variables.

During development, you can add your Zappa-defined variables to your locally running app by, for example, adding the following (for Django, to manage.py):

import json
import os

if os.environ.get('AWS_LAMBDA_FUNCTION_NAME') is None: # Ensures app is NOT running on Lambda
    json_data = open('zappa_settings.json')
    env_vars = json.load(json_data)['dev']['environment_variables']
    os.environ.update(env_vars)

Remote Environment Variables

Any environment variables that you have set outside of Zappa (via the AWS Lambda console or CLI) will remain as they are when running update, unless they are also in aws_environment_variables, in which case the remote value will be overwritten by the one in the settings file. If you are using KMS-encrypted AWS environment variables, you can set your KMS Key ARN in the aws_kms_key_arn setting. Make sure the values you set are encrypted in that case.

Note: if you rely on these as well as environment_variables, and you have the same key names, then those in environment_variables will take precedence as they are injected in the lambda handler.

Remote Environment Variables (via an S3 file)

S3 remote environment variables were added to Zappa before AWS introduced native environment variables for Lambda (via the console and CLI). Before going down this route, check whether the options above make more sense for your use case.

If you want to use remote environment variables to configure your application (which is especially useful for things like sensitive credentials), you can create a file and place it in an S3 bucket to which your Zappa application has access. To do this, add the remote_env key to zappa_settings pointing to a file containing a flat JSON object, so that each key-value pair in the object will be set as an environment variable whenever a new Lambda instance spins up.

For example, to ensure your application has access to the database credentials without storing them in your version control, you can add a file to S3 with the connection string and load it into the lambda environment using the remote_env configuration setting.

super-secret-config.json (uploaded to my-config-bucket):

{
    "DB_CONNECTION_STRING": "super-secret:database"
}

zappa_settings.json:

{
    "dev": {
        ...
        "remote_env": "s3://my-config-bucket/super-secret-config.json",
    },
    ...
}

Now in your application you can use:

import os
db_string = os.environ.get('DB_CONNECTION_STRING')

API Gateway Context Variables

If you want to map an API Gateway context variable (http://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html) to an HTTP header you can set up the mapping in zappa_settings.json:

{
    "dev": {
        ...
        "context_header_mappings": {
            "HTTP_header_name": "API_Gateway_context_variable"
        }
    },
    ...
}

For example, if you want to expose the $context.identity.cognitoIdentityId variable as the HTTP header CognitoIdentityId, and $context.stage as APIStage, you would have:

{
    "dev": {
        ...
        "context_header_mappings": {
            "CognitoIdentityId": "identity.cognitoIdentityId",
            "APIStage": "stage"
        }
    },
    ...
}

Catching Unhandled Exceptions

By default, if an unhandled exception happens in your code, Zappa will just print the stacktrace into a CloudWatch log. If you wish to use an external reporting tool to take note of those exceptions, you can use the exception_handler configuration option.

zappa_settings.json:

{
    "dev": {
        ...
        "exception_handler": "your_module.unhandled_exceptions",
    },
    ...
}

The function has to accept three arguments: exception, event, and context:

your_module.py

def unhandled_exceptions(e, event, context):
    send_to_raygun(e, event)  # gather data you need and send
    return True # Prevent invocation retry

You may still need a similar exception handler inside your application; this is just a way to catch exceptions which happen at the Zappa/WSGI layer (typically event-based invocations, misconfigured settings, bad Lambda packages, and permissions issues).

By default, AWS Lambda will attempt to retry an event-based (non-API Gateway, e.g. CloudWatch) invocation if an exception has been thrown. However, you can prevent this by returning True, as in the example above, so that Zappa will not re-raise the uncaught exception, thus preventing AWS Lambda from retrying the current invocation.

Using Custom AWS IAM Roles and Policies

Custom AWS IAM Roles and Policies for Deployment

You can specify which local profile to use for deploying your Zappa application by defining the profile_name setting, which will correspond to a profile in your AWS credentials file.

Custom AWS IAM Roles and Policies for Execution

The default IAM policy created by Zappa for executing the Lambda is very permissive. It grants access to all actions for all resources for types CloudWatch, S3, Kinesis, SNS, SQS, DynamoDB, and Route53; lambda:InvokeFunction for all Lambda resources; Put to all X-Ray resources; and all Network Interface operations to all EC2 resources. While this allows most Lambdas to work correctly with no extra permissions, it is generally not an acceptable set of permissions for most continuous integration pipelines or production deployments. Instead, you will probably want to manually manage your IAM policies.

To manually define the policy of your Lambda execution role, you must set manage_roles to false and define either the role_name or role_arn in your Zappa settings file.

{
    "dev": {
        ...
        "manage_roles": false, // Disable Zappa client managing roles.
        "role_name": "MyLambdaRole", // Name of your Zappa execution role. Optional, default: <project_name>-<env>-ZappaExecutionRole.
        "role_arn": "arn:aws:iam::12345:role/app-ZappaLambdaExecutionRole", // ARN of your Zappa execution role. Optional.
        ...
    },
    ...
}

Ongoing discussion about the minimum policy requirements necessary for a Zappa deployment can be found here. A more robust solution to managing these entitlements will likely be implemented soon.

To add permissions to the default Zappa execution policy, use the extra_permissions setting:

{
    "dev": {
        ...
        "extra_permissions": [{ // Attach any extra permissions to this policy.
            "Effect": "Allow",
            "Action": ["rekognition:*"], // AWS Service ARN
            "Resource": "*"
        }]
    },
    ...
}

AWS X-Ray

Zappa can enable AWS X-Ray support on your function with a configuration setting:

{
    "dev": {
        ...
        "xray_tracing": true
    },
    ...
}

This will enable it on the Lambda function and allow you to instrument your code with X-Ray. For example, with Flask:

from aws_xray_sdk.core import xray_recorder
from flask import Flask

app = Flask(__name__)

xray_recorder.configure(service='my_app_name')

@app.route('/hello')
@xray_recorder.capture('hello')
def hello_world():
    return 'Hello'

You may use the capture decorator to create subsegments around functions, or xray_recorder.begin_subsegment('subsegment_name') and xray_recorder.end_subsegment() within a function. The official X-Ray documentation for Python has more information on how to use this with your code.

Note that you may create subsegments in your code but an exception will be raised if you try to create a segment, as it is created by the lambda worker. This also means that if you use Flask you must not use the XRayMiddleware the documentation suggests.

Globally Available Server-less Architectures

Global Zappa Slides

Click to see slides from ServerlessConf London!

During the init process, you will be given the option to deploy your application "globally." This will allow you to deploy your application to all available AWS regions simultaneously in order to provide a consistent global speed, increased redundancy, data isolation, and legal compliance. You can also choose to deploy only to "primary" locations, the AWS regions with -1 in their names.

To learn more about these capabilities, see these slides from ServerlessConf London.

Raising AWS Service Limits

Out of the box, AWS sets a limit of 1000 concurrent executions for your functions. If you start to breach these limits, you may start to see errors like ClientError: An error occurred (LimitExceededException) when calling the PutTargets... or something similar.

To avoid this, you can file a service ticket with Amazon to raise your limits up to the many tens of thousands of concurrent executions which you may need. This is a fairly common practice with Amazon, designed to prevent you from accidentally creating extremely expensive bug reports. So, before raising your service limits, make sure that you don't have any rogue scripts which could accidentally create tens of thousands of parallel executions that you don't want to pay for.

Dead Letter Queues

If you want to utilise AWS Lambda's Dead Letter Queue feature simply add the key dead_letter_arn, with the value being the complete ARN to the corresponding SNS topic or SQS queue in your zappa_settings.json.

You must have already created the corresponding SNS/SQS topic/queue, and the Lambda function execution role must have been provisioned with read/publish/sendMessage access to the DLQ resource.

Unique Package ID

For monitoring of different deployments, a unique UUID for each package is available in package_info.json in the root directory of your application's package. You can use this information, or a hash of this file, for things such as tracking errors across different deployments or monitoring the status of deployments on services such as Sentry and New Relic. The package will contain:

{
  "build_platform": "darwin",
  "build_user": "frank",
  "build_time": "1509732511",
  "uuid": "9c2df9e6-30f4-4c0a-ac4d-4ecb51831a74"
}
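A small helper along these lines could read the UUID at startup (the function name is hypothetical, and the fallback covers runs outside Lambda):

```python
import json
from pathlib import Path

def deployment_uuid(path='package_info.json', default='unknown'):
    # package_info.json sits at the root of the deployed package;
    # fall back gracefully when running outside a Zappa deployment.
    p = Path(path)
    if not p.exists():
        return default
    return json.loads(p.read_text()).get('uuid', default)
```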

Application Load Balancer Event Source

Zappa can be used to handle events triggered by Application Load Balancers (ALB). This can be useful in a few circumstances:

  • Since API Gateway has a hard limit of 30 seconds before timing out, you can use an ALB for longer running requests.
  • API Gateway is billed per-request; therefore, costs can become excessive with high-throughput services. The ALB pricing model makes much more sense financially if you're expecting a lot of traffic to your Lambda.
  • ALBs can be placed within a VPC, which may make more sense for private endpoints than using API Gateway's private model (using AWS PrivateLink).

Like API Gateway, Zappa can automatically provision ALB resources for you. You'll need to add the following to your zappa_settings:

"alb_enabled": true,
"alb_vpc_config": {
    "CertificateArn": "arn:aws:acm:us-east-1:[your-account-id]:certificate/[certificate-id]",
    "SubnetIds": [
        // Here, you'll want to provide a list of subnets for your ALB, eg. 'subnet-02a58266'
    ],
    "SecurityGroupIds": [
        // And here, a list of security group IDs, eg. 'sg-fbacb791'
    ]
}

More information on using ALB as an event source for Lambda can be found here.

An important note: right now, Zappa will provision ONE lambda to ONE load balancer, which means using base_path along with ALB configuration is currently unsupported.

Endpoint Configuration

API Gateway can be configured to be only accessible in a VPC. To enable this, configure your VPC to support it, then set the endpoint_configuration to PRIVATE and set up a Resource Policy on the API Gateway. A note about this: if you're using a private endpoint, Zappa won't be able to tell if the API is returning a successful status code upon deploy or update, so you'll have to check it manually to ensure your setup is working properly.

For a full list of options for endpoint configuration, refer to the API Gateway EndpointConfiguration documentation.

Example Private API Gateway configuration

zappa_settings.json:

{
    "dev": {
        ...
        "endpoint_configuration": ["PRIVATE"],
        "apigateway_policy": "apigateway_resource_policy.json",
        ...
    },
    ...
}

apigateway_resource_policy.json:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*",
            "Condition": {
                "StringNotEquals": {
                    "aws:sourceVpc": "{{vpcID}}" // UPDATE ME
                }
            }
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "execute-api:Invoke",
            "Resource": "execute-api:/*"
        }
    ]
}

Cold Starts (Experimental)

During cold start initialization, Lambda may provide more resources than are provisioned for regular invocations. Set the INSTANTIATE_LAMBDA_HANDLER_ON_IMPORT environment variable to True to instantiate the lambda handler on import, so that this extra capacity is spent on application startup. This is an experimental feature - if startup time is critical, look into using Provisioned Concurrency.
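
Since this behaviour is toggled by an environment variable on the function, one way to enable it (a sketch, assuming the standard aws_environment_variables setting is used to set Lambda environment variables) is:

```json
{
    "production": {
        "aws_environment_variables": {
            "INSTANTIATE_LAMBDA_HANDLER_ON_IMPORT": "True"
        }
    }
}
```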

Zappa Guides

Zappa in the Press

Sites Using Zappa

Are you using Zappa? Let us know and we'll list your site here!

Related Projects

  • Mackenzie - AWS Lambda Infection Toolkit
  • NoDB - A simple, server-less, Pythonic object store based on S3.
  • zappa-cms - A tiny server-less CMS for busy hackers. Work in progress.
  • zappa-django-utils - Utility commands to help Django deployments.
  • flask-ask - A framework for building Amazon Alexa applications. Uses Zappa for deployments.
  • zappa-file-widget - A Django plugin for supporting binary file uploads in Django on Zappa.
  • zops - Utilities for teams and continuous integrations using Zappa.
  • cookiecutter-mobile-backend - A cookiecutter Django project with Zappa and S3 uploads support.
  • zappa-examples - Flask, Django, image uploads, and more!
  • zappa-hug-example - Example of a Hug application using Zappa.
  • Zappa Docker Image - A Docker image for running Zappa locally, based on Lambda Docker.
  • zappa-dashing - Monitor your AWS environment (health/metrics) with Zappa and CloudWatch.
  • s3env - Manipulate a remote Zappa environment variable key/value JSON object file in an S3 bucket through the CLI.
  • zappa_resize_image_on_fly - Resize images on the fly using Flask, Zappa, Pillow, and OpenCV-python.
  • zappa-ffmpeg - Run ffmpeg inside a lambda for serverless transformations.
  • gdrive-lambda - Pass JSON data to a CSV file for end users who use Google Drive across the organization.
  • travis-build-repeat - Repeat TravisCI builds to avoid stale test results.
  • wunderskill-alexa-skill - An Alexa skill for adding to a Wunderlist.
  • xrayvision - Utilities and wrappers for using AWS X-Ray with Zappa.
  • terraform-aws-zappa - Terraform modules for creating a VPC, RDS instance, ElastiCache Redis and CloudFront Distribution for use with Zappa.
  • zappa-sentry - Integration of Zappa with Sentry.
  • IOpipe - Monitor, profile and analyze your Zappa apps.

Hacks

Zappa goes quite far beyond what Lambda and API Gateway were ever intended to handle. As a result, there are quite a few hacks in here that allow it to work. Some of those include, but aren't limited to:

  • Using VTL to map body, headers, method, params and query strings into JSON, and then turning that into valid WSGI.
  • Attaching response codes to response bodies, Base64 encoding the whole thing, using that as a regex to route the response code, decoding the body in VTL, and mapping the response body to that.
  • Packing and Base58 encoding multiple cookies into a single cookie because we can only map one kind.
  • Forcing the case permutations of "Set-Cookie" in order to return multiple headers at the same time.
  • Turning cookie-setting 301/302 responses into 200 responses with HTML redirects, because we have no way to set headers on redirects.
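
As a toy illustration of the cookie-packing idea (this is not Zappa's actual implementation - it uses Base64 rather than Base58, and the helper names are invented), several Set-Cookie values can be serialized into one opaque cookie value and recovered later:

```python
import base64
import json

def pack_cookies(cookies):
    """Serialize several Set-Cookie header values into one opaque cookie value."""
    payload = json.dumps(cookies).encode("utf-8")
    # URL-safe Base64 keeps the packed value within the cookie character set.
    return base64.urlsafe_b64encode(payload).decode("ascii")

def unpack_cookies(packed):
    """Recover the original Set-Cookie header values from the packed cookie."""
    payload = base64.urlsafe_b64decode(packed.encode("ascii"))
    return json.loads(payload.decode("utf-8"))

original = ["session=abc123; Path=/", "csrftoken=xyz; HttpOnly"]
assert unpack_cookies(pack_cookies(original)) == original
```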

Contributing

Contributions are very welcome!

Please file tickets for discussion before submitting patches. Pull requests should target master and should leave Zappa in a "shippable" state if merged.

If you are adding a non-trivial amount of new code, please include a functioning test in your PR. For AWS calls, we use the placebo library, which you can learn to use in their README. The test suite will be run by Travis CI once you open a pull request.

Please include the GitHub issue or pull request URL that has discussion related to your changes as a comment in the code (example). This greatly helps for project maintainability, as it allows us to trace back use cases and explain decision making. Similarly, please make sure that you meet all of the requirements listed in the pull request template.

Please feel free to work on any open ticket, especially any ticket marked with the "help-wanted" label. If you get stuck or want to discuss an issue further, please join our Slack channel, where you'll find a community of smart and interesting people working diligently on hard problems. Zappa Slack Auto Invite

Zappa does not intend to conform to PEP8. If you use a linter, isolate your commits so that changes to functionality are kept separate from changes made by your linter.

Using a Local Repo

To use the git HEAD, you probably can't use pip install -e . Instead, you should clone the repo to your machine and then pip install /path/to/zappa/repo or ln -s /path/to/zappa/repo/zappa zappa in your local project.

zappa's People

Contributors

aectann, aehlke, bcongdon, doerge, doppins-bot, geeknam, jakul, javulticat, jneves, lewismm, longzhi, mathom, mcrowson, mialinx, michi88, millarm, miserlou, monkut, puhitaku, rgov, richiverse, schuyler1d, scoates, songgithub, timj98, vascop, wobeng, wrboyce, xuefeng-zhu, youcandanch


zappa's Issues

Community Fork Time!

Context

It's been 4 months since this has seen any activity from a maintainer AFAIK, and I have multiple projects that depend on this.

I don't know about other users, but I think I'm getting to the point where I'd rather we made a community fork than just wait for the docs to come back up or for some of the issues to be resolved.

I say we, as I've never run an open source project and frankly don't have a clue what I'd be doing with it, but I can probably make some time to help out.

Expected Behavior

The project should be maintained, docs should be available, and some communication between users and maintainers would also be good. AFAIK the Slack is down, the blog is down, and the project is increasingly "rotting on the vine".

Actual Behavior

It's been dead for ~4 months.

Possible Fix

Members of the community make a fork - organise and get to work.

(I assume this is what you wanted? @jneves )

[Migrated] No module named 'docutils'

Originally from: Miserlou/Zappa#2205 by tbbottle

When trying to deploy a django application that depends on the docutils package, the following error is raised by the Django application:


Request Method:     GET
Request URL:        https://....execute-api.us-east-1.amazonaws.com/dev/
Django Version:     3.0.1
Exception Type:     ModuleNotFoundError
Exception Value:    No module named 'docutils'
Exception Location: /tmp/.../readme_renderer/rst.py in <module>, line 18
Python Executable:  /var/lang/bin/python3.8
Python Version:     3.8.6
Python Path:        ['/tmp/expi', '/var/task', '/opt/python/lib/python3.8/site-packages', '/opt/python', '/var/runtime', '/var/lang/lib/python38.zip', '/var/lang/lib/python3.8', '/var/lang/lib/python3.8/lib-dynload', '/var/lang/lib/python3.8/site-packages', '/opt/python/lib/python3.8/site-packages', '/opt/python', '/var/task']

Context

I did my best to figure out why: rebuilt my virtual environments from scratch, updated my package versions, moved from Python 3.6 to Python 3.8. The issue persisted: whenever I opened the zip file the package was missing.

I tried copying the docutils directory from my virtual environment to the root of my project, so it would be in the root of the zip. It still wouldn't show up.

Eventually I went looking in the Zappa source:

(.env) $ find .env/lib/python3.8/site-packages/zappa/ -type f -name "*.py" | xargs grep docu                                                          
.env/lib/python3.8/site-packages/zappa//core.py:    '*.hg', 'pip', 'docutils*', 'setuputils*', '__pycache__/*'

Which leads to core.py:

# We never need to include these.
# Related: https://github.com/Miserlou/Zappa/pull/56
# Related: https://github.com/Miserlou/Zappa/pull/581
ZIP_EXCLUDES = [
    '*.exe', '*.DS_Store', '*.Python', '*.git', '.git/*', '*.zip', '*.tar.gz',
    '*.hg', 'pip', 'docutils*', 'setuputils*', '__pycache__/*'
]

Thanks for providing context: reviewing the pull requests, it looks like docutils should exist in the lambda environment without being included in the zappa package. However, it doesn't appear to be in my lambda environment.

I am not clear if this is me misconfiguring things, choosing bad versions, or whether something has changed and the package is in fact no longer available.

All the environment setup has been handled by Zappa. Other than creating an execution policy and IAM user, zappa has done it all.

This is the third project I am deploying with zappa and it is awesome - it has saved me much time and many headaches while also helping me learn about AWS in a more structured way than I would have otherwise. Thank you.

Expected Behavior

Python applications should be able to depend on, and use, the docutils package in a zappa deployed lambda environment.

Actual Behavior

The docutils module cannot be found in the zappa created lambda environment.

Possible Fix

I have been trying to find a list from Amazon of the packages included by default in the lambda environment but haven't been successful. If docutils is not included (i.e. it isn't just me), removing the "docutils*" entry from ZIP_EXCLUDES solves the problem. I have confirmed this in my environment.

If it is just me, it would be nice if these excludes were documented somewhere other than the pull request (which wasn't hitting my web searches); I thought I was going crazy with docutils not appearing in the package.

Steps to Reproduce

  1. Create a new function that imports docutils and returns success.
  2. Deploy the function using zappa deploy dev.
  3. Visit the endpoint (or look at the zappa tail results) -> result: 500.
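
When debugging which packages actually made it into the deployment zip, a quick check with Python's standard zipfile module can help (a sketch - "artifact.zip" and the "docutils/" prefix below are just example names):

```python
import zipfile

def packages_in_zip(zip_path, prefix):
    """Return archive members whose path starts with the given package prefix."""
    with zipfile.ZipFile(zip_path) as zf:
        return [name for name in zf.namelist() if name.startswith(prefix)]

# An empty result means the package was excluded from the bundle, e.g.:
# packages_in_zip("artifact.zip", "docutils/")
```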

Your Environment

  • Zappa version used: zappa --version 0.52.0
  • Operating System and Python version: Mac OSX 10.13.6 - Python 3.8.5
  • The output of pip freeze:
alabaster==0.7.12
argcomplete==1.12.2
asgiref==3.2.3
Babel==2.8.0
bleach==3.1.0
boto3==1.16.63
botocore==1.19.63
ccy==1.1.0
certifi==2019.11.28
cffi==1.14.0
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
cmarkgfm==0.4.2
Django==3.0.1
django-crispy-forms==1.8.1
django-extensions==2.2.5
django-storages==1.8
djangorestframework==3.11.0
docutils==0.16
durationpy==0.5
future==0.18.2
geoip2==3.0.0
gunicorn==20.0.4
hjson==3.0.2
idna==2.8
imagesize==1.2.0
importlib-metadata==3.4.0
Jinja2==2.10.3
jmespath==0.9.4
kappa==0.6.0
MarkupSafe==1.1.1
maxminddb==1.5.2
packaging==19.2
pip-tools==5.5.0
placebo==0.9.0
psycopg2-binary==2.8.6
pycparser==2.19
Pygments==2.5.2
pyparsing==2.4.6
python-dateutil==2.8.1
python-slugify==4.0.1
pytz==2019.3
PyYAML==5.4.1
readme-renderer==28.0
requests==2.22.0
s3transfer==0.3.4
six==1.13.0
snowballstemmer==2.0.0
Sphinx==2.3.1
sphinx-rtd-theme==0.4.3
sphinxcontrib-applehelp==1.0.1
sphinxcontrib-devhelp==1.0.1
sphinxcontrib-htmlhelp==1.0.2
sphinxcontrib-jsmath==1.0.1
sphinxcontrib-qthelp==1.0.2
sphinxcontrib-serializinghtml==1.1.3
sqlparse==0.3.0
stop-words==2018.7.23
text-unidecode==1.3
toml==0.10.2
tqdm==4.56.0
troposphere==2.6.3
typing-extensions==3.7.4.3
urllib3==1.25.7
webencodings==0.5.1
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.52.0
zipp==3.4.0
  • Link to your project (optional): if you need me to, I can create an empty project that depends on docutils and publish it - don't hesitate to ask.
  • Your zappa_settings.json:
{
    "dev": {
        "aws_region": "us-east-1",
        "aws_environment_variables": {
            "RDS_HOST": "project.place.us-east-1.rds.amazonaws.com",
            "RDS_USER": "project",
            "RDS_NAME": "project"
        },
        "django_settings": "project.settings",
        "exclude": [".env", ".git", "__pycache__", "db.sqlite3", "*.zip", "*.tar.gz"],
        "include": [],
        "profile_name": "zappa-project",
        "project_name": "project",
        "runtime": "python3.8",
        "s3_bucket": "zappa-project",
        "slim_handler": false,
        "delete_local_zip": false,
        "vpc": {
            "SubnetIds": ["subnet-aaa"],
            "SecurityGroupIds": ["sg-aaa"]
        }
    }
}

[Migrated] Blitz Testing

Originally from: Miserlou/Zappa#24 by Miserlou

I'm most excited to use Blitz.io (or something similar) to start load-testing Zappa applications.

If we do it right, we should see a perfectly flat line!

This probably depends on profiling and optimizing (as well as improving unit test coverage) before we invest in this, but I think it will really really help to sell the benefits of Zappa.

[Migrated] zappa packaging issue [timeout error and logging issue]

Originally from: Miserlou/Zappa#2181 by siva-annam

Context

I have a sample Flask application that I want to deploy on the AWS environment (API Gateway + Lambda) using Zappa.
My project structure looks like this:

Backend
     misc
        project1
        project2
        etc ....
     flask-project
        backend_logging.py
         main.py
         requirements.txt
         zappa_settings.json

My zappa_settings.json file looks like this:

{
    "dev": {
        "app_function": "main.app",
        "keep_warm" : false,
        "debug" : true,
        "log_level" : "DEBUG",
        "profile_name" : "dev",
        "aws_region": "us-east-1",
        "project_name": "flask-project",
        "runtime": "python3.6",
        "s3_bucket": "zappa_bucket_123",
        "use_precompiled_packages" : true
    }
}

My requirements.txt file looks like this:

-i https://pypi.org/simple
boto3
amazon-dax-client
cachetools
flask-bootstrap
flask-login
flask-wtf
flask
html2text
pycryptodomex
python-dateutil
python-dynamodb-lock
python-jose[cryptography]
pytz
pyyaml
requests
shopifyapi
urllib3
redis-py-cluster
msgpack
rule-engine
zstandard
inflection
pycryptodome
pyfiglet
pyparsing
pytest
ujson
zappa
gevent
werkzeug

I went to the Backend/flask-project/ path and performed these steps:

virtualenv .venv
. .venv/bin/activate

pip3 install zappa
pip3 install -r requirements.txt

zappa package dev -o artifact.zip

After performing these steps, zappa created a zip file for me, namely artifact.zip (around 80 MB).

My AWS environment has some CI/CD setup, so I just need to upload this artifact.zip file into S3. The CI/CD pipeline then takes care of invoking API Gateway and Lambda whenever I send an HTTP request.

Expected Behavior

  1. Whenever I send an HTTP request, it should return a proper response.
  2. Whenever my Flask code runs, it should write the logging messages from my Flask application to CloudWatch as logs.

Actual Behavior

Issue-1
Whenever I send an HTTP request, it returns "Execution failed due to a timeout error" after 29 seconds.

Note:

  1. Based on my research, I found that API Gateway has a time limit of 29 seconds. If API Gateway doesn't get any response from Lambda within 29 seconds, it returns a timeout error.
  2. I also found that if the Lambda function is large in size, it takes more time to process.

After learning these two points, I thought of reducing the size of my zip file. I looked inside the zip and noticed that whenever I run zappa package dev -o artifact.zip, it packages not only my Flask application code but also the misc projects that live in the directory above my Flask project.

I tried using the exclude option of zappa_settings.json but had no success; I ended up with an 80 MB zip file.

I want a way to package only my Flask application code and the requirements necessary for this application, not other project code, and to get proper responses instead of timeout errors when I send HTTP requests.
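
For reference, Zappa's exclude setting takes a list of shell-style glob patterns to drop from the archive; a minimal sketch (the misc pattern below assumes the project layout described above) might look like:

```json
{
    "dev": {
        "app_function": "main.app",
        "exclude": ["*.pyc", "misc*"]
    }
}
```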

Issue-2
When I look at the CloudWatch logs of the Lambda function, the logging messages from my Flask application appear as plain print messages, not as structured log records.
For example:

Expected logging behaviour:
2020-11-01T15:31:43+0000 - DEBUG - - PROCESSING - START : processing started siva

Actual logging behaviour:
2020-11-01T15:31:43+0000  - - PROCESSING - START : processing started siva

As the example above shows, the logging messages from my Flask application appear, but there is no information about whether each one is DEBUG, INFO, ERROR, etc.
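
One common cause of log lines missing their level is a logging format string without %(levelname)s. Below is a minimal sketch of configuring Python logging so the level shows up (note: in the Lambda runtime a root handler may already be installed, so logging.basicConfig(force=True) on Python 3.8+ or configuring a dedicated logger may be needed):

```python
import logging

# Include %(levelname)s so each record shows DEBUG/INFO/ERROR, etc.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s - %(levelname)s - %(message)s",
)

logger = logging.getLogger(__name__)
logger.debug("PROCESSING - START : processing started")
```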

Possible Fix

I don't have any idea.

Your Environment

  • Zappa version used: zappa 0.52.0
  • Operating System and Python version: Linux and Python 3.6
  • The output of pip freeze:
  • Link to your project (optional):
  • Your zappa_settings.json:

Can anyone suggest a solution for this?

[Migrated] Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code

Originally from: https://github.com/Miserlou/Zappa/issues/721294991 by lightmagic1

Context

I'm having issues deploying my project using Zappa

Expected Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. Success!

Actual Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. I get the following error:

Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.

Using Zappa tail:

"Failed to find library: libmysqlclient.so.18...right filename?"

Possible Fix

Steps to Reproduce

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: Ubuntu 20.04 / Python 3.7.7
  • The output of pip freeze:
alembic==1.0.8
aniso8601==6.0.0
argcomplete==1.9.3
astroid==2.2.5
autopep8==1.4.4
Babel==2.6.0
boto3==1.9.153
botocore==1.12.153
cachetools==3.1.1
certifi==2019.9.11
cfn-flip==1.2.0
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
emoji==0.5.4
Flask==1.0.2
Flask-Cors==3.0.7
Flask-Migrate==2.4.0
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.3.2
future==0.16.0
google-api-core==1.14.3
google-auth==1.7.0
google-cloud-speech==1.2.0
google-cloud-texttospeech==0.5.0
googleapis-common-protos==1.6.0
grpcio==1.25.0
hjson==3.0.1
idna==2.8
importlib-metadata==1.7.0
isort==4.3.16
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.3.1
Mako==1.0.8
MarkupSafe==1.1.1
mccabe==0.6.1
mysql-connector-python==8.0.21
pip-tools==5.3.1
placebo==0.9.0
protobuf==3.10.0
pyasn1==0.4.7
pyasn1-modules==0.2.7
pycodestyle==2.5.0
pylint==2.3.1
python-dateutil==2.6.1
python-editor==1.0.4
python-slugify==1.2.4
pytz==2019.3
PyYAML==5.1
requests==2.22.0
rsa==4.0
s3transfer==0.2.0
six==1.13.0
SQLAlchemy==1.3.1
text-unidecode==1.3
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
typed-ast==1.3.1
Unidecode==1.0.23
urllib3==1.24
Werkzeug==0.15.1
wrapt==1.11.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
  • Link to your project (optional):
  • Your zappa_settings.json:
{
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "webhook",
        "runtime": "python3.7",
        "s3_bucket": "zappa-6132y6l",
        "slim_handler": true
    }
}

[Migrated] An uncaught exception happened while servicing this request

Originally from: Miserlou/Zappa#2186 by adamrozencwajg

Hello. After deploying with zappa deploy dev I am now receiving the following when invoking the endpoint in browser:

"{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the zappa tail command.', 'traceback': ['Traceback (most recent call last):\n', ' File "/var/task/handler.py", line 540, in handler\n with Response.from_app(self.wsgi_app, environ) as response:\n', ' File "/var/task/werkzeug/wrappers/base_response.py", line 287, in from_app\n return cls(*_run_wsgi_app(app, environ, buffered))\n', ' File "/var/task/werkzeug/wrappers/base_response.py", line 26, in _run_wsgi_app\n return _run_wsgi_app(*args)\n', ' File "/var/task/werkzeug/test.py", line 1119, in run_wsgi_app\n app_rv = app(environ, start_response)\n', "TypeError: 'NoneType' object is not callable\n"]}"

my settings are as follows:

{
    "dev": {
        "aws_region": "us-east-1",
        "django_settings": "groms_site.settings",
        "profile_name": "default",
        "project_name": "groms-site",
        "runtime": "python3.7",
        "s3_bucket": "zappa-0jwgxep72",
        "slim_handler": true,
        "environment_variables": {
            "DJANGO_SETTINGS_MODULE": "groms_site.settings"
        }
    }
}

any help would be much appreciated. best, adam

[Migrated] Odd Quirk in API Gateway Cache Cluster Use

Originally from: Miserlou/Zappa#281 by NathanLawrence

While working on #183, I seem to have noticed another odd behavior in API Gateway caching. Perhaps this is my own mistake with the WSGI configuration or I need to be customizing some Swagger JSON to make this happen correctly (as suggested in #116), but API Gateway seems to be over-caching and ignoring additional parameters.

For example, a hit to a flask-RESTful endpoint at [base staged URI]/items/1224 is returning the items listing you would expect if you asked for [base staged URI]/items.

Thoughts?

[Migrated] Lets Encrypt Certify Issue - 'NoneType' object has no attribute 'groups'

Originally from: Miserlou/Zappa#2197 by ksummersill2

I've followed the handy guide referenced here to associate a Let's Encrypt SSL certificate and validate it using DNS validation.
https://github.com/Miserlou/Zappa/blob/master/docs/domain_with_free_ssl_dns.md

However, I am receiving the following error message when running:
zappa certify dev

The response given back:

(jobatscale) λ zappa certify dev
Calling certify for stage dev..
Are you sure you want to certify? [y/n] y
Certifying domain dev.jobatscale.com..
'NoneType' object has no attribute 'groups'
Failed to generate or install a certificate! :(

Expected Behavior

Should be able to certify the SSL certificate from the Route 53 domain name and access the application using the specified domain name.

Actual Behavior

Receiving error message.

(jobatscale) λ zappa certify dev
Calling certify for stage dev..
Are you sure you want to certify? [y/n] y
Certifying domain dev.jobatscale.com..
'NoneType' object has no attribute 'groups'
Failed to generate or install a certificate! :(

Possible Fix

I attempted to fix the lets-encrypt.py file for the issue with groups, but I'm not really sure where to begin.

Steps to Reproduce

Reproduce by following the guide in the repo: https://github.com/Miserlou/Zappa/blob/master/docs/domain_with_free_ssl_dns.md

Your Environment

  • Zappa version used: 0.52.0

  • Operating System and Python version: Windows 10 / Python 3.8.7

  • The output of pip freeze:
    argcomplete==1.12.2
    boto3==1.16.49
    botocore==1.19.49
    certifi==2020.12.5
    cfn-flip==1.2.3
    chardet==4.0.0
    click==7.1.2
    durationpy==0.5
    Flask==1.1.2
    Flask-PyMongo==2.3.0
    future==0.18.2
    hjson==3.0.2
    idna==2.10
    itsdangerous==1.1.0
    Jinja2==2.11.2
    jmespath==0.10.0
    kappa==0.6.0
    MarkupSafe==1.1.1
    pip-tools==5.5.0
    placebo==0.9.0
    pymongo==3.11.2
    python-dateutil==2.8.1
    python-slugify==4.0.1
    PyYAML==5.3.1
    requests==2.25.1
    s3transfer==0.3.3
    six==1.15.0
    text-unidecode==1.3
    toml==0.10.2
    tqdm==4.55.1
    troposphere==2.6.3
    urllib3==1.26.2
    Werkzeug==0.16.1
    wsgi-request-logger==0.4.6
    zappa==0.52.0

  • Your zappa_settings.json:
    {
    "dev": {
    "app_function": "app.app",
    "parameter_depth": 1,
    "aws_region": "us-east-1",
    "profile_name": "jobatscale",
    "project_name": "jobsatscale",
    "runtime": "python3.7",
    "manage_roles": false,
    "s3_bucket": "jobsatscale",
    "domain": "dev.jobatscale.com",
    "lets_encrypt_key": "account.key",
    "lets_encrypt_expression": "rate(30 days)"
    },
    "dev_ap_east_1": {
    "aws_region": "ap-east-1",
    "extends": "dev"
    },
    "dev_ap_northeast_1": {
    "aws_region": "ap-northeast-1",
    "extends": "dev"
    },
    "dev_ap_northeast_2": {
    "aws_region": "ap-northeast-2",
    "extends": "dev"
    },
    "dev_ap_northeast_3": {
    "aws_region": "ap-northeast-3",
    "extends": "dev"
    },
    "dev_ap_south_1": {
    "aws_region": "ap-south-1",
    "extends": "dev"
    },
    "dev_ap_southeast_1": {
    "aws_region": "ap-southeast-1",
    "extends": "dev"
    },
    "dev_ap_southeast_2": {
    "aws_region": "ap-southeast-2",
    "extends": "dev"
    },
    "dev_ca_central_1": {
    "aws_region": "ca-central-1",
    "extends": "dev"
    },
    "dev_cn_north_1": {
    "aws_region": "cn-north-1",
    "extends": "dev"
    },
    "dev_cn_northwest_1": {
    "aws_region": "cn-northwest-1",
    "extends": "dev"
    },
    "dev_eu_central_1": {
    "aws_region": "eu-central-1",
    "extends": "dev"
    },
    "dev_eu_north_1": {
    "aws_region": "eu-north-1",
    "extends": "dev"
    },
    "dev_eu_west_1": {
    "aws_region": "eu-west-1",
    "extends": "dev"
    },
    "dev_eu_west_2": {
    "aws_region": "eu-west-2",
    "extends": "dev"
    },
    "dev_eu_west_3": {
    "aws_region": "eu-west-3",
    "extends": "dev"
    },
    "dev_sa_east_1": {
    "aws_region": "sa-east-1",
    "extends": "dev"
    },
    "dev_us_east_2": {
    "aws_region": "us-east-2",
    "extends": "dev"
    },
    "dev_us_gov_east_1": {
    "aws_region": "us-gov-east-1",
    "extends": "dev"
    },
    "dev_us_gov_west_1": {
    "aws_region": "us-gov-west-1",
    "extends": "dev"
    },
    "dev_us_west_1": {
    "aws_region": "us-west-1",
    "extends": "dev"
    },
    "dev_us_west_2": {
    "aws_region": "us-west-2",
    "extends": "dev"
    },
    "staging": {
    "app_function": "app.app",
    "parameter_depth": 1,
    "aws_region": "us-east-1",
    "profile_name": "jobatscale",
    "project_name": "jobsatscale",
    "runtime": "python3.7",
    "manage_roles": false,
    "s3_bucket": "jobsatscale",
    "domain": "staging.jobatscale.com",
    "lets_encrypt_key": "account.key",
    "lets_encrypt_expression": "rate(30 days)"
    },
    "production": {
    "app_function": "app.app",
    "parameter_depth": 1,
    "aws_region": "us-east-1",
    "profile_name": "jobatscale",
    "project_name": "jobsatscale",
    "runtime": "python3.7",
    "manage_roles": false,
    "s3_bucket": "jobsatscale",
    "domain": "jobatscale.com",
    "lets_encrypt_key": "account.key",
    "lets_encrypt_expression": "rate(30 days)"
    }
    }

[Migrated] Add Docker Container Image Support

Originally from: Miserlou/Zappa#2188 by ian-whitestone

Earlier this month, AWS announced container image support for AWS Lambda. This means you can now package and deploy lambda functions as container images, instead of using zip files. The container image based approach will solve a lot of headaches caused by the zip file approach, particularly with file sizes (container images can be up to 10GB) and the dependency issues we all know & love.

In an ideal end state, you should be able to call zappa deploy / zappa update / zappa package (etc.) and specify whether you want to use the traditional zip-based approach or new Docker container based approach. If choosing the latter, Zappa would automatically:

  • Build the new docker image for you
    • Not 100% sure how this would work yet. There is a Python library for docker that could be used. Would need to detect the dependencies a user has in their virtual env. and then install them all in the Docker image creation flow.
    • A simpler alternative could involve a user having a Dockerfile that they point Zappa to, and Zappa just executes the build for that.
  • Push the docker image to Amazon's Container Registry solution
    • Automatically create a new repository if one does not exist
  • Create the lambda function with the new Docker image

For a MVP, we should take a BYOI (bring your own image) approach and just get zappa deploy and zappa update to deploy a lambda function using an existing Docker Image that complies with these guidelines.

[Migrated] Unable to send emails via SES on django app hosted through Zappa(lambda)

Originally from: Miserlou/Zappa#2180 by shrinidhinhegde

So I have hosted my site on Lambda using Zappa, and I am using django-amazon-ses to send email after submitting a form.

settings.py

AWS_ACCESS_KEY_ID = os.environ.get("AWS_ACCESS_KEY_ID", "my access key")
AWS_SECRET_ACCESS_KEY = os.environ.get("AWS_SECRET_ACCESS_KEY", "my secret key")
DEFAULT_FROM_EMAIL = '[email protected]'
EMAIL_BACKEND = 'django_amazon_ses.EmailBackend'
AWS_SES_REGION = 'ap-south-1'
AWS_SES_REGION_ENDPOINT = 'email-smtp.ap-south-1.amazonaws.com'

view

@csrf_exempt
def formSubmit(request):
    if request.method == 'POST':
        var = json.loads(request.body)
        name = var['name1']
        email = var['email1']
        company = var['company1']
        description = var['description1']
        send_mail('subject',
                  'msg',
                  '[email protected]',
                  [email])
        send_mail('subject',
                  'msg',
                  '[email protected]',
                  ['[email protected]'])
    return JsonResponse({'result': 'done'})

Now this works fine on my localhost, but when I try to do it online, it shows the following error on submitting:

ClientError at /submit/
An error occurred (InvalidClientTokenId) when calling the SendRawEmail operation: The security token included in the request is invalid.

At first, I thought it was because I hadn't configured a VPC with the Lambda function, but it showed me the same error after I configured a public/private VPC with the wizard.

I'm not sure what I am doing wrong. Any help would be appreciated.

[Migrated] Getting Gateway Timeouts (504), Despite Setting timeout_seconds to 120

Originally from: Miserlou/Zappa#2182 by JordanTreDaniel

I have deployed a lambda that performs image manipulation, which can take a while sometimes. Longer than 30 seconds is a normal, expected time (for some photos).

At first, the timeout was showing in the zappa tail logs, but now, after setting the timeout_seconds option to 120, it seems as though Amazon CloudFront/API Gateway is still enforcing a 30-second limit. I have dug around on AWS, and it seems I can only change this from a CloudFront console. I don't believe there is a CloudFront distribution for this Lambda.

I am running on python 3.6

Expected Behavior

I would think that if I set the timeout_seconds to 120 in the Zappa settings, zappa would apply that setting to not only the lambda, but also whichever gates stand in front of it. (Gateway OR CloudFront Dist)

Actual Behavior

I get 504's when I don't choose to have the lambda downsample the image. (When it's normal for it to take long)

Possible Fix

Is there a way to apply the timeout_seconds to more than just the Lambda itself?

Steps to Reproduce

  1. Go to http://www.rapclouds.com
  2. Sign In (using oauth, super easy)
  3. Search a song
  4. Go to song
  5. Click blue settings button by word cloud
  6. Pick a "mask" (image to make the wordcloud from)
  7. Try different values between 3 and 0 for the downSample option.
  8. A downSample of 0 is almost a guaranteed timeout exception (check the Network tab in devtools to verify).

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: macOS, but this is only a problem on AWS (Python 3.6)
  • The output of pip freeze:
argcomplete==1.12.0
boto3==1.14.31
botocore==1.17.31
certifi==2020.6.20
cfn-flip==1.2.3
chardet==3.0.4
click==7.1.2
cycler==0.10.0
docutils==0.15.2
durationpy==0.5
Flask==1.1.2
future==0.18.2
hjson==3.0.1
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
kappa==0.6.0
kiwisolver==1.2.0
MarkupSafe==1.1.1
matplotlib==3.3.0
numpy==1.19.1
Pillow==7.2.0
pip-tools==5.3.0
placebo==0.9.0
pyparsing==2.4.7
python-dateutil==2.6.1
python-slugify==4.0.1
PyYAML==5.3.1
requests==2.24.0
s3transfer==0.3.3
scipy==1.5.3
six==1.15.0
text-unidecode==1.3
toml==0.10.1
tqdm==4.48.0
troposphere==2.6.2
urllib3==1.25.10
Werkzeug==0.16.1
wordcloud==1.7.0
wsgi-request-logger==0.4.6
zappa==0.51.0
  • Your zappa_settings.json:
{
	"dev": {
		"app_function": "app.app",
		"aws_region": "us-east-1",
		"profile_name": "default",
		"project_name": "wordcloudflask",
		"runtime": "python3.6",
		"s3_bucket": "word-cloud-bucket",
		"slim_handler": true,
		"cors": true,
		"binary_support": false,
		"timeout_seconds": 120
	}
}

[Migrated] Handling non-dict SNS messages

Originally from: Miserlou/Zappa#2198 by piotrdrazikowski

Description

The main SNS message handler incorrectly assumes that every message parsed by json.loads() without raising an exception is of dict type, and then calls dict methods like .get() on the parsed object, which eventually leads to a crash. In our case this prevented us from sending a list of dicts, but there are more examples. This PR fixes it by handling the AttributeError exception.

GitHub Issues

Miserlou/Zappa#1466
Miserlou/Zappa#1864
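The defensive parse described above can be sketched as follows (hypothetical function name, not Zappa's actual handler): only treat the payload as structured when json.loads() actually returns a dict.

```python
import json

def parse_sns_message(body):
    """Parse an SNS message body, falling back to wrapping the payload
    when it is not valid JSON or decodes to a non-dict (list, str, ...),
    instead of calling dict methods on it and crashing."""
    try:
        message = json.loads(body)
    except ValueError:
        return {"raw": body}
    if not isinstance(message, dict):
        return {"raw": message}
    return message
```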

[Migrated] Unable to import module 'handler': No module named werkzeug.wrappers

Originally from: Miserlou/Zappa#64 by chennav

I tried installing Werkzeug, but it seems it is already installed:

~/dev/testzappa% sudo pip install Werkzeug
Requirement already satisfied (use --upgrade to upgrade): Werkzeug in /usr/lib/python2.7/site-packages

I guess it isn't being packaged into the Lambda.
I get the following error in my Lambda CloudWatch logs:
Unable to import module 'handler': No module named werkzeug.wrappers
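A likely culprit here (an assumption, not confirmed by the report): Zappa packages the virtualenv it is run from, while sudo pip installed Werkzeug into the system Python 2.7 site-packages. A small stdlib check (hypothetical helper) to confirm where a package would be imported from:

```python
import importlib.util

def package_location(name):
    """Return the file a top-level module would be imported from,
    or None if the import system cannot find it at all."""
    spec = importlib.util.find_spec(name)
    return None if spec is None else spec.origin
```

Running this inside the packaging environment shows whether Werkzeug resolves from the virtualenv that Zappa will zip up.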

[Migrated] Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code

Originally from: Miserlou/Zappa#2178 by lightmagic1

Context

I'm having issues deploying my project using Zappa.

Expected Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. Success!

Actual Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. I get the following error:

Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.

Using Zappa tail:

"Failed to find library: libmysqlclient.so.18...right filename?"

Possible Fix

Steps to Reproduce

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: Ubuntu 20.04 / Python 3.7.7
  • The output of pip freeze:
alembic==1.0.8
aniso8601==6.0.0
argcomplete==1.9.3
astroid==2.2.5
autopep8==1.4.4
Babel==2.6.0
boto3==1.9.153
botocore==1.12.153
cachetools==3.1.1
certifi==2019.9.11
cfn-flip==1.2.0
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
emoji==0.5.4
Flask==1.0.2
Flask-Cors==3.0.7
Flask-Migrate==2.4.0
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.3.2
future==0.16.0
google-api-core==1.14.3
google-auth==1.7.0
google-cloud-speech==1.2.0
google-cloud-texttospeech==0.5.0
googleapis-common-protos==1.6.0
grpcio==1.25.0
hjson==3.0.1
idna==2.8
importlib-metadata==1.7.0
isort==4.3.16
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.3.1
Mako==1.0.8
MarkupSafe==1.1.1
mccabe==0.6.1
mysql-connector-python==8.0.21
pip-tools==5.3.1
placebo==0.9.0
protobuf==3.10.0
pyasn1==0.4.7
pyasn1-modules==0.2.7
pycodestyle==2.5.0
pylint==2.3.1
python-dateutil==2.6.1
python-editor==1.0.4
python-slugify==1.2.4
pytz==2019.3
PyYAML==5.1
requests==2.22.0
rsa==4.0
s3transfer==0.2.0
six==1.13.0
SQLAlchemy==1.3.1
text-unidecode==1.3
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
typed-ast==1.3.1
Unidecode==1.0.23
urllib3==1.24
Werkzeug==0.15.1
wrapt==1.11.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
  • Link to your project (optional):
  • Your zappa_settings.json:
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "webhook",
        "runtime": "python3.7",
        "s3_bucket": "zappa-6132y6l",
        "slim_handler": true
    },

[Migrated] Zappa init not working

Originally from: Miserlou/Zappa#2201 by bradleydp

zappa init does not seem to be working. I've tried many versions of Python, including 3.8.0, 3.8.7, and 3.9.1.

Traceback (most recent call last):
  File "c:\users\b\appdata\local\programs\python\python38\lib\site-packages\zappa\cli.py", line 2778, in handle
    sys.exit(cli.handle())
  File "c:\users\b\appdata\local\programs\python\python38\lib\site-packages\zappa\cli.py", line 483, in handle
    self.init()
  File "c:\users\b\appdata\local\programs\python\python38\lib\site-packages\zappa\cli.py", line 1576, in init
    click.echo(click.style("""\n\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2588\u2557 \u2588\u2588\u2588\u2588\u2588\u2557
  File "c:\users\b\appdata\local\programs\python\python38\lib\site-packages\click\utils.py", line 272, in echo
    file.write(message)
  File "c:\users\b\appdata\local\programs\python\python38\lib\encodings\cp1252.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 2-9: character maps to <undefined>
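The crash is reproducible without Zappa: the banner is drawn with U+2588 FULL BLOCK characters, which the Windows console's default cp1252 codec cannot represent (setting PYTHONIOENCODING=utf-8 before running zappa init is a commonly suggested workaround).

```python
# The zappa init banner is built from U+2588 FULL BLOCK characters.
banner = "\u2588" * 8

try:
    banner.encode("cp1252")  # what click effectively does on this console
    encodable = True
except UnicodeEncodeError:
    encodable = False

# UTF-8 has no such limitation: each U+2588 encodes to three bytes.
utf8_bytes = banner.encode("utf-8")
```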

[Migrated] Fix when no expressions or keep_warm is false

Originally from: Miserlou/Zappa#2185 by manycoding

When I set keep_warm_expression, it fails with:

    events=events
  File "/Users/valery/.local/share/virtualenvs/document_data_extract_zappa-j8vc3Vbe/lib/python3.6/site-packages/zappa/core.py", line 2783, in schedule_events
    print("Could not create event {} - Please define either an expression or an event source".format(name))
UnboundLocalError: local variable 'name' referenced before assignment

So I fixed it.
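The crash is the classic pattern below (a minimal standalone reproduction, not Zappa's actual code): the error message references the loop variable, which is never bound when the event list is empty.

```python
def schedule_events(events):
    for name in events:
        pass  # ... create each event ...
    # Bug: if `events` was empty, `name` was never assigned, so this
    # line raises UnboundLocalError instead of printing the warning.
    print("Could not create event {} - please define either an "
          "expression or an event source".format(name))
```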

Description

GitHub Issues

[Migrated] PolicyLengthExceededException

Originally from: Miserlou/Zappa#2189 by wangsha

PolicyLengthExceededException occurs when scheduling tasks.

Context

same issue as aws/chalice#48

Expected Behavior

No duplicated statements in the Lambda policy.

Actual Behavior

The same statement is appended to the existing policy on each update.

Possible Fix

Check for statement existence before appending.
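The suggested fix can be sketched as a pure function (hypothetical helper, not Zappa's actual code): only append a statement to the role's policy document if an identical one is not already there, so repeated runs are idempotent.

```python
import json

def add_statement_once(policy_json, statement):
    """Append `statement` to an IAM policy document unless an identical
    statement already exists, so repeated `zappa schedule` runs do not
    grow the policy until it hits PolicyLengthExceededException."""
    policy = json.loads(policy_json)
    statements = policy.setdefault("Statement", [])
    if statement not in statements:
        statements.append(statement)
    return json.dumps(policy)
```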

Steps to Reproduce

With any functioning Zappa project:

  1. Add the following entries to the Zappa config: async_response_table=xxxx, async_source='sns', async_resources=true.
  2. Call zappa schedule and observe that the policy keeps growing until it hits PolicyLengthExceededException.

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: 3.7
  • The output of pip freeze:
  • Link to your project (optional):
  • Your zappa_settings.json:

[Migrated] Nosetests are failing on Windows

Originally from: Miserlou/Zappa#251 by mxplusb

Ran the nosetests on Windows; they are currently failing in a virtualenv:

.....E...E
  0%|          | 0/3 [00:00<?, ? endpoint/s]
4 endpoint [00:00, 1000.01 endpoint/s]      
....
  0%|          | 0/3 [00:00<?, ? endpoint/s]
4 endpoint [00:00, 800.02 endpoint/s]       
................E............
======================================================================
ERROR: test_cli_aws (tests.tests.TestZappa)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "F:\Programming\Repositories\Zappa\tests\utils.py", line 49, in wrapper
    return function(*args, **kwargs)
  File "F:\Programming\Repositories\Zappa\tests\tests.py", line 460, in test_cli_aws
    zappa_cli.update()
  File "F:\Programming\Repositories\Zappa\zappa\cli.py", line 319, in update
    self.create_package()
  File "F:\Programming\Repositories\Zappa\zappa\cli.py", line 933, in create_package
    django_py = ''.join(os.path.join([base, os.sep, 'ext', os.sep, 'django.py']))
  File "C:\Users\Mike\zappa-dev\lib\ntpath.py", line 65, in join
    result_drive, result_path = splitdrive(path)
  File "C:\Users\Mike\zappa-dev\lib\ntpath.py", line 116, in splitdrive
    normp = p.replace(altsep, sep)
AttributeError: 'list' object has no attribute 'replace'
-------------------- >> begin captured stdout << ---------------------
This is a prebuild script!
This application is already deployed - did you mean to call update?

This is a prebuild script!
Packaging project as zip...

--------------------- >> end captured stdout << ----------------------
-------------------- >> begin captured logging << --------------------
placebo.pill: DEBUG: attaching to session: Session(region_name='us-east-1')
placebo.pill: DEBUG: datapath: F:\Programming\Repositories\Zappa\tests\placebo\TestZappa.test_cli_aws
botocore.credentials: DEBUG: Skipping environment variable credential check because profile name was explicitly set.
botocore.credentials: DEBUG: Looking for credentials via: env
botocore.credentials: DEBUG: Looking for credentials via: assume-role
botocore.credentials: DEBUG: Looking for credentials via: shared-credentials-file
botocore.credentials: INFO: Found credentials in shared credentials file: ~/.aws/credentials
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\endpoints.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\s3\2006-03-01\service-2.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\_retry.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x0000000005572DD8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\lambda\2015-03-31\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: lambda
botocore.hooks: DEBUG: Event creating-client-class.lambda: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.lambda: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting lambda timeout as (5, 300)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\events\2015-10-07\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: events
botocore.hooks: DEBUG: Event creating-client-class.events: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.events: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting events timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\apigateway\2015-07-09\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: apigateway
botocore.hooks: DEBUG: Event creating-client-class.apigateway: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.apigateway: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting apigateway timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\logs\2014-03-28\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: logs
botocore.hooks: DEBUG: Event creating-client-class.logs: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.logs: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting logs timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\iam\2010-05-08\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\iam\2010-05-08\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading iam:iam
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\s3\2006-03-01\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x0000000005572DD8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading s3:s3
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\cloudwatch\2010-08-01\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: cloudwatch
botocore.hooks: DEBUG: Event creating-client-class.cloudwatch: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.cloudwatch: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting monitoring timeout as (60, 60)
botocore.hooks: DEBUG: Event before-call.lambda.ListVersionsByFunction: calling handler <bound method Pill._mock_request of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _make_request: lambda.ListVersionsByFunction
placebo.pill: DEBUG: load_response: lambda.ListVersionsByFunction
placebo.pill: DEBUG: get_next_file_path: lambda.ListVersionsByFunction
placebo.pill: DEBUG: load_responses: F:\Programming\Repositories\Zappa\tests\placebo\TestZappa.test_cli_aws\lambda.ListVersionsByFunction_1.json
boto3.resources.factory: DEBUG: Loading iam:Role
boto3.resources.action: INFO: Calling iam:get_role with {u'RoleName': 'ZappaLambdaExecution'}
botocore.hooks: DEBUG: Event before-call.iam.GetRole: calling handler <bound method Pill._mock_request of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _make_request: iam.GetRole
placebo.pill: DEBUG: load_response: iam.GetRole
placebo.pill: DEBUG: get_next_file_path: iam.GetRole
placebo.pill: DEBUG: load_responses: F:\Programming\Repositories\Zappa\tests\placebo\TestZappa.test_cli_aws\iam.GetRole_1.json
botocore.hooks: DEBUG: Event after-call.iam.GetRole: calling handler <function json_decode_policies at 0x000000000407BC88>
boto3.resources.action: DEBUG: Response: {u'Role': {u'AssumeRolePolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': u'sts:AssumeRole', u'Principal': {u'Service': [u'events.amazonaws.com', u'lambda.amazonaws.com', u'apigateway.amazonaws.com']}, u'Effect': u'Allow', u'Sid': u''}]}, u'RoleId': u'AROAJP6JO7RI37FHZGQ6A', u'CreateDate': datetime.datetime(2016, 1, 25, 15, 33, 29), u'Path': u'/', u'RoleName': u'ZappaLambdaExecution', u'Arn': u'arn:aws:iam::724336686645:role/ZappaLambdaExecution'}, u'ResponseMetadata': {u'HTTPStatusCode': 200, u'RequestId': u'4d88be55-28f3-11e6-b455-b3e4bc23726a'}}
boto3.resources.factory: DEBUG: Loading iam:RolePolicy
boto3.resources.model: DEBUG: Renaming RolePolicy attribute role_name
boto3.resources.action: INFO: Calling iam:get_role_policy with {u'RoleName': 'ZappaLambdaExecution', u'PolicyName': 'zappa-permissions'}
botocore.hooks: DEBUG: Event before-call.iam.GetRolePolicy: calling handler <bound method Pill._mock_request of <placebo.pill.Pill object at 0x00000000055A9630>>
placebo.pill: DEBUG: _make_request: iam.GetRolePolicy
placebo.pill: DEBUG: load_response: iam.GetRolePolicy
placebo.pill: DEBUG: get_next_file_path: iam.GetRolePolicy
placebo.pill: DEBUG: load_responses: F:\Programming\Repositories\Zappa\tests\placebo\TestZappa.test_cli_aws\iam.GetRolePolicy_1.json
botocore.hooks: DEBUG: Event after-call.iam.GetRolePolicy: calling handler <function json_decode_policies at 0x000000000407BC88>
boto3.resources.action: DEBUG: Response: {u'RoleName': u'ZappaLambdaExecution', u'PolicyDocument': {u'Version': u'2012-10-17', u'Statement': [{u'Action': [u'logs:*'], u'Resource': u'arn:aws:logs:*:*:*', u'Effect': u'Allow'}, {u'Action': [u'lambda:InvokeFunction'], u'Resource': [u'*'], u'Effect': u'Allow'}, {u'Action': [u'ec2:CreateNetworkInterface'], u'Resource': u'*', u'Effect': u'Allow'}, {u'Action': [u's3:*'], u'Resource': u'arn:aws:s3:::*', u'Effect': u'Allow'}, {u'Action': [u'sns:*'], u'Resource': u'arn:aws:sns:*:*:*', u'Effect': u'Allow'}, {u'Action': [u'sqs:*'], u'Resource': u'arn:aws:sqs:*:*:*', u'Effect': u'Allow'}, {u'Action': [u'dynamodb:*'], u'Resource': u'arn:aws:dynamodb:*:*:*', u'Effect': u'Allow'}]}, u'ResponseMetadata': {u'HTTPStatusCode': 200, u'RequestId': u'81a584b4-28f7-11e6-8296-0db40e88750e'}, u'PolicyName': u'zappa-permissions'}
--------------------- >> end captured logging << ---------------------
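Both of the test_cli failures trace to the same line in cli.py, which passes a single list to os.path.join; ntpath then calls .replace() on that list and raises AttributeError. A sketch of the intended, portable construction (hypothetical function name):

```python
import os

# Failing form from cli.py line 933:
#   django_py = ''.join(os.path.join([base, os.sep, 'ext', os.sep, 'django.py']))
# os.path.join expects separate string arguments and inserts the correct
# separator itself, so neither the list nor the explicit os.sep is needed:
def django_ext_path(base):
    return os.path.join(base, 'ext', 'django.py')
```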

======================================================================
ERROR: test_cli_utility (tests.tests.TestZappa)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "F:\Programming\Repositories\Zappa\tests\tests.py", line 424, in test_cli_utility
    zappa_cli.create_package()
  File "F:\Programming\Repositories\Zappa\zappa\cli.py", line 933, in create_package
    django_py = ''.join(os.path.join([base, os.sep, 'ext', os.sep, 'django.py']))
  File "C:\Users\Mike\zappa-dev\lib\ntpath.py", line 65, in join
    result_drive, result_path = splitdrive(path)
  File "C:\Users\Mike\zappa-dev\lib\ntpath.py", line 116, in splitdrive
    normp = p.replace(altsep, sep)
AttributeError: 'list' object has no attribute 'replace'
-------------------- >> begin captured stdout << ---------------------
Packaging project as zip...

--------------------- >> end captured stdout << ----------------------
-------------------- >> begin captured logging << --------------------
botocore.credentials: DEBUG: Skipping environment variable credential check because profile name was explicitly set.
botocore.credentials: DEBUG: Looking for credentials via: env
botocore.credentials: DEBUG: Looking for credentials via: assume-role
botocore.credentials: DEBUG: Looking for credentials via: shared-credentials-file
botocore.credentials: INFO: Found credentials in shared credentials file: ~/.aws/credentials
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\endpoints.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\s3\2006-03-01\service-2.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\_retry.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x000000000558AB38>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\lambda\2015-03-31\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: lambda
botocore.hooks: DEBUG: Event creating-client-class.lambda: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting lambda timeout as (5, 300)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\events\2015-10-07\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: events
botocore.hooks: DEBUG: Event creating-client-class.events: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting events timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\apigateway\2015-07-09\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: apigateway
botocore.hooks: DEBUG: Event creating-client-class.apigateway: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting apigateway timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\logs\2014-03-28\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: logs
botocore.hooks: DEBUG: Event creating-client-class.logs: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting logs timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\iam\2010-05-08\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\iam\2010-05-08\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading iam:iam
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\s3\2006-03-01\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x000000000558AB38>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading s3:s3
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\cloudwatch\2010-08-01\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: cloudwatch
botocore.hooks: DEBUG: Event creating-client-class.cloudwatch: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting monitoring timeout as (60, 60)
--------------------- >> end captured logging << ---------------------

======================================================================
ERROR: test_upload_remove_s3 (tests.tests.TestZappa)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "F:\Programming\Repositories\Zappa\tests\utils.py", line 49, in wrapper
    return function(*args, **kwargs)
  File "F:\Programming\Repositories\Zappa\tests\tests.py", line 82, in test_upload_remove_s3
    zip_path = z.create_lambda_zip(minify=False)
  File "F:\Programming\Repositories\Zappa\zappa\zappa.py", line 396, in create_lambda_zip
    shutil.rmtree(temp_project_path)
  File "C:\Python27\Lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "C:\Python27\Lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "C:\Python27\Lib\shutil.py", line 247, in rmtree
    rmtree(fullname, ignore_errors, onerror)
  File "C:\Python27\Lib\shutil.py", line 252, in rmtree
    onerror(os.remove, fullname, sys.exc_info())
  File "C:\Python27\Lib\shutil.py", line 250, in rmtree
    os.remove(fullname)
WindowsError: [Error 5] Access is denied: 'c:\\users\\mike\\appdata\\local\\temp\\1471312385\\.git\\objects\\pack\\pack-3a5b3fea6a2c21f734c9883e691e78a442bb0d2c.idx'
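This third failure is a Windows quirk rather than a path bug: Git marks its pack files read-only, and os.remove on Windows refuses to delete read-only files. A commonly used sketch of the fix is an onerror handler that clears the read-only bit and retries:

```python
import os
import shutil
import stat

def _make_writable(func, path, exc_info):
    """shutil.rmtree onerror handler: clear the read-only attribute
    (set by Git on pack files) and retry the failed operation."""
    os.chmod(path, stat.S_IWRITE)
    func(path)

def remove_tree(path):
    # On POSIX the handler simply never fires; on Windows it rescues
    # the WindowsError: [Error 5] Access is denied case above.
    shutil.rmtree(path, onerror=_make_writable)
```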
-------------------- >> begin captured stdout << ---------------------
Packaging project as zip...

--------------------- >> end captured stdout << ----------------------
-------------------- >> begin captured logging << --------------------
placebo.pill: DEBUG: attaching to session: Session(region_name='us-east-1')
placebo.pill: DEBUG: datapath: F:\Programming\Repositories\Zappa\tests\placebo\TestZappa.test_upload_remove_s3
botocore.credentials: DEBUG: Skipping environment variable credential check because profile name was explicitly set.
botocore.credentials: DEBUG: Looking for credentials via: env
botocore.credentials: DEBUG: Looking for credentials via: assume-role
botocore.credentials: DEBUG: Looking for credentials via: shared-credentials-file
botocore.credentials: INFO: Found credentials in shared credentials file: ~/.aws/credentials
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\endpoints.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\s3\2006-03-01\service-2.json
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\_retry.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x0000000006734908>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\lambda\2015-03-31\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: lambda
botocore.hooks: DEBUG: Event creating-client-class.lambda: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.lambda: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting lambda timeout as (5, 300)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\events\2015-10-07\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: events
botocore.hooks: DEBUG: Event creating-client-class.events: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.events: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting events timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\apigateway\2015-07-09\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: apigateway
botocore.hooks: DEBUG: Event creating-client-class.apigateway: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.apigateway: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting apigateway timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\logs\2014-03-28\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: logs
botocore.hooks: DEBUG: Event creating-client-class.logs: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.logs: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting logs timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\iam\2010-05-08\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\iam\2010-05-08\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: iam
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.iam: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.regions: DEBUG: Using partition endpoint for iam, us-east-1: aws-global
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting iam timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading iam:iam
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\boto3\data\s3\2006-03-01\resources-1.json
botocore.client: DEBUG: Registering retry handlers for service: s3
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x00000000040476D8>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function _handler at 0x0000000006734908>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.s3: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting s3 timeout as (60, 60)
boto3.resources.factory: DEBUG: Loading s3:s3
botocore.loaders: DEBUG: Loading JSON file: C:\Users\Mike\zappa-dev\lib\site-packages\botocore\data\cloudwatch\2010-08-01\service-2.json
botocore.client: DEBUG: Registering retry handlers for service: cloudwatch
botocore.hooks: DEBUG: Event creating-client-class.cloudwatch: calling handler <function add_generate_presigned_url at 0x0000000004045208>
botocore.hooks: DEBUG: Event creating-client-class.cloudwatch: calling handler <bound method Pill._create_client of <placebo.pill.Pill object at 0x0000000006125FD0>>
placebo.pill: DEBUG: _create_client
botocore.args: DEBUG: The s3 config key is not a dictionary type, ignoring its value of: None
botocore.endpoint: DEBUG: Setting monitoring timeout as (60, 60)
--------------------- >> end captured logging << ---------------------

Name                  Stmts   Miss  Cover
-----------------------------------------
zappa.py                  0      0   100%
zappa\cli.py            400    149    63%
zappa\ext.py              0      0   100%
zappa\handler.py        149     20    87%
zappa\middleware.py      66      0   100%
zappa\util.py           110     10    91%
zappa\wsgi.py            53      0   100%
zappa\zappa.py          436    139    68%
-----------------------------------------
TOTAL                  1214    318    74%
----------------------------------------------------------------------
Ran 43 tests in 82.027s

FAILED (errors=3)

Here is my %PATH%:

Zappa>echo %PATH%
C:\Users\Mike\zappa-dev\Scripts;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Docker\Docker\Resources\bin;C:\Program Files (x86)\Razer Chroma SDK\bin;C:\Program Files\Razer Chroma SDK\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows
\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Fi
les (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Skype\Phone\;C:\Program Files\PostgreSQL\pg95\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOW
S\System32\WindowsPowerShell\v1.0\;C:\Users\Mike\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Windows Kits\10\Windows Performance
 Toolkit\;C:\Program Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\nodejs\;C:\Program Files\Git\cmd;C:\Program Files\CMake\bin;C:\Go\bin;C:\Program Files\Mercurial\;C:\Program Files (x86)\PuTTY\;C:\Program Files (x86)\WinSCP\;C:\Users\Mike\
Anaconda2;C:\Users\Mike\Anaconda2\Scripts;C:\Users\Mike\Anaconda2\Library\bin;C:\Users\Mike\AppData\Roaming\npm;C:\Users\Mike\bin;C:\Users\Mike\AppData\Local\.meteor\;F:\Programming\Go\bin;C:\Users\Mike\AppData\Local\Microsoft\WindowsApps

Here is my env:

Zappa>SET
PROMPT=(zappa-dev) $P$G
USERDOMAIN_ROAMINGPROFILE=BOULDER
LOCALAPPDATA=C:\Users\Mike\AppData\Local
PROCESSOR_LEVEL=6
VIRTUAL_ENV=C:\Users\Mike\zappa-dev
VS140COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 14.0\Common7\Tools\
USERDOMAIN=BOULDER
FPS_BROWSER_APP_PROFILE_STRING=Internet Explorer
LOGONSERVER=\\BOULDER
SESSIONNAME=Console
ALLUSERSPROFILE=C:\ProgramData
PROCESSOR_ARCHITECTURE=AMD64
VS120COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\Tools\
PSModulePath=C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\;C:\Program Files\Intel\
SystemDrive=C:
APPDATA=C:\Users\Mike\AppData\Roaming
USERNAME=Mike
ProgramFiles(x86)=C:\Program Files (x86)
CommonProgramFiles=C:\Program Files\Common Files
Path=C:\Users\Mike\zappa-dev\Scripts;C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Docker\Docker\Resources\bin;C:\Program Files (x86)\Razer Chroma SDK\bin;C:\Program Files\Razer Chroma SDK\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Wi
ndows\System32\WindowsPowerShell\v1.0\;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Progr
am Files (x86)\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Skype\Phone\;C:\Program Files\PostgreSQL\pg95\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\W
INDOWS\System32\WindowsPowerShell\v1.0\;C:\Users\Mike\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Windows Kits\10\Windows Perfor
mance Toolkit\;C:\Program Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\nodejs\;C:\Program Files\Git\cmd;C:\Program Files\CMake\bin;C:\Go\bin;C:\Program Files\Mercurial\;C:\Program Files (x86)\PuTTY\;C:\Program Files (x86)\WinSCP\;C:\Users\
Mike\Anaconda2;C:\Users\Mike\Anaconda2\Scripts;C:\Users\Mike\Anaconda2\Library\bin;C:\Users\Mike\AppData\Roaming\npm;C:\Users\Mike\bin;C:\Users\Mike\AppData\Local\.meteor\;F:\Programming\Go\bin;C:\Users\Mike\AppData\Local\Microsoft\WindowsApps
FPS_BROWSER_USER_PROFILE_STRING=Default
PATHEXT=.COM;.EXE;.BAT;.CMD;.VBS;.VBE;.JS;.JSE;.WSF;.WSH;.MSC
OS=Windows_NT
COMPUTERNAME=BOULDER
GOPATH=F:\Programming\Go
PROCESSOR_REVISION=5e03
CommonProgramW6432=C:\Program Files\Common Files
GOROOT=C:\Go\
ComSpec=C:\WINDOWS\system32\cmd.exe
ProgramData=C:\ProgramData
ProgramW6432=C:\Program Files
GOBIN=F:\Programming\Go\bin
HOMEPATH=\Users\Mike
SystemRoot=C:\WINDOWS
TEMP=C:\Users\Mike\AppData\Local\Temp
HOMEDRIVE=C:
PROCESSOR_IDENTIFIER=Intel64 Family 6 Model 94 Stepping 3, GenuineIntel
USERPROFILE=C:\Users\Mike
TMP=C:\Users\Mike\AppData\Local\Temp
VS110COMNTOOLS=C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\Tools\
VSSDK140Install=C:\Program Files (x86)\Microsoft Visual Studio 14.0\VSSDK\
WAREHOUSE_API_KEY=d32gftp67yr957zce2w3y4kzyvk2hp5c
CommonProgramFiles(x86)=C:\Program Files (x86)\Common Files
ProgramFiles=C:\Program Files
PUBLIC=C:\Users\Public
NUMBER_OF_PROCESSORS=8
windir=C:\WINDOWS
_OLD_VIRTUAL_PATH=C:\ProgramData\Oracle\Java\javapath;C:\Program Files\Docker\Docker\Resources\bin;C:\Program Files (x86)\Razer Chroma SDK\bin;C:\Program Files\Razer Chroma SDK\bin;C:\Windows\system32;C:\Windows;C:\Windows\System32\Wbem;C:\Windows\System32\Wind
owsPowerShell\v1.0\;C:\Program Files (x86)\Intel\iCLS Client\;C:\Program Files\Intel\iCLS Client\;C:\Program Files (x86)\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program Files (x86)\Inte
l\Intel(R) Management Engine Components\IPT;C:\Program Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program Files (x86)\Skype\Phone\;C:\Program Files\PostgreSQL\pg95\bin;C:\WINDOWS\system32;C:\WINDOWS;C:\WINDOWS\System32\Wbem;C:\WINDOWS\System32\Win
dowsPowerShell\v1.0\;C:\Users\Mike\.dnx\bin;C:\Program Files\Microsoft DNX\Dnvm\;C:\Program Files\Microsoft SQL Server\120\Tools\Binn\;C:\Program Files\Microsoft SQL Server\130\Tools\Binn\;C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit\;C:\P
rogram Files (x86)\Microsoft Emulator Manager\1.0\;C:\Program Files (x86)\nodejs\;C:\Program Files\Git\cmd;C:\Program Files\CMake\bin;C:\Go\bin;C:\Program Files\Mercurial\;C:\Program Files (x86)\PuTTY\;C:\Program Files (x86)\WinSCP\;C:\Users\Mike\Anaconda2;C:\U
sers\Mike\Anaconda2\Scripts;C:\Users\Mike\Anaconda2\Library\bin;C:\Users\Mike\AppData\Roaming\npm;C:\Users\Mike\bin;C:\Users\Mike\AppData\Local\.meteor\;F:\Programming\Go\bin;C:\Users\Mike\AppData\Local\Microsoft\WindowsApps
_OLD_VIRTUAL_PROMPT=$P$G

Here is the version of zappa I'm working with:

Zappa>git log --oneline -n1
0202416 fix test

I'll work on it for a bit and see if I make some progress.

[Migrated] Schedule tasks with `cron()` format need to follow AWS rules

Originally from: Miserlou/Zappa#2209 by philsheard

Context

I was deploying an update via Zappa and got an error with the ScheduleExpression.

botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the PutRule operation: Parameter ScheduleExpression is not valid.

I hadn't made changes to the task schedule in the Zappa settings, so I guess Amazon might have changed the requirements on their backend.

Possible Fix

Some sort of check for validity in the settings would be good, but for now just a reference to these rules in the docs might help others.
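A lightweight pre-deploy check along these lines could catch most violations of the AWS rules. This is a heuristic sketch, not part of Zappa: AWS `cron()` expressions take six fields (unlike five-field Unix cron), and one of day-of-month / day-of-week must be `?`.

```python
import re

def check_schedule_expression(expr):
    """Rough sanity check for AWS EventBridge schedule expressions.

    AWS cron() takes SIX fields (minutes hours day-of-month month
    day-of-week year), and one of day-of-month / day-of-week must be
    '?'.  This is a heuristic, not a full validator.
    """
    m = re.fullmatch(r"cron\(([^)]*)\)", expr)
    if m:
        fields = m.group(1).split()
        if len(fields) != 6:
            return False, "cron() needs 6 fields (Unix cron has 5)"
        day_of_month, day_of_week = fields[2], fields[4]
        if "?" not in (day_of_month, day_of_week):
            return False, "one of day-of-month/day-of-week must be '?'"
        return True, "looks ok"
    m = re.fullmatch(r"rate\((\d+) (minute|minutes|hour|hours|day|days)\)", expr)
    if m:
        return True, "looks ok"
    return False, "not a cron() or rate() expression"
```

For example, `cron(15 * * * ? *)` passes, while the five-field Unix form `cron(15 * * * *)` is rejected.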

Steps to Reproduce

  1. Set a scheduled task using standard cron syntax:
"events": [{
  "function": "myapp.callable",
  "expression": "cron(15 * * * ? *)"
}]
  2. Deploy the branch
  3. See the error

Your Environment

  • Zappa version used: 0.52.0
  • Operating System and Python version: OSX, Python 3.8.6

[Migrated] Failed to find libmysqlclient.so.18 when using slim_handler

Originally from: Miserlou/Zappa#2208 by NomeChomsky

This issue has been posted here and here but so far no fix.

When trying to use Postgres (psycopg2 and psycopg2-binary), it seems Zappa still tries to import libmysqlclient.so.18, which it cannot find, causing the program (in this case, Flask migration scripts) to hang, then exit.

This is a breaking issue for deployment: it means that Flask projects connecting to Aurora using psycopg2 binaries cannot be deployed, since this issue stops database migrations/interactions.

The library should either be available automatically, or not included if it's not needed. The fact that the library cannot be found seems to create a breaking error.

Expected Behavior

Running a Flask-Migrate command locally (with my local dependencies) works correctly. But when I call that command on the server via Zappa, I get a breaking error which causes the program to hang. I expect to be able to run the command on the server the same way, in the same environment, as I run it locally. I'm expecting my command to work - it doesn't.

Actual Behavior

A 30 second command timeout, with this in the tail:
[1612507108211] Instancing..
[1612507108215] Failed to find library: libmysqlclient.so.18 ...right filename?

If I add "include": [] to settings.json, the problem persists. This fix was suggested in the threads mentioned above, but it has not worked.

The line in question is below - see how it does not allow an empty include list to override the default.

https://github.com/Miserlou/Zappa/blob/ba20c850eeca00edd6ea39fda1ab976cfee193ea/zappa/handler.py#L105

Possible Fix

There is a possible fix for the problem, listed here:

                include = self.stage_config.get('include', [])
                if include is None:
                    settings_s += 'INCLUDE=[]\n'
                elif len(include) >= 1:
                    settings_s += "INCLUDE=" + str(include) + '\n'
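The distinction the fix draws - between `include` being absent, explicitly `null`, and a non-empty list - can be exercised in isolation. A self-contained sketch (the function name `render_include_setting` is hypothetical, mirroring the snippet above rather than Zappa's actual code path):

```python
def render_include_setting(stage_config):
    """Mirror the proposed fix: emit INCLUDE=[] only when the user
    explicitly sets "include" to null, so the packaged default can be
    overridden; an absent or empty list leaves the default behaviour."""
    include = stage_config.get('include', [])
    if include is None:
        return 'INCLUDE=[]\n'                    # explicit null override
    if len(include) >= 1:
        return 'INCLUDE=' + str(include) + '\n'  # user-supplied libraries
    return ''                                    # absent/empty: keep default
```

Note that in JSON, `"include": []` parses to an empty list, not `None`, so under this proposal a user would need `"include": null` to trigger the override.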

Steps to Reproduce

Create a Flask app which uses a Postgres DB, connect to it with Flask-SQLAlchemy, write a script which wraps Flask-Migrate commands so they're callable from Zappa, and invoke either migrate() or update() - anything that uses the Postgres DB.

Your Environment

  • Zappa version used:
    {
        "dev": {
            "app_function": "app.app",
            "aws_region": "us-east-1",
            "profile_name": "default",
            "project_name": "zask",
            "runtime": "python3.8",
            "s3_bucket": "zappa-mlrijq5z6",
            "environment_variables": {
                "POSTGRES_USER": "
                "POSTGRES_PASSWORD":
                "POSTGRES_URL":
                "POSTGRES_DB":
                "POSTGRES_PORT":
                "ENV":
            },
            "slim_handler": "true",
            "include": [
                ""
            ]
        }
    }

[Migrated] Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code

Originally from: Miserlou/Zappa#2178 by lightmagic1

Context

I'm having issues deploying my project using Zappa.

Expected Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. Success!

Actual Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. I get the following error:

Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.

Using Zappa tail:

"Failed to find library: libmysqlclient.so.18...right filename?"

Possible Fix

Steps to Reproduce

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: Ubuntu 20.04 / Python 3.7.7
  • The output of pip freeze:
alembic==1.0.8
aniso8601==6.0.0
argcomplete==1.9.3
astroid==2.2.5
autopep8==1.4.4
Babel==2.6.0
boto3==1.9.153
botocore==1.12.153
cachetools==3.1.1
certifi==2019.9.11
cfn-flip==1.2.0
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
emoji==0.5.4
Flask==1.0.2
Flask-Cors==3.0.7
Flask-Migrate==2.4.0
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.3.2
future==0.16.0
google-api-core==1.14.3
google-auth==1.7.0
google-cloud-speech==1.2.0
google-cloud-texttospeech==0.5.0
googleapis-common-protos==1.6.0
grpcio==1.25.0
hjson==3.0.1
idna==2.8
importlib-metadata==1.7.0
isort==4.3.16
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.3.1
Mako==1.0.8
MarkupSafe==1.1.1
mccabe==0.6.1
mysql-connector-python==8.0.21
pip-tools==5.3.1
placebo==0.9.0
protobuf==3.10.0
pyasn1==0.4.7
pyasn1-modules==0.2.7
pycodestyle==2.5.0
pylint==2.3.1
python-dateutil==2.6.1
python-editor==1.0.4
python-slugify==1.2.4
pytz==2019.3
PyYAML==5.1
requests==2.22.0
rsa==4.0
s3transfer==0.2.0
six==1.13.0
SQLAlchemy==1.3.1
text-unidecode==1.3
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
typed-ast==1.3.1
Unidecode==1.0.23
urllib3==1.24
Werkzeug==0.15.1
wrapt==1.11.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
  • Link to your project (optional):
  • Your zappa_settings.json:
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "webhook",
        "runtime": "python3.7",
        "s3_bucket": "zappa-6132y6l",
        "slim_handler": true
    },

[Migrated] Zappa init error: Access is denied (windows)

Originally from: Miserlou/Zappa#2203 by merlinnaidoo

(venv) C:\Users\zappa>zappa init
Access is denied.
(cannot run this to get the settings JSON)

(venv) C:\Users\zappa>zappa
Access is denied.

python -m zappa
C:\Users\merlinn\PycharmProjects\lambdazapptest\venv\Scripts\python.exe: No module named zappa.main; 'zappa' is a package and cannot be directly executed
(the above is understood, but I thought I should include it in case it helps)

Python 3.8, Windows 10, with full rights on the project folder.

[Migrated] The "zappa update <stage>" command disconnects API Gateway and Lambda function.

Originally from: Miserlou/Zappa#2207 by hidekuma

When events for scheduling are created in zappa_settings.json, the API Gateway and the Lambda function become disconnected.
The "zappa unschedule <stage>" command also disconnects the two services - is there a solution?

Context

Python 3.7, with pyenv, virtualenv

"events": [{
    "function": "app.cron",
    "expression": "rate(1 minute)"
}],

then

zappa update <stage>

API Gateway and Lambda functions are disconnected.

Expected Behavior

The connection between API Gateway and Lambda should not be interrupted, and the registered EventBridge rules should be applied.

Actual Behavior

API Gateway and Lambda functions are disconnected.

Possible Fix

Steps to Reproduce

  1. Add events to zappa_settings.json
  2. zappa update stage
  3. API Gateway and Lambda functions are disconnected.

Your Environment

  • Zappa version used: 0.52.0
  • Operating System and Python version: python 3.7.6
  • The output of pip freeze:
Package             Version
------------------- ----------
aniso8601           8.0.0
argcomplete         1.9.3
attrs               19.3.0
aws-xray-sdk        2.4.3
awscli              1.16.309
boto3               1.10.34
botocore            1.13.45
certifi             2019.11.28
cfn-flip            1.2.2
chalice             1.12.0
chardet             3.0.4
Click               7.0
colorama            0.4.1
docutils            0.15.2
durationpy          0.5
enum-compat         0.0.3
Flask               1.1.1
flask-restplus      0.13.0
future              0.16.0
greenlet            1.0.0
hjson               3.0.1
idna                2.8
importlib-metadata  1.2.0
itsdangerous        1.1.0
jedi                0.18.0
Jinja2              2.10.3
jmespath            0.9.3
jsonpickle          1.2
jsonschema          3.2.0
kappa               0.6.0
lambda-packages     0.20.0
MarkupSafe          1.1.1
more-itertools      8.0.2
msgpack             1.0.2
neovim              0.3.1
parso               0.8.1
pip                 21.0.1
pip-tools           5.5.0
placebo             0.9.0
pyasn1              0.4.8
pynvim              0.4.2
pyrsistent          0.15.6
python-dateutil     2.6.1
python-slugify      1.2.4
pytz                2019.3
PyYAML              5.1.2
requests            2.22.0
rsa                 3.4.2
s3transfer          0.2.1
setuptools          53.0.0
six                 1.13.0
toml                0.10.0
tqdm                4.19.1
troposphere         2.5.2
Unidecode           1.1.1
urllib3             1.25.7
Werkzeug            0.16.0
wheel               0.36.2
wrapt               1.11.2
wsgi-request-logger 0.4.6
zappa               0.52.0
zipp                0.6.0
  • Your zappa_settings.json:
{
  "dev": {
    "profile_name": "xxx",
    "lambda_description": "xxx",
    "apigateway_description": "xxx",
    "tags": {
      "service": "xxx",
      "stage": "xxx"
    },
    "s3_bucket": "xxx",
    "environment_variables": {},
    "keep_warm": false,
    "debug": true,
    "log_level": "DEBUG",
    "api_key": "xxx",
    "remote_env": "s3://xxx/env.json",
    "domain": "xxx.com",
    "certificate_arn": "arn:aws:acm:us-east-1:xxx:certificate/xxx",
    "cloudwatch_log_level": "OFF",
    "vpc_config": {},
    "xray_tracing": false,
    "events": [
      {
        "function": "app.cron",
        "expression": "rate(1 minute)"
      }
    ],
    "app_function": "app.app",
    "project_name": "xxx",
    "slim_handler": false,
    "num_retained_versions": 20,
    "runtime": "python3.7",
    "memory_size": 128,
    "timeout_seconds": 60,
    "exclude": [
      ".git",
      ".gitignore",
      ".aws/*",
      ".ceph/*",
      "__pycache__",
      "CHANGELOG.md",
      "requirements.txt",
      "*.ini",
      "policy/*",
      "*.gz",
      "*.rar",
      "*.pyc",
      "*.yml",
      "dockerfiles/*",
      "Dockerfile",
      "*.Dockerfile",
      "*.swp"
    ],
    "endpoint_configuration": [
      "EDGE"
    ],
    "touch": true,
    "manage_roles": false,
    "role_name": "xxx",
    "cors": true,
    "aws_region": "xxx",
    "base_path": "v1",
    "api_key_required": true,
    "cloudwatch_data_trace": true,
    "cloudwatch_metrics_enabled": true
  }
}

[Migrated] Bump boto3/botocore versions

Originally from: Miserlou/Zappa#2193 by ian-whitestone

Description

In support of #2188, this PR bumps the versions of boto3/botocore, so that we have access to the new Docker image functionality.

GitHub Issues

Related #2188

Testing

I created a new virtual env with the new dependencies and ran several Zappa workflows: deploy, update, status, and undeploy.

Any other tests you'd recommend running?

[Migrated] How to flush logs?

Originally from: Miserlou/Zappa#2199 by pcolmer

This is part question and part bug report.

During testing of my code, I'm generating a lot of logs. This makes tailing the logs take longer and longer, so I'd like a way of flushing the logs to keep things cleaner.

Furthermore, if the logs accrue somewhere, how do they get expired?

Context

The only way I can think of to flush the logs is to undeploy and redeploy, but the logs continue to exist.

Expected Behavior

I would expect undeploying and redeploying to result in tail displaying no logs.

Actual Behavior

I continue to see all the previous logs.

Steps to Reproduce

  1. Deploy an app
  2. Tail the logs, generating some log data if necessary
  3. Undeploy the app
  4. Redeploy the app
  5. Tail the logs again - the previous log data is there
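For reference, undeploying a Lambda function does not delete its CloudWatch log group, which is why the old entries survive a redeploy. A hedged boto3 sketch for flushing or expiring them manually - the `<project_name>-<stage>` function-name pattern is an assumption about Zappa's naming, not taken from its internals:

```python
def lambda_log_group(project_name, stage):
    """Lambda writes to /aws/lambda/<function-name>; Zappa-deployed
    functions are typically named <project>-<stage> (assumed naming)."""
    return "/aws/lambda/{}-{}".format(project_name, stage)

def flush_logs(project_name, stage, retention_days=None):
    """Delete the CloudWatch log group (flushes all history), or set a
    retention policy so old events expire automatically."""
    import boto3  # imported here so the module loads without boto3 installed

    logs = boto3.client("logs")
    group = lambda_log_group(project_name, stage)
    if retention_days is None:
        # Lambda recreates the group on the next invocation.
        logs.delete_log_group(logGroupName=group)
    else:
        logs.put_retention_policy(logGroupName=group,
                                  retentionInDays=retention_days)
```

Setting a retention policy (e.g. `flush_logs("linaro-sd-webhook", "dev", retention_days=7)`) answers the expiry question: without one, CloudWatch keeps log events indefinitely.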

Your Environment

  • Zappa version used: 0.52.0
  • Operating System and Python version: Ubuntu 20.04.1 LTS, Python 3.8.5
  • The output of pip freeze:
argcomplete==1.12.2
blinker==1.4
boto3==1.16.56
botocore==1.19.56
cachetools==4.2.0
certifi==2020.12.5
cfn-flip==1.2.3
chardet==4.0.0
click==7.1.2
durationpy==0.5
Flask==1.1.2
future==0.18.2
google-api-core==1.25.0
google-api-python-client==1.12.8
google-auth==1.24.0
google-auth-httplib2==0.0.4
googleapis-common-protos==1.52.0
hjson==3.0.2
httplib2==0.18.1
idna==2.10
itsdangerous==1.1.0
Jinja2==2.11.2
jmespath==0.10.0
JSON-minify==0.3.0
kappa==0.6.0
ldap3==2.8.1
MarkupSafe==1.1.1
pip-tools==5.5.0
placebo==0.9.0
protobuf==3.14.0
pyasn1==0.4.8
pyasn1-modules==0.2.8
python-dateutil==2.8.1
python-slugify==4.0.1
pytz==2020.5
PyYAML==5.3.1
requests==2.25.1
rsa==4.7
s3transfer==0.3.4
sentry-sdk==0.19.5
six==1.15.0
text-unidecode==1.3
toml==0.10.2
tqdm==4.56.0
troposphere==2.6.3
Unidecode==1.1.2
uritemplate==3.0.1
urllib3==1.26.2
vault-auth @ git+https://github.com/linaro-its/vault_auth.git@5bb93a5a0e04d55a8e926d36c37d3b908ead2f55
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.52.0
  • Your zappa_settings.json:
{
    "dev": {
        "app_function": "app.APP",
        "aws_region": "us-east-1",
        "profile_name": "DevsAdminAccess",
        "project_name": "linaro-sd-webhook",
        "runtime": "python3.8",
        "s3_bucket": "zappa-sd-webhook",
        "keep_warm": false,
        "exclude": [
            ".gitignore",
            ".vscode",
            "boto3*",
            "botocore*",
            "build_cfs.py",
            "copy-repos.sh",
            "Pipfile*",
            "README.md"
        ],
        "extra_permissions": [
            {
                "Effect": "Allow",
                "Action": [
                    "sts:AssumeRole"
                ],
                "Resource": [
                    "arn:aws:iam::621503700583:role/vault_sd_automation"
                ]
            }
        ],
        "log_level": "INFO"
    }
}

[Migrated] Zappa is not invoking the right Virtual env correctly while deploy

Originally from: Miserlou/Zappa#2206 by veris-neerajdhiman

I am trying to deploy a Django app with Zappa. I have created the virtualenv using pyenv.

Context

The following commands confirm the correct virtualenv:

▶ pyenv which zappa
/Users/****/.pyenv/versions/zappa/bin/zappa

▶ pyenv which python
/Users/****/.pyenv/versions/zappa/bin/python

But when I try to deploy the application using zappa deploy dev, the following error is thrown:

▶ zappa deploy dev
(pip 18.1 (/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages), Requirement.parse('pip>=20.1'), {'pip-tools'})
Calling deploy for stage dev..
Oh no! An error occurred! :(

==============

Traceback (most recent call last):
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 2778, in handle
    sys.exit(cli.handle())
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 512, in handle
    self.dispatch_command(self.command, stage)
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 549, in dispatch_command
    self.deploy(self.vargs['zip'])
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 723, in deploy
    self.create_package()
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/cli.py", line 2264, in create_package
    disable_progress=self.disable_progress
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/core.py", line 627, in create_lambda_zip
    copytree(site_packages, temp_package_path, metadata=False, symlinks=False, ignore=shutil.ignore_patterns(*excludes))
  File "/Users/****/.pyenv/versions/3.6.9/envs/zappa/lib/python3.6/site-packages/zappa/utilities.py", line 54, in copytree
    lst = os.listdir(src)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/****/mydir/zappa/env/lib/python3.6/site-packages'

==============

Expected Behavior

  • Zappa should fetch packages from the pyenv virtualenv

Actual Behavior

  • You can see that the path in the error is not where the virtualenv is actually installed. I don't know why zappa deploy is looking for site-packages there.
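To diagnose mismatches like this, it can help to print the site-packages directory that would be derived from the active interpreter. A small diagnostic sketch - not Zappa's actual detection code, and assuming a POSIX venv layout:

```python
import os
import sys

def guess_site_packages(venv_root=None):
    """Return the site-packages path a packager would likely copy from.

    Prefers an explicit root, then the VIRTUAL_ENV environment variable,
    then sys.prefix.  Assumes the POSIX layout lib/pythonX.Y/site-packages
    (Windows venvs use Lib\\site-packages instead).
    """
    root = venv_root or os.environ.get("VIRTUAL_ENV") or sys.prefix
    pyver = "python{}.{}".format(sys.version_info.major, sys.version_info.minor)
    return os.path.join(root, "lib", pyver, "site-packages")
```

Comparing `guess_site_packages()` run inside the pyenv shell against the path in the traceback shows whether the wrong root (here, a stale `env/` directory in the project) is being picked up.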

Possible Fix

Steps to Reproduce

  1. Create Virtual env from pyenv 3.6.9
  2. Install zappa and Django
  3. deploy zappa application

Your Environment

  • Zappa version used: 0.52.0
  • Operating System and Python version: Mac and python 3.6.9 (installed from pyenv)
  • The output of pip freeze:
argcomplete==1.12.2
asgiref==3.3.1
boto3==1.17.0
botocore==1.20.0
certifi==2020.12.5
cfn-flip==1.2.3
chardet==4.0.0
click==7.1.2
Django==3.1.6
durationpy==0.5
future==0.18.2
hjson==3.0.2
idna==2.10
importlib-metadata==3.4.0
jmespath==0.10.0
kappa==0.6.0
pip-tools==5.5.0
placebo==0.9.0
python-dateutil==2.8.1
python-slugify==4.0.1
pytz==2021.1
PyYAML==5.4.1
requests==2.25.1
s3transfer==0.3.4
six==1.15.0
sqlparse==0.4.1
text-unidecode==1.3
toml==0.10.2
tqdm==4.56.0
troposphere==2.6.3
typing-extensions==3.7.4.3
urllib3==1.26.3
Werkzeug==0.16.1
wsgi-request-logger==0.4.6
zappa==0.52.0
zipp==3.4.0
  • Link to your project (optional):
  • Your zappa_settings.json:
{
    "dev": {
        "django_settings": "myproject.settings",
        "profile_name": "zappa",
        "project_name": "zappa",
        "runtime": "python3.6",
        "s3_bucket": "nd-zappa",
        "aws_region": "ap-south-1"
    }
}

[Migrated] A possibility to apply for OpenSSF help as Zappa is included in Top 200 python critical projects

Originally from: Miserlou/Zappa#2187 by ambientlight

@jneves:

Not sure if you are the last remaining active maintainer of Zappa. Part of the community still relies on Zappa, and it is included in the OpenSSF top 200 critical Python projects: python_top_200.csv

As mentioned in Google's finding-critical-open-source-projects, maybe you can reach out to the OpenSSF working group to see if there is any help in moving Zappa forward, as there seems to be no momentum in either #1 or Miserlou/Zappa#1610

Alternatively, community members who still rely on Zappa could coordinate in an issue like this one, decide on an authoritative fork such as https://github.com/zappa/Zappa/, sketch a new roadmap, and reach out to OpenSSF.

Any thoughts?

[Migrated] fails to pip install zappa

Originally from: Miserlou/Zappa#2191 by AndroLee

"pip install zappa" returns an error

Context

Collecting zappa
Using cached zappa-0.52.0-py3-none-any.whl (114 kB)
Requirement already satisfied: wheel in c:\users\leeandr\pycharmprojects\mychatbot2\venv\lib\site-packages (from zappa) (0.36.1)
Requirement already satisfied: pip>=9.0.1 in c:\users\leeandr\pycharmprojects\mychatbot2\venv\lib\site-packages (from zappa) (20.3.3)
Collecting kappa==0.6.0
Using cached kappa-0.6.0.tar.gz (29 kB)
ERROR: Command errored out with exit status 1:
command: 'c:\users\leeandr\pycharmprojects\mychatbot2\venv\scripts\python.exe' -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py'"'"'; file='"'"'C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base 'C:\Users\leeandr\AppData\Local\Temp\pip-pip-egg-info-s73bwltt'
cwd: C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947
Complete output (7 lines):
Traceback (most recent call last):
File "", line 1, in
File "C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py", line 54, in
run_setup()
File "C:\Users\leeandr\AppData\Local\Temp\pip-install-qbrjlaxm\kappa_53d6de6b432849b3bdabd76fb0731947\setup.py", line 22, in run_setup
long_description=open_file('README.rst').read(),
UnicodeDecodeError: 'cp950' codec can't decode byte 0xe2 in position 2339: illegal multibyte sequence
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.

Expected Behavior

pip install is successful.

Actual Behavior

It returns a failure as above.

Possible Fix

NA

Steps to Reproduce

Not sure if a Traditional Chinese version of Windows matters, but it is what I have.

Your Environment

  • Zappa version used: installing latest one
  • Operating System and Python version: 3.9
  • The output of pip freeze: none
  • Link to your project (optional):
  • Your zappa_settings.json:
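The traceback suggests a locale-dependent file read: kappa 0.6.0's setup.py opens README.rst without an explicit encoding, so Python falls back to the locale codec (cp950 on Traditional Chinese Windows), which cannot decode the file's UTF-8 punctuation. A minimal reproduction sketch of that failure mode (an inference from the traceback, not verified against kappa itself; the en dash below is an illustrative stand-in for whatever multibyte character sits at position 2339):

```python
# Reading UTF-8 bytes with the cp950 locale codec fails the same way
# as setup.py's open('README.rst').read() on a cp950-locale machine.
data = "serverless \u2013 event-driven".encode("utf-8")  # en dash = 0xe2 0x80 0x93

try:
    # What open() without encoding= effectively does under a cp950 locale.
    data.decode("cp950")
except UnicodeDecodeError as exc:
    print("cp950 decode failed at byte", exc.start)

# The usual fix in the dependency: open('README.rst', encoding='utf-8').
print(data.decode("utf-8"))
```

Until the dependency is fixed upstream, setting PYTHONUTF8=1 (Python 3.7+, PEP 540 UTF-8 mode) before running pip is a commonly used workaround, since it makes open() default to UTF-8 regardless of locale.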

[Migrated] Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code

Originally from: Miserlou/Zappa#2178 by lightmagic1

Context

I'm having issues deploying my project using Zappa

Expected Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. Success!

Actual Behavior

  1. Cloned my project
  2. python -m venv venv
  3. pip install -r requirements.txt --no-cache-dir
  4. zappa update dev
  5. I get the following error:

Error: Warning! Status check on the deployed lambda failed. A GET request to '/' yielded a 500 response code.

Using Zappa tail:

"Failed to find library: libmysqlclient.so.18...right filename?"

Possible Fix

Steps to Reproduce

Your Environment

  • Zappa version used: 0.51.0
  • Operating System and Python version: Ubuntu 20.04 / Python 3.7.7
  • The output of pip freeze:
alembic==1.0.8
aniso8601==6.0.0
argcomplete==1.9.3
astroid==2.2.5
autopep8==1.4.4
Babel==2.6.0
boto3==1.9.153
botocore==1.12.153
cachetools==3.1.1
certifi==2019.9.11
cfn-flip==1.2.0
chardet==3.0.4
Click==7.0
docutils==0.14
durationpy==0.5
emoji==0.5.4
Flask==1.0.2
Flask-Cors==3.0.7
Flask-Migrate==2.4.0
Flask-RESTful==0.3.7
Flask-SQLAlchemy==2.3.2
future==0.16.0
google-api-core==1.14.3
google-auth==1.7.0
google-cloud-speech==1.2.0
google-cloud-texttospeech==0.5.0
googleapis-common-protos==1.6.0
grpcio==1.25.0
hjson==3.0.1
idna==2.8
importlib-metadata==1.7.0
isort==4.3.16
itsdangerous==1.1.0
Jinja2==2.10
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
lazy-object-proxy==1.3.1
Mako==1.0.8
MarkupSafe==1.1.1
mccabe==0.6.1
mysql-connector-python==8.0.21
pip-tools==5.3.1
placebo==0.9.0
protobuf==3.10.0
pyasn1==0.4.7
pyasn1-modules==0.2.7
pycodestyle==2.5.0
pylint==2.3.1
python-dateutil==2.6.1
python-editor==1.0.4
python-slugify==1.2.4
pytz==2019.3
PyYAML==5.1
requests==2.22.0
rsa==4.0
s3transfer==0.2.0
six==1.13.0
SQLAlchemy==1.3.1
text-unidecode==1.3
toml==0.10.0
tqdm==4.19.1
troposphere==2.4.7
typed-ast==1.3.1
Unidecode==1.0.23
urllib3==1.24
Werkzeug==0.15.1
wrapt==1.11.1
wsgi-request-logger==0.4.6
zappa==0.51.0
zipp==3.1.0
  • Link to your project (optional):
  • Your zappa_settings.json:
    "dev": {
        "app_function": "app.app",
        "aws_region": "us-east-1",
        "profile_name": "default",
        "project_name": "webhook",
        "runtime": "python3.7",
        "s3_bucket": "zappa-6132y6l",
        "slim_handler": true
    },
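The tail output points at a missing native shared library: something in the packaged dependencies expects libmysqlclient.so.18, which is not present in the Lambda runtime. A common community workaround, sketched here under the assumption that the app can switch drivers, is the pure-Python PyMySQL package, which needs no compiled library:

```python
# Community workaround sketch, not an official Zappa fix: use the
# pure-Python PyMySQL driver so no libmysqlclient.so.* needs to be
# bundled into (or found inside) the Lambda package.
try:
    import pymysql

    # Code that does `import MySQLdb` (e.g. Django's mysql backend,
    # or SQLAlchemy's default mysql:// dialect) now resolves to PyMySQL.
    pymysql.install_as_MySQLdb()
    print("PyMySQL registered as MySQLdb")
except ImportError:
    print("pymysql is not installed; add it to requirements.txt")
```

Adding pymysql to requirements.txt and dropping the native MySQL binding avoids the shared-library lookup entirely.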

[Migrated] Update/deploy a lambda using a Docker container image

Originally from: Miserlou/Zappa#2192 by ian-whitestone

Background

As discussed in #2188, AWS recently announced container image support for AWS Lambda. This means you can now package and deploy lambda functions as container images, instead of using zip files. The container image based approach will solve a lot of headaches caused by the zip file approach, particularly with file sizes (container images can be up to 10GB) and the dependency issues we all know & love.

What does this PR do?

  1. Makes the zappa update and zappa deploy commands accept a new docker_image_uri parameter. docker_image_uri must point to an image in Elastic Container Registry (ECR) that complies with these guidelines. Instructions on how to configure this Docker image will be provided in an accompanying blog post.
  2. Adds a new save-python-settings-file CLI command, which is used to generate and save the zappa_settings.py file that must be included in your Docker image.

Reproducible example

See this repo which contains an example Dockerfile that can be used for testing.

Discussion points for reviewers

  • Test cases I added
  • Accompanying blog post we can point users to for a more in-depth explanation/overview
  • Any other key Zappa workflows that should be updated and/or tested in conjunction with this? I've tested the zappa update, zappa deploy, and zappa rollback commands for both creating from a zip file (the traditional method) and from a Docker image (the new method)
  • If you would like this PR to be a bit smaller, I can pull out some of the stuff (new save-python-settings-file command, wait until lambda is ready function, etc.) into separate PRs

Future Work & Next Steps

  • Add functionality to Zappa to create new Docker images. We've lost some of the Zappa magic, since you can no longer do a "1-line create & deploy" with zappa deploy; you first need to create your own Docker image and then deploy it.
  • Implement zappa rollback, see this comment for details
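Pieced together from the PR description, the intended workflow looks roughly like the following (the command and flag spellings are assumptions based on the text above, and <account> is a placeholder):

```
# 1. Generate the zappa_settings.py module the image must contain
zappa save-python-settings-file dev

# 2. Build and push an image that follows the Lambda container guidelines
docker build -t webhook:latest .
docker push <account>.dkr.ecr.us-east-1.amazonaws.com/webhook:latest

# 3. Deploy or update the function from the ECR image
zappa deploy dev --docker-image-uri <account>.dkr.ecr.us-east-1.amazonaws.com/webhook:latest
```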

[Migrated] Conda Virtual Environments?

Originally from: Miserlou/Zappa#167 by Erstwild

Great project, btw! I see myself making liberal use of this.

I just wanted to check and see if there are any issues with using conda virtual environments. I successfully create a conda virtual environment (just Flask and its dependencies), activate the new environment, move to my project directory, and run "zappa deploy project", where I have the zappa_settings.json configured correctly as best I can tell.

conda create -n enviro python=2.7 flask
source activate enviro
zappa deploy project

but then...

Packaging project as zip...
Zappa requires an active virtual environment.

Also, unrelated: when specifying a domain, should it be the same as the hostname in app.py, for example "www.abc.com"? I have my domain set up on Route 53.
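The "Zappa requires an active virtual environment" message is consistent with Zappa (at the time) detecting an environment via the VIRTUAL_ENV variable, which virtualenv/venv activation exports but conda's source activate does not (conda exports CONDA_DEFAULT_ENV and CONDA_PREFIX instead). This is an inference from the reported behavior rather than a reading of Zappa's source; a quick way to check what your shell actually exports:

```python
import os

# Print the activation variables that virtualenv and conda export.
# Under `source activate enviro`, expect the CONDA_* variables to be
# set while VIRTUAL_ENV stays None, matching the error in this report.
for var in ("VIRTUAL_ENV", "CONDA_DEFAULT_ENV", "CONDA_PREFIX"):
    print(f"{var}={os.environ.get(var)}")
```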

[Migrated] An error occurred (UnrecognizedClientException) when calling the PutIntegration operation: The security token included in the request is invalid

Originally from: Miserlou/Zappa#198 by avengerpenguin

I get this error randomly part-way through the "Creating API Gateway routes (this only happens once)" stage.

I've tried a few things, and it seems to fail somewhere between 8% and 11% of the way through, which suggests that it's not failing on the same operation each time. In fact, on one run it failed on "PutMethodResponse" and not "PutIntegration".

The only thing I can conclude is that I'm hitting some kind of rate limiting that Zappa reports as a credentials issue, even though the key pair worked fine for the first 8-11% of the operation.

I can't see a way to "resume" the API Gateway part as it's part of the first-time "zappa deploy" command. That would give me a workaround here at least.
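If the failures really are transient throttling, wrapping each API Gateway call in a retry with exponential backoff would make the deploy effectively resumable. A self-contained sketch (with_backoff and flaky_call are illustrative names, not Zappa code; the real calls would be botocore operations such as put_integration):

```python
import random
import time

def with_backoff(fn, attempts=5, base=0.01):
    """Retry fn with exponential backoff plus a little jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except RuntimeError:  # stand-in for the transient AWS client error
            if attempt == attempts - 1:
                raise
            time.sleep(base * (2 ** attempt) + random.uniform(0, base))

calls = {"count": 0}

def flaky_call():
    # Fails twice, then succeeds, mimicking an intermittent
    # UnrecognizedClientException part-way through route creation.
    calls["count"] += 1
    if calls["count"] < 3:
        raise RuntimeError("The security token included in the request is invalid")
    return "ok"

print(with_backoff(flaky_call))  # -> ok
```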

[Migrated] Support for deleting old functions

Originally from: Miserlou/Zappa#2184 by Alex-Mackie

Users are limited to 75 GB of Lambda code storage. With multiple daily deployments, the 75 GB limit can be reached quickly, and deleting old deployments via the web console is repetitive.

Would this cleanup belong in Lambda? It could be exposed to users via a setting such as the maximum number of deployed functions to keep.
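The proposed setting reduces to a small retention rule over deployed version numbers. A hypothetical sketch (versions_to_delete and keep are illustrative names, not a Zappa API); the actual deletion would go through the Lambda ListVersionsByFunction and DeleteFunction API operations:

```python
def versions_to_delete(version_numbers, keep=3):
    """Return the Lambda version numbers to prune, keeping only the
    `keep` most recent deployments."""
    newest_first = sorted(version_numbers, reverse=True)
    return sorted(newest_first[keep:])

# Six deployments under a keep-3 policy: the three oldest get pruned.
print(versions_to_delete([1, 2, 3, 4, 5, 6], keep=3))  # -> [1, 2, 3]
```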
