serverless / components

The Serverless Framework's new infrastructure provisioning technology — Build, compose, & deploy serverless apps in seconds...

Home Page: https://www.serverless.com

License: Apache License 2.0


components's Introduction



Website  •  Documentation  •  X / Twitter  •  Community Slack  •  Forum


The Serverless Framework – Makes it easy to use AWS Lambda and other managed cloud services to build applications that auto-scale, cost nothing when idle, and overall result in radically low maintenance.

The Serverless Framework is a command-line tool that uses approachable YAML syntax to deploy both your code and the cloud infrastructure needed to support a wide range of serverless application use-cases, such as APIs, front-ends, data pipelines, and scheduled tasks. It's a multi-language framework that supports Node.js, TypeScript, Python, Go, Java, and more. It's also completely extensible via over 1,000 plugins that add further serverless use-cases and workflows to the Framework.

Actively maintained by Serverless Inc.


Serverless Framework - V.4


June 12th, 2024 – We've released Serverless Framework V.4 GA after testing the V.4 Alpha and Beta since early 2024. If you are upgrading to V.4, see our Upgrading to Serverless Framework V4 Documentation. If you need to access documentation for Serverless Framework V.3, you can find it here.

New Features In V.4

Here's a list of everything that's new in V.4, so far:

  • Native Typescript Support: You can now use .ts handlers in your AWS Lambda functions in serverless.yml and have them build automatically upon deploy. ESBuild is now included in the Framework which makes this possible. More info here.
  • New Dev Mode: Run serverless dev to have events from your live architecture routed to your local code, enabling you to make fast changes without deployment. More info here.
  • New Stages Property: Easily organize stage-specific config via stages and set default config to fall back to.
  • New Terraform & Vault Integrations: Pull state outputs from several Terraform state storage solutions, and secrets from Vault. Terraform Docs Vault Docs
  • Support Command: Send support requests to our team directly from the CLI, which auto-include contextual info which you can review before sending.
  • Debug Summary for AI: When you run into a bug, you can run "serverless support --ai" to generate a concise report detailing your last bug with all necessary context, optimized for pasting into AI tools such as ChatGPT.
  • New AWS Lambda Runtimes: "python3.12", "dotnet8", and "java21".
  • Advanced Logging Controls for AWS Lambda: Capture logs in JSON, increase log granularity, and set a custom Log Group. Here is the AWS article, and here is the YAML implementation.
  • AWS SSO: Environment variables, especially ones set by AWS SSO, are prioritized. The Framework and Dashboard no longer interfere with these.
  • Automatic Updates: These happen by default now. Though, you will be able to control the level of updates you're open to.
  • Improved Onboarding & Set-Up: The serverless command has been re-written to be more helpful when setting up a new or existing project.
  • Updated Custom Resource Handlers: All custom resource handlers now use nodejs20.x.
  • Deprecation Of Non-AWS Providers: Deprecation of other cloud providers, in favor of handling this better in our upcoming Serverless Framework "Extensions".
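
For example, stage-specific parameters can be grouped under a stages block in serverless.yml. This is a sketch based on the V.4 stages feature; verify the exact property names against the Stages documentation:

```yaml
# serverless.yml (sketch; property names per the V.4 Stages docs)
stages:
  default:
    params:
      tableName: users-${sls:stage}
  prod:
    params:
      tableName: users-prod
```

Values defined under params can then be referenced elsewhere in the file via ${param:tableName}, with prod values taking effect only in the prod stage and default acting as the fallback.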

Breaking Changes

We're seeking to avoid breaking changes for the "aws" Provider. However, there are a few significant changes to be aware of:

  • The V.4 License is changing. See the section below for more information on this.
  • Authentication is required within the CLI.
  • Non-AWS Providers have been deprecated. We will be introducing new ways in V.4 to use other cloud infrastructure vendors.

If you stumble upon additional breaking changes, please create an issue. To learn more about what's different and potential breaking changes, please see our Upgrading to Serverless Framework V4 Documentation.

License Changes in V.4

Please note, the structure and licensing of the V.4 repository differ from the V.4 npm module. The npm module contains some proprietary licensed software, as V.4 transitions to a common SaaS product, as previously announced. The original Serverless Framework source code and more will continue to remain MIT-licensed software, and the repository will soon be restructured to clearly distinguish between proprietary and open-source components.




Features

  • Build More, Manage Less: Innovate faster by spending less time on infrastructure management.
  • Maximum Versatility: Tackle diverse serverless use cases, from APIs and scheduled tasks to web sockets and data pipelines.
  • Automated Deployment: Streamline development with code and infrastructure deployment handled together.
  • Local Development: Route events from AWS to your local AWS Lambda code to develop faster without having to deploy every change.
  • Ease of Use: Deploy complex applications without deep cloud infrastructure expertise, thanks to simple YAML configuration.
  • Language Agnostic: Build in your preferred language – Node.js, Python, Java, Go, C#, Ruby, Swift, Kotlin, PHP, Scala, or F#.
  • Complete Lifecycle Management: Develop, deploy, monitor, update, and troubleshoot serverless applications with ease.
  • Scalable Organization: Structure large projects and teams efficiently by breaking down large apps into Services to work on individually or together via Serverless Compose.
  • Effortless Environments: Seamlessly manage development, staging, and production environments.
  • Customization Ready: Extend and modify the Framework's functionality with a rich plugin ecosystem.
  • Vibrant Community: Get support and connect with a passionate community of Serverless developers.

Quick Start

Here's how to install the Serverless Framework, set up a project and deploy it to Amazon Web Services on serverless infrastructure like AWS Lambda, AWS DynamoDB, AWS S3 and more.


Install the Serverless Framework via NPM

First, you must have the Node.js runtime installed, version 18.20.3 or greater, then you can install the Serverless Framework via NPM.

Open your CLI and run the command below to install the Serverless Framework globally.

npm i serverless -g

Run serverless to verify your installation is working, and show the current version.


Update Serverless Framework

As of version 4, the Serverless Framework automatically updates itself and performs a check to do so every 24 hours.

You can force an update by running this command:

serverless update

Or, you can set this environment variable:

SERVERLESS_FRAMEWORK_FORCE_UPDATE=true

The serverless Command

The Serverless Framework ships with a serverless command that walks you through getting a project created and deployed onto AWS. It helps with downloading a Template, setting up AWS Credentials, setting up the Serverless Framework Dashboard, and more, while explaining each concept along the way.

This guide will also walk you through getting started with the Serverless Framework, but please note, simply typing the serverless command may be the superior experience.

serverless

Create A Service

The primary concept for a project in the Serverless Framework is known as a "Service", and it's declared by a serverless.yml file, which contains simplified syntax for deploying cloud infrastructure, such as AWS Lambda functions, infrastructure that triggers those functions with events, and additional infrastructure your AWS Lambda functions may need for various use-cases (e.g. AWS DynamoDB database tables, AWS S3 storage buckets, AWS API Gateways for receiving HTTP requests and forwarding them to AWS Lambda).

A Service can either be an entire application, logic for a specific domain (e.g. "blog", "users", "products"), or a microservice handling one task. You decide how to organize your project. Generally, we recommend starting with a monolithic approach to everything to reduce complexity, until breaking up logic is absolutely necessary.
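
As a rough illustration, a minimal serverless.yml for a single-function Service might look like this (the function name, runtime, and route are illustrative; the Templates below provide complete examples):

```yaml
# serverless.yml (minimal sketch)
service: my-service

provider:
  name: aws
  runtime: nodejs20.x

functions:
  hello:
    handler: handler.hello
    events:
      - httpApi: GET /hello
```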

To create and fully set up a Serverless Framework Service, use the serverless command, which offers an interactive set-up workflow.

serverless

This will show you several Templates. Choose one that fits the language and use-case you want.

Serverless ϟ Framework
Welcome to Serverless Framework V.4

Create a new project by selecting a Template to generate scaffolding for a specific use-case.

? Select A Template: …
❯ AWS / Node.js / Starter
  AWS / Node.js / HTTP API
  AWS / Node.js / Scheduled Task
  AWS / Node.js / SQS Worker
  AWS / Node.js / Express API
  AWS / Node.js / Express API with DynamoDB
  AWS / Python / Starter
  AWS / Python / HTTP API
  AWS / Python / Scheduled Task
  AWS / Python / SQS Worker
  AWS / Python / Flask API
  AWS / Python / Flask API with DynamoDB
  (Scroll for more)

After selecting a Service Template, its files will be downloaded and you will have the opportunity to give your Service a name.

? Name Your Service: ›

Please use only lowercase letters, numbers and hyphens. Also, keep Service names short, since they are added into the name of each cloud resource the Serverless Framework creates, and some cloud resources have character length restrictions in their names.

Learn more about Services and more in the Core Concepts documentation.


Signing In

As of Serverless Framework V.4, if you are using the serverless command to set up a Service, it will eventually ask you to log in.

If you need to log in outside of that, run serverless login.

Logging in will redirect you to the Serverless Framework Dashboard within your browser. After registering or logging in, go back to your CLI and you will be signed in.

Please note, you can get up and running with the Serverless Framework CLI and Dashboard for free, and the CLI will always be free for small orgs and indiehackers. For more information on pricing, check out our pricing page.


Creating An App

The "App" concept is a parent container for one or many "Services", which you can optionally set via the app property in your serverless.yml. Setting an app also enables Serverless Framework Dashboard features for that Service, like tracking Services and their deployments, sharing outputs and secrets between them, and enabling metrics, traces, and logs.

If you are using the serverless onboarding command, it will help you set up an app and add it to your Service. You can use the serverless command to create an App on an existing Service as well, or create an App in the Dashboard.

❯ Create A New App
  ecommerce
  blog
  acmeinc
  Skip Adding An App

The app can also be set manually in serverless.yml via the app property:

service: my-service
app: my-app

If you don't want to use the Serverless Framework Dashboard's features, simply don't add an app property. Apps are not required.


Setting Up AWS Credentials

To deploy cloud infrastructure to AWS, you must give the Serverless Framework access to your AWS credentials.

Running the Serverless Framework's serverless command in a new or existing Service will help identify if AWS credentials have been set correctly or if they are expired, or help you set them up from scratch.

No valid AWS Credentials were found in your environment variables or on your machine. Serverless Framework needs these to access your AWS account and deploy resources to it. Choose an option below to set up AWS Credentials.

❯ Create AWS IAM Role (Easy & Recommended)
  Save AWS Credentials in a Local Profile
  Skip & Set Later (AWS SSO, ENV Vars)

We recommend creating an AWS IAM Role that's stored in the Serverless Framework Dashboard. We'll be supporting a lot of Provider Credentials in the near future, and the Dashboard is a great place to keep these centralized across your team, helping you stay organized, and securely eliminating the need to keep credentials on the machines of your teammates.

If you are using AWS SSO, we recommend simply pasting your temporary SSO credentials within the terminal as environment variables.

To learn more about setting up your AWS Credentials, read this guide.


Deploy A Service

After you've used the serverless command to set up everything, it's time to deploy your Service to AWS.

Make sure your terminal session is within the directory that contains your serverless.yml file. If you just created a Service, don't forget to cd into it.

cd [your-new-service-name]

Deploying will create/update cloud infrastructure and code on AWS, all at the same time.

Run the deploy command:

serverless deploy

More details on deploying can be found here.


Developing

Many Serverless Framework developers choose to develop on the cloud, since it matches reality (i.e. your production environment), and emulating Lambda and other infrastructure dependencies locally can be complex.

In Serverless Framework V.4, we've created a hybrid approach to development, to help developers develop rapidly with the accuracy of the real cloud environment. This is the new dev command:

serverless dev

When you run this command, the following happens...

An AWS CloudFormation deployment will happen to slightly modify all of the AWS Lambda functions within your Service so that they include a lightweight wrapper.

Once this AWS CloudFormation deployment has completed, the live AWS Lambda functions within your Service will still be able to receive events and be invoked within AWS.

However, the events will be securely and instantly proxied down to your machine, and the code on your machine will be run instead of the code within your live AWS Lambda functions.

This allows you to change your code without having to deploy after every change or recreate every aspect of your architecture locally, so you can develop rapidly.

Logs from your local code will also be shown within your terminal dev session.

Once your code has finished, the response from your local code will be forwarded back up to your live AWS Lambda functions, and they will return the response—just like a normal AWS Lambda function in the cloud would.

Please note, dev is only designed for development or personal stages/environments and should not be run in production or any stage where a high volume of events are being processed.

Once you are finished with your dev session, you MUST re-deploy, using serverless deploy to push your recent local changes back to your live AWS Lambda functions—or your AWS Lambda functions will fail(!)

More details on dev mode can be found here.


Invoking

To invoke your AWS Lambda function on the cloud, you can find URLs for your functions with API endpoints in the serverless deploy output, or retrieve them via serverless info. If your functions do not have API endpoints, you can use the invoke command, like this:

sls invoke -f hello

# Invoke and display logs:
serverless invoke -f hello --log

More details on the invoke command can be found here.


Deploy Functions

To deploy code changes quickly, you can skip the serverless deploy command, which is much slower since it triggers a full AWS CloudFormation update, and instead deploy only code and configuration changes to a specific AWS Lambda function.

To deploy code and configuration changes to individual AWS Lambda functions in seconds, use the deploy function command, with -f [function name in serverless.yml] set to the function you want to deploy.

serverless deploy function -f my-api

More details on the deploy function command can be found here.


Streaming Logs

You can use Serverless Framework to stream logs from AWS Cloudwatch directly to your terminal. Use the sls logs command in a separate terminal window:

sls logs -f [Function name in serverless.yml] -t

Target a specific function via the -f option and enable tailing (i.e. streaming) via the -t option.


Full Local Development

Many Serverless Framework users choose to emulate their entire serverless architecture locally. Please note, emulating AWS Lambda and other cloud services is never accurate and the process can be complex, especially as your project and teammates grow. As of V.4, we highly recommend using the new dev mode with personal stages.

If you do choose to develop locally, we recommend the following workflow...

Use the invoke local command to invoke your function locally:

sls invoke local -f my-api

You can also pass data to this local invocation in a variety of ways. Here's one of them:

sls invoke local --function functionName --data '{"a":"bar"}'

More details on the invoke local command can be found here

Serverless Framework also has a great plugin that allows you to run a server locally and emulate AWS API Gateway: the serverless-offline plugin.

More details on the serverless-offline plugin can be found here.


Use Plugins

A big benefit of the Serverless Framework is its Plugin ecosystem.

Plugins extend or overwrite the Serverless Framework, giving it new use-cases or capabilities, and there are hundreds of them.

Some of the most common Plugins are:


Remove Your Service

If you want to delete your service, run remove. This will delete all the AWS resources created by your project and ensure that you don't incur any unexpected charges. It will also remove the service from Serverless Dashboard.

serverless remove

More details on the remove command can be found here.


Composing Services

Serverless Framework Compose allows you to work with multiple Serverless Framework Services at once, and do the following...

  • Deploy multiple services in parallel
  • Deploy services in a specific order
  • Share outputs from one service to another
  • Run commands across multiple services

Here is what a project structure might look like:

my-app/
  service-a/
    src/
      ...
    serverless.yml
  service-b/
    src/
      ...
    serverless.yml

Using Serverless Framework Compose requires a serverless-compose.yml file. In it, you specify which Services you wish to deploy. You can also share data from one Service to another, which also creates a deployment order.

# serverless-compose.yml

services:
  service-a:
    path: service-a

  service-b:
    path: service-b
    params:
      queueUrl: ${service-a.queueUrl}

Currently, outputs to be inherited by another Service must be AWS CloudFormation Outputs.

# service-a/serverless.yml

# ...

resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      # ...
  Outputs:
    queueUrl:
      Value: !Ref MyQueue

The value will be passed to service-b as a parameter named queueUrl. Parameters can be referenced in Serverless Framework configuration via the ${param:xxx} syntax:

# service-b/serverless.yml

provider:
  ...
  environment:
    # Here we inject the queue URL as a Lambda environment variable
    SERVICE_A_QUEUE_URL: ${param:queueUrl}

More details on Serverless Framework Compose can be found here.


Support Command

In Serverless Framework V.4, we've introduced the serverless support command, a standout feature that lets you generate issue reports, or directly connect with our support team. It automatically includes relevant context and omits sensitive details like secrets and account information, which you can check before submission. This streamlined process ensures your issues are quickly and securely addressed.

To use this feature, after an error or any command, run:

sls support

After each command, whether it succeeded or not, the context is saved within your current working directory in the .serverless folder.

To open a new support ticket, run the sls support command and select Get priority support.... Optionally you'll be able to review and edit the generated report. Opening support tickets is only available to users who sign up for a Subscription.

You can also generate reports without submitting a new support ticket. This is useful for sharing context with others, opening GitHub issues, or using it with an AI tool like ChatGPT. To do this, run the sls support command and select Create a summary report..., or Create a comprehensive report... You can skip the prompt by running sls support --summary or sls support --all. This is especially useful for capturing the report into the clipboard (e.g. sls support --summary | pbcopy).




What's Next

Here are some helpful resources for continuing with the Serverless Framework:


Community


components's Issues

aws-lambda runtime

Runtime is always node6

flickrPhotosFunction:
    type: aws-lambda
    inputs:
      name: enricbgFlickrPhotos
      runtime: nodejs8.10
      memory: 512
      timeout: 10
      handler: code/photos.handler

Despite running components deploy, the runtime of my Lambdas is always the older version of Node (v6). I have to set it manually in my AWS console. After each deploy it resets to v6.

Independently settable outputs

Description

Having outputs returned by every function in a component hasn't turned out to be the best implementation.

Outputs need to be persisted across calls to different functions, settable by any function, but not required to return outputs.

Instead, we should introduce a function called setOutputs for setting the outputs. This method should accept an object similar to saveState

Requirements

  • Persist outputs in state across calls
  • set outputs at the start when the state is loaded and the components are instantiated
  • update the outputs when setOutputs is called
  • setOutputs should merge the given object with the existing outputs
  • setOutputs should return the context for chaining

Example

type: my-component
outputTypes:
  foo:
    type: string
    description: foo output
  bar:
    type: string
    description: bar output

async function deploy(inputs, context) {
  doSomething()
  context = context.setOutputs({
    foo: 'abc'
  })

  doSomethingElse()
  context = context.setOutputs({
    bar: 'def'
  })

  // no longer support returning outputs.
}

Add aws-sqs-queue component

Description

Add an aws-sqs-queue component for setting up an AWS SQS queue.

Inputs

  • name: string (max: 80 chars, [a-zA-Z0-9-_])
  • policy: AWSPolicyDocument (optional)
  • delaySeconds: int (0-900, optional)
  • maximumMessageSize: int (1024-262144, optional, default: 262144)
  • receiveMessageWaitTimeSeconds: int (0-20, optional, default: 0)
  • redrivePolicy: RedrivePolicy (optional)
  • visibilityTimeout: int (0-43000, optional, default: 30)
  • kmsMasterKey: MasterKeyId | MasterKey
  • kmsDataKeyReusePeriodSeconds: int (60-86400, optional, default: 300)

Outputs

  • arn: AwsARN
  • url: URL

Requirements

  • tests
  • documentation
  • examples

Return state, not outputs

This is a minor, minor suggestion...

One aspect of this implementation that makes it simple is that components only return state. To ensure everyone is aware of this and how simple it is, the initial components we make should consider not introducing an outputs concept/variable and instead should just return state or updatedState. The initial components will serve as examples for others looking to author their own components.
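
A minimal sketch of what this convention could look like, assuming a component's deploy function simply receives and returns state (all names here are hypothetical):

```javascript
// Hypothetical sketch: a component's deploy function that returns only
// updated state, with no separate "outputs" concept.
async function deploy(inputs, state) {
  // ...provision cloud resources here...
  const updatedState = {
    ...state,
    name: inputs.name,
    deployed: true
  };
  // Callers read everything they need from the returned state.
  return updatedState;
}
```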

Google Cloud Bigtable component

Description

Implement a google-cloud-bigtable component (Product link).

API

The component inputTypes should match the parameters which can be defined in the APIs request body.

  • Where is the API endpoint for this?

serverless.yml

# TBD

AWS CloudFormation Stack component

Description

To help users integrate existing cloudformation stacks into a components deployment, we should implement an aws-cloudformation-stack component.

Here is the AWS JS SDK documentation for creating a CloudFormation Stack: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/CloudFormation.html#createStack-property

inputTypes

stackName: string /* required */
capabilities: Array<CAPABILITY_IAM | CAPABILITY_NAMED_IAM>
clientRequestToken: string
disableRollback: boolean
enableTerminationProtection: boolean
notificationARNs: Array<string>
onFailure: enum(DO_NOTHING | ROLLBACK | DELETE)
parameters: Array<{ ParameterKey: string, ParameterValue: string, ResolvedValue: string, UsePreviousValue: boolean }>
resourceTypes: Array<string>
roleArn: string
rollbackConfiguration: { MonitoringTimeInMinutes: integer, RollbackTriggers: Array<{ Arn: string /* required */, Type: string /* required */ }> }
stackPolicyBody: string
stackPolicyUrl: string
tags: Array<{ Key: string /* required */, Value: string /* required */ }>
templateBody: string
templateUrl: string
timeoutInMinutes: integer

outputTypes

stackId: string
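
If the component were implemented as proposed, usage in a serverless.yml might look roughly like this (the component name and inputs follow the proposal above; the stack name and template URL are placeholders):

```yaml
type: my-app

components:
  legacyStack:
    type: aws-cloudformation-stack
    inputs:
      stackName: my-existing-stack
      templateUrl: https://s3.amazonaws.com/my-bucket/template.yml
      capabilities:
        - CAPABILITY_IAM
```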

Problem with installation (Node v8.1.2)

Installation command:
npm install --global serverless-components

Didn't work with:
(1) Node v8.1.2
(2) NPM 5.6.0
(3) On MacOS Sierra

Solved after upgrading to Node v9.11.1, NPM 5.8.0.

Stack Trace:

components $ npm install --global serverless-components
/usr/local/bin/components -> /usr/local/lib/node_modules/serverless-components/bin/components

> [email protected] postinstall /usr/local/lib/node_modules/serverless-components
> node ./scripts/postinstall.js

/usr/local/lib/node_modules/serverless-components/src/utils/index.js:16
  ...components,
  ^^^

SyntaxError: Unexpected token ...
    at createScript (vm.js:74:10)
    at Object.runInThisContext (vm.js:116:10)
    at Module._compile (module.js:533:28)
    at Object.Module._extensions..js (module.js:580:10)
    at Module.load (module.js:503:32)
    at tryModuleLoad (module.js:466:12)
    at Function.Module._load (module.js:458:3)
    at Module.require (module.js:513:17)
    at require (internal/module.js:11:18)
    at Object.<anonymous> (/usr/local/lib/node_modules/serverless-components/scripts/postinstall.js:7:19)
npm ERR! code ELIFECYCLE
npm ERR! errno 1
npm ERR! [email protected] postinstall: `node ./scripts/postinstall.js`
npm ERR! Exit status 1
npm ERR!
npm ERR! Failed at the [email protected] postinstall script.
npm ERR! This is probably not a problem with npm. There is likely additional logging output above.

npm ERR! A complete log of this run can be found in:

Add support for an authorizer property to the rest-api component

Description

It would be great to have support for authorizers in the rest-api component. Our rest-api component dynamically creates either an aws-apigateway under the hood or an event-gateway depending upon the gateway input.

https://github.com/serverless/components/blob/master/registry/rest-api/index.js#L162-L166

For now, we can only add support for this authorizer property to the api-gateway portion until we add support for authorizers to the event-gateway.

The aws-apigateway component is built using the importRestApi method in the sdk which uses swagger to define the api.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/APIGateway.html#importRestApi-property

The component takes in the inputs and converts them into a swagger definition.

https://github.com/serverless/components/blob/master/registry/aws-apigateway/index.js#L24-L27

The implementation would use the api gateway swagger extensions for adding authorizers to an api.

https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-swagger-extensions-authorizer.html

An example of what the final implementation would look like to use.

type: my-api
version: 0.0.1

components:
  createData:
    type: aws-lambda
    inputs:
      handler: index.create
      root: ${self.path}/code
  myAuthorizer:
    type: aws-lambda
    inputs:
      handler: index.authorizer
      root: ${self.path}/code

  myApi:
    type: rest-api
    inputs:
      gateway: aws-apigateway
      routes:
        /create:
          post:
            function: ${createData}
            authorizer: ${myAuthorizer}  # this can also be an arn or another type of authorizer
            cors: true

rest-api: Deploying multiple http verbs for the same route deploys only the last verb

Deploying multiple HTTP verbs for the same route deploys only the last verb. In the example below, only GET /products will be available.
Taken from the example https://serverless.com/blog/how-create-rest-api-serverless-components/

# ... snip

components:
  # ...snip
  productsApi:
    type: rest-api
    inputs:
      gateway: aws-apigateway
      routes:
        /products:
          post:
            function: ${createProduct}
            cors: true
          get:
            function: ${listProducts}
            cors: true         

How to reuse component in another projects?

Hi.

Thanks for working on such a promising project. I've taken a look and have some questions regarding the example project retail-app:

  1. Can I use e.g. the productsDb component in other projects (whether made with the Serverless Framework or Serverless Components)? If so, how should I reference productsDb? I mean, if I just use its ARN, can I use it inside my other projects?

  2. Can I modify the IAM permissions of components? If so, how?

Syntax Conventions Should Be Stated Clearly

Making sure everyone understands component syntax conventions will help collaboration.

Suggestion: List these in the README, like this:

Configuration

Use camel-case, even for acronyms.

Example:

inputs:
  name:
  endpointUrl: 

Component Types

Use lowercase with hyphens

Example:

type: aws-api-gateway

Component Aliases

Use camel-case, even for acronyms.

Example:

components:
  myApi:
    type: aws-api-gateway

Add aws-sqs-fifo-queue component

Description

Add an aws-sqs-fifo-queue component for setting up an AWS SQS Fifo queue.

Inputs

  • name: string (max: 75 chars, [a-zA-Z0-9-_]) (automatically adds the .fifo suffix)
  • policy: AWSPolicyDocument (optional)
  • delaySeconds: int (0-900, optional)
  • maximumMessageSize: int (1024-262144, optional, default: 262144)
  • receiveMessageWaitTimeSeconds: int (0-20, optiona, default 0)
  • redrivePolicy: RedrivePolicy (optional)
  • visibilityTimeout: int (0-43000, optional, default: 30)
  • kmsMasterKey: MasterKeyId | MasterKey
  • kmsDataKeyReusePeriodSeconds: int (60-86400, optional, default: 300)
  • contentBasedDeduplication: boolean

Outputs

  • arn: AwsARN
  • url: URL

Requirements

  • tests
  • documentation
  • examples
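Under the spec above, name validation and the Attributes map handed to SQS.createQueue could be sketched as follows. The input names and defaults come from the issue text; normalizeFifoName and buildFifoAttributes are hypothetical helpers, and only a few attributes are shown:

```javascript
// Validate a FIFO queue name per the proposed spec and append the
// mandatory `.fifo` suffix.
const normalizeFifoName = (name) => {
  if (!/^[a-zA-Z0-9_-]{1,75}$/.test(name)) {
    throw new Error(`Invalid FIFO queue name: ${name}`)
  }
  return `${name}.fifo`
}

// Build the Attributes map that SQS.createQueue expects. SQS attribute
// values are strings, so everything is stringified.
const buildFifoAttributes = (inputs = {}) => {
  const attrs = { FifoQueue: 'true' }
  if (inputs.delaySeconds != null) attrs.DelaySeconds = String(inputs.delaySeconds)
  if (inputs.visibilityTimeout != null) attrs.VisibilityTimeout = String(inputs.visibilityTimeout)
  if (inputs.contentBasedDeduplication != null) {
    attrs.ContentBasedDeduplication = String(inputs.contentBasedDeduplication)
  }
  return attrs
}
```

The deploy method would then call `SQS.createQueue({ QueueName: normalizeFifoName(inputs.name), Attributes: buildFifoAttributes(inputs) })`.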

Component Diff'ing

Component authors need a way to understand what inputs have changed by users, so they can write better deployment logic.

Creating this thread to start that conversation.
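As a starting point, a simple structural diff over the inputs object would already tell authors what was added, removed, or changed. A sketch (diffInputs is hypothetical, not an existing API):

```javascript
// Compare previous and next inputs so a component can branch its
// deployment logic on what actually changed. Values are compared by
// JSON serialization, which is enough for plain config objects.
const diffInputs = (prev = {}, next = {}) => {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)])
  const diff = { added: [], removed: [], changed: [] }
  for (const key of keys) {
    if (!(key in prev)) diff.added.push(key)
    else if (!(key in next)) diff.removed.push(key)
    else if (JSON.stringify(prev[key]) !== JSON.stringify(next[key])) diff.changed.push(key)
  }
  return diff
}
```

The framework could compute this between the inputs stored in state and the freshly resolved inputs, and hand the result to the component's deploy method.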

Can upload zipped lambda package, but deploying as component throws RequestEntityTooLargeException on the CreateFunction operation.

Description

Strangely, I can add a Lambda function to AWS manually, but using that function as a component in an app raises the following when running components deploy:

RequestEntityTooLargeException: Request must be smaller than 69905067 bytes for the CreateFunction operation

Here is a link to the component's yml

The zipped serverless package is well below the CreateFunction limit of 69905067 bytes, so I was wondering if this is a components issue.

Additional Data

node: v10.5.0

Custom typing system

Description

There are cases where we need to support custom types and custom validation rules for types.

raml-validate supports specifying custom types as well as specifying custom rules https://github.com/mulesoft-labs/node-raml-validate

However, it might make sense to swap this library out for a more robust one, like https://www.npmjs.com/package/raml-typesystem

We should add support for setting up custom types in yaml. This work is based off the RAML types spec https://github.com/raml-org/raml-spec/blob/master/versions/raml-10/raml-10.md#raml-data-types

Example

type: my-component

inputTypes:
  foo:
    type: Foo
    default:
      bar: 123

types:
  Foo:
    type: object
    properties:
      bar: string 

Update the aws-s3-bucket component to mirror the AWS sdk

Description

The current implementation of aws-s3-bucket expects a name input. This does not match the Bucket property that AWS expects in its API. We are also missing support for many of the additional parameters that the SDK supports.

https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#createBucket-property

params (Object)
  ACL — (String) The canned ACL to apply to the bucket. Possible values include:
    "private"
    "public-read"
    "public-read-write"
    "authenticated-read"
  Bucket — (String)
  CreateBucketConfiguration — (map)
  LocationConstraint — (String) Specifies the region where the bucket will be created. If you don't specify a region, the bucket will be created in US Standard. Possible values include:
    "EU"
    "eu-west-1"
    "us-west-1"
    "us-west-2"
    "ap-south-1"
    "ap-southeast-1"
    "ap-southeast-2"
    "ap-northeast-1"
    "sa-east-1"
    "cn-north-1"
    "eu-central-1"
  GrantFullControl — (String) Allows grantee the read, write, read ACP, and write ACP permissions on the bucket.
  GrantRead — (String) Allows grantee to list the objects in the bucket.
  GrantReadACP — (String) Allows grantee to read the bucket ACL.
  GrantWrite — (String) Allows grantee to create, overwrite, and delete any object in the bucket.
  GrantWriteACP — (String) Allows grantee to write the ACL for the applicable bucket.

In addition to supporting the above properties, we should also support blind pass-through of the "rest" of the inputs so that we have a fallback when our implementation drifts from the SDK.

const createBucket = async ({ name, ...rest }) => S3.createBucket({ Bucket: name, ...rest }).promise()

Something we will need to consider here is that all of our inputs use lower camel case, whereas most of AWS uses upper camel case. Perhaps we should also auto-convert lower to upper.

https://github.com/SamVerschueren/uppercamelcase

const upperCamelCase = require('uppercamelcase')

const createBucket = async ({ name, ...rest }) => {
  // Convert each lowerCamelCase input key to the UpperCamelCase name
  // the SDK expects before passing it through.
  const params = Object.entries(rest).reduce(
    (accum, [key, value]) => ({ ...accum, [upperCamelCase(key)]: value }),
    {}
  )
  return S3.createBucket({ Bucket: name, ...params }).promise()
}

Serverless Variables - Improve Referencing Current Component Attributes

Problem

I tried using the AWS Lambda Component; however, it currently hardcodes the current working directory as the folder to be packaged for the function, which is bad practice. This forces the Lambda Component to package up EVERYTHING in my top-level Component, whether it's related to my function or not, resulting in a bloated, slow, and potentially non-working Lambda function. If users can't point to the specific directory where their Lambda code lives, the Lambda Component isn't very usable.

Problem Example

  • Component A – Top-level component
    • Component B – Lambda code located here
      • Component C – Lambda component

I need to pass the Lambda Code in Component B via an input to Component C, the Lambda Component. Currently, the current working directory is hardcoded as the path of the Lambda code in the Lambda Component, meaning everything in Component A will be packaged into the function. This misses the Lambda code completely and instead fills it with unrelated files from Component A.

Potential Solution

There needs to be a way to pass in relative paths. If Serverless Variables had some built-in helpers, this would solve the problem well. Users can use them to address the above problem, like this:

Potential Solution Example

Component B

components:
  function:
    type: aws-lambda
    inputs:
      handler: index.code
      handlerRoot: ${self.path}/code

${self} could be expanded upon to offer info about the current component, in a consistent way:

  • ${self.path} - Points to the path of the current component
  • ${self.inputs.foo} – Reference an input of the current component
  • ${self.name} – Reference the name of the current component
  • ${self.version} – Reference the version of the current component
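A minimal resolver for such ${self.*} variables could be a single regex replace over the component's metadata. A sketch, assuming the component object carries path, inputs, name, and version:

```javascript
// Replace ${self.<dotted.path>} occurrences in a string with values
// looked up on the current component object. Unresolvable references
// are left untouched.
const resolveSelfVars = (value, component) =>
  value.replace(/\$\{self\.([^}]+)\}/g, (match, path) => {
    const resolved = path
      .split('.')
      .reduce((obj, key) => (obj == null ? obj : obj[key]), component)
    return resolved == null ? match : String(resolved)
  })
```

With that in place, `${self.path}/code` in Component B's serverless.yml would resolve to a path relative to Component B, not to the top-level working directory.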

Installation failing on postinstall script (Node v10.0.0 and NPM 6.0)

Installing via npm with npm i -g serverless-components fails due to the dtrace-provider module failing to build.

It seems like there could be conflicting dependencies for dtrace-provider. I also found some discussion suggesting that dtrace-provider may be looking in the wrong global node_modules folder (possibly an nvm issue?).

In any case, it doesn't seem like a problem specifically with serverless-components, but I'm curious to see if anyone has run into this issue and might be able to shed some light on it.

Stack trace follows:

node ./scripts/postinstall.js

npm WARN deprecated [email protected]: Use uuid module instead

  • @serverless-components/[email protected]
    added 12 packages from 11 contributors and updated 1 package in 55.13s

[email protected] install /Users/manafount/.nvm/versions/node/v10.0.0/lib/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dtrace-provider
node scripts/install.js

---------------░░░░⸩ ⠏ postinstall: sill install executeActions
Building dtrace-provider failed with exit code 1 and signal 0
re-run install with environment variable V set to see the build output
---------------░░░░⸩ ⠏ postinstall: sill install executeActions

  • @serverless-components/[email protected] executeActions
    added 47 packages from 108 contributors and updated 1 package in 62.828s
  • @serverless-components/[email protected]
    added 16 packages from 70 contributors and updated 1 package in 64.346s
  • @serverless-components/[email protected]
    added 13 packages from 12 contributors and updated 1 package in 65.034s
  • @serverless-components/[email protected]
    added 1 package from 4 contributors and updated 1 package in 65.621s
  • @serverless/[email protected]
    updated 1 package in 65.717s
  • @serverless-components/[email protected]
    added 1 package from 5 contributors and updated 1 package in 66.089s
  • @serverless-components/[email protected]
    added 54 packages from 86 contributors and updated 1 package in 68.07s
  • @serverless-components/[email protected]
    added 1 package from 4 contributors and updated 1 package in 68.408s
  • @serverless-components/[email protected] install executeActions
    added 12 packages from 9 contributors and updated 1 package in 69.391s
  • [email protected]
    added 36 packages from 36 contributors and updated 1 package in 70.681s
  • [email protected]
    updated 3 packages in 81.275s

AWS VPC component

Description

Users may need to create a VPC in AWS, whether for their Lambda functions or for Fargate containers.

AWS SDK for JS documentation on CreateVPC call

inputTypes

cidrBlock - string
amazonProvidedIpv6CidrBlock - boolean
instanceTenancy - string. Choices: default, dedicated or host.

outputTypes

vpcId - string
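Before calling EC2.createVpc, the component could validate the proposed inputTypes and map them to the SDK's parameter names. A sketch (validateVpcInputs is a hypothetical helper, and the CIDR check is deliberately rough):

```javascript
// Validate the proposed aws-vpc inputs and translate them into the
// parameter names EC2.createVpc expects.
const validateVpcInputs = ({
  cidrBlock,
  instanceTenancy = 'default',
  amazonProvidedIpv6CidrBlock = false
} = {}) => {
  if (!/^\d{1,3}(\.\d{1,3}){3}\/\d{1,2}$/.test(cidrBlock || '')) {
    throw new Error(`Invalid CIDR block: ${cidrBlock}`)
  }
  if (!['default', 'dedicated', 'host'].includes(instanceTenancy)) {
    throw new Error(`Invalid instanceTenancy: ${instanceTenancy}`)
  }
  return {
    CidrBlock: cidrBlock,
    InstanceTenancy: instanceTenancy,
    AmazonProvidedIpv6CidrBlock: Boolean(amazonProvidedIpv6CidrBlock)
  }
}
```

The deploy method would then pass the returned object straight to `EC2.createVpc(...)` and read `Vpc.VpcId` off the response for the vpcId output.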

Secrets and Encryption

Here are some thoughts on secrets and encryption that could be implemented in Components. With an encrypted state.json and encrypted inline secrets, working with version control would be easier.

By default, encryption could use the Node.js native crypto module with the aes-256-cbc algorithm. Optionally, the user could configure AWS KMS, GCP KMS, or a similar service to handle the encryption and the key. These optional methods could be implemented in the core or as some kind of plugin or extension.

Encrypted state.json

With the command components encrypt --state, the state file is encrypted; after that, it is encrypted every time before being written to disk or other storage.

example encrypted state.json

{
  "encrypted": "ckE+aBv0Ja9QK+GuvVMiq4+5O0OEk9+LFR1VBK+OUS4CXh02gFOJZyKF/qCFBOUcUYV6ho6ontyoQFBED6SjUbMywnS+gZ2wtyq7XMMzMhUVjtPMEO6cbalT4SRImXe5J7S0g66XnrFmIyMB8JWbkj3uqUvZgcXLSBOdAbHZZdx7z/ktXKAxtGjCu9NWK8eNFKpVV+5osnWqVMAoezTOYg=="
}

components decrypt --state would do the opposite.

Inline variables in serverless.yml

Using encrypted values in serverless.yml would allow it to be committed to version control with inline secrets.

Running components encrypt --variable some-secret-value would output the encrypted value, e.g. E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=. That value can then be used in serverless.yml via the !encrypted type.

type: my-service

inputs:
  secretToken: !encrypted |
    E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=

The !encrypted type is implemented as a custom js-yaml type + schema in the Components codebase. The schema is passed to @serverless/utils readFile -> parseFile, so that decryption is done while parsing the YAML data into a JS object.

components decrypt --variable E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw= decrypts the value and displays it as plain text.

There could also be an option to encrypt values already defined in serverless.yml; e.g. components encrypt --path inputs.secretToken would replace the value of

inputs:
  secretToken: some-secret-value

with

inputs:
  secretToken: !encrypted E5oQO0xd3nmkk/tKuaenjrMKS3XmCJIKa+wVgfzU7kw=

Any thoughts?

@brianneisler @eahefnawy @pmuens @ac360

[WIP] Programmatic UX Improvements

Collecting thoughts on UX improvements around the programmatic usage of Components...

Current Implementation

  let iam      = await context.loadComponent('iam')
  let iamState = await iam.deploy(inputs, state, context, options)
  • User is responsible for passing state and context into component dependency.
  • Not sure how they get state for that dependency.

Proposal 1

  let myIamRole      = await context.loadComponent('myIamRole')
  let myIamRoleState = await myIamRole.deploy(inputs, options)
  • Order of arguments is switched to inputs, options, state, context
  • state and context are generated by framework middleware allowing them to be component-specific. The user doesn't need to worry about this.
  • Should load by alias, not by Component type, since there may be multiple instances.
  • Tricky part is the Framework needs to know where the dependency is in the hierarchy so that it can fetch the right state.

Middleware can be added like this:

Create a wrapper function over each component method that gets that component's state and prepares context specifically for that component.

component[method] = function (inputs, options) {
  // State
  const state = loadState(src)

  // Populate any serverless variables in inputs with info that is currently available

  // Prep context with useful, component-specific info
  const context = {
    alias:           alias,
    type:            type,
    version:         version,
    namespace:       namespace,
    action:          method,
    parentComponent: parentComponent,
    lastDeploy:      state.modified,
    load:            function () {},
    cliPrompt:       function () {},
    // etc...
  }

  return originalMethod(inputs, options, state, context)
}

Support a `stage` option on deploy

Description

Similar to the Framework, components need to support the concept of stages. Unlike the Framework, this should not be supported as a config option in the yaml, since the point of components is reusability, and baking a stage into the config prevents components from being reusable.

component: netlify-site keys issue

I'm not sure if it's the desired behaviour, but each time the netlify-site component is deployed, it generates a new key in the GitHub account.

I would expect it to reuse the first one created.

Add an aws-sns-topic component

Description

Implement a basic aws-sns-topic component.

Inputs

Outputs

  • arn: string

Types

SNSDeliveryPolicy
example pulled from here https://docs.aws.amazon.com/sns/latest/api/API_SetTopicAttributes.html

{ 
  "http": {
    "defaultHealthyRetryPolicy": { 
       "minDelayTarget": <int>,
       "maxDelayTarget": <int>,
       "numRetries": <int>, 
       "numMaxDelayRetries": <int>, 
       "backoffFunction": "<linear|arithmetic|geometric|exponential>" 
    }, 
    "disableSubscriptionOverrides": <boolean>, 
    "defaultThrottlePolicy": { 
      "maxReceivesPerSecond": <int> 
    }
  } 
}

AWSTopicPolicyDocument

  • TODO: need to figure out what this looks like

Requirements

  • tests
  • documentation
  • examples

Reformat types to be upper camel case

Description

Currently, component types are kebab-case. This is limiting at the code level, since that format cannot be used as a JavaScript property name.
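The conversion itself is mechanical. A sketch of a hypothetical kebabToUpperCamel helper:

```javascript
// Convert a kebab-case component type to UpperCamelCase so it can be
// used as a JavaScript identifier or property name.
const kebabToUpperCamel = (type) =>
  type
    .split('-')
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join('')
```

Registry lookups could accept both forms during a transition by normalizing through this function.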

Cannot find module DTraceProviderBindings

I updated my brew packages on macOS and now the deploy throws this error. I have reinstalled node, npm, yarn and serverless-components, but the error still shows. I'm not sure what's broken, or whether it's related to that package.

node: v10.1.0
npm: 5.60
yarn: 1.6.0
macOS: 10.13.4

⌈  ➜   ~/Development/decoralyte/products-rest-api
⌊☻ components deploy
(node:15393) ExperimentalWarning: The fs.promises API is experimental
{ Error: Cannot find module './build/Release/DTraceProviderBindings'
    at Function.Module._resolveFilename (internal/modules/cjs/loader.js:571:15)
    at Function.Module._load (internal/modules/cjs/loader.js:497:25)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dtrace-provider/dtrace-provider.js:17:23)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/bunyan/lib/bunyan.js:34:22)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dynamodb/lib/index.js:14:20)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/node_modules/dynamodb/index.js:3:18)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at Object.<anonymous> (/Users/enricu/.config/yarn/global/node_modules/serverless-components/registry/aws-dynamodb/index.js:3:16)
    at Module._compile (internal/modules/cjs/loader.js:678:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:689:10)
    at Module.load (internal/modules/cjs/loader.js:589:32)
    at tryModuleLoad (internal/modules/cjs/loader.js:528:12)
    at Function.Module._load (internal/modules/cjs/loader.js:520:3)
    at Module.require (internal/modules/cjs/loader.js:626:17)
    at require (internal/modules/cjs/helpers.js:20:18)
    at getComponentFunctions (/Users/enricu/.config/yarn/global/node_modules/serverless-components/src/utils/components/getComponentFunctions.js:6:11)
    at getComponentsFromServerlessFile (/Users/enricu/.config/yarn/global/node_modules/serverless-components/src/utils/components/getComponentsFromServerlessFile.js:42:12)
    at process._tickCallback (internal/process/next_tick.js:68:7) code: 'MODULE_NOT_FOUND' }
{ Error: Cannot find module './build/default/DTraceProviderBindings'
    ... (same stack trace as above) ... code: 'MODULE_NOT_FOUND' }
{ Error: Cannot find module './build/Debug/DTraceProviderBindings'
    ... (same stack trace as above) ... code: 'MODULE_NOT_FOUND' }

Create set of generic component tests

Description

There are some things that all components should adhere to, and we need to ensure they don't regress.

It would be great to have a general set of tests that get run against every component during testing.

These tests should cover

idempotent removals

  • removing more than once shouldn't cause an error
  • removing when the state says that there's a resource but it's actually gone from the provider shouldn't cause an error
  • removing when the state says there's no resource but one actually exists: should this remove the resource?

idempotent deploys

  • deploying more than once with no changes shouldn't cause an error
  • deploying when the state doesn't think the resource exists should create it
  • deploying when the state thinks the resource exists but it actually doesn't should result in the resource being created

inputs

  • expected inputs should throw an error when the type doesn't match
  • any unexpected inputs should be passed through to the deploy method

outputs

  • deploying should always result in the expected outputs listed in outputTypes

what else?

  • anything?
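The idempotency checks above could be driven by one generic harness run against every component. A sketch, using a hand-rolled mock in place of a real registry component and a simplified deploy/remove signature (real components also receive state and context):

```javascript
// Run the generic idempotency checks against any component that
// exposes deploy and remove. Any throw fails the check.
const checkIdempotency = (component, inputs) => {
  component.deploy(inputs) // fresh deploy
  component.deploy(inputs) // re-deploy with no changes must not throw
  component.remove(inputs) // removal
  component.remove(inputs) // repeat removal must not throw
  return true
}
```

A test runner could loop this over every component in the registry, alongside per-component input/output type assertions.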

Getting started - slack, docs, dependency resolution, custom components, production

My main issue with the Serverless Framework is that .yml files get out of hand. In theory, serverless components should be able to solve this. I have three questions to which I was unable to find the answer:

  1. The readme says that the Slack channel is public, but attempting to join requires a @serverless.com email address.

  2. Is there a reason why the serverless docs do not mention anything regarding components at https://serverless.com/framework/docs/getting-started/ ?

  3. Do serverless components use stack sets internally (when using AWS)? How do they orchestrate the order of creation/deletion and resolve dependencies? E.g. if component A depends on component B, and I update component B's version inside component A, and component B fails to upgrade and rolls back, does component A roll back too? Is it possible for two components to end up in an invalid state?

In the README I read:

However the framework ensures that your state file always reflects the correct state of your infrastructure setup (even if something goes wrong during deployment / removal).

How can a local state file work reliably when a team of engineers works together? What if 2 developers deploy at the same time? Will they have different state files? The actual state is in the cloud, not on a local development machine. Do you put the state file in version control?

  4. It seems like provisioning is put back in the hands of the developer:

From your README:

const deploy = (inputs, context) => {
  // lambda provisioning logic
  const res = doLambdaDeploy()

  // return outputs
  return {
    arn: res.FunctionArn,
    name: res.FunctionName
  }
}

However, aren't you re-inventing Terraform here? Also, isn't the point of using CloudFormation that custom provisioning logic isn't necessary? If I want to use AWS, can I still rely on CloudFormation to provision my Lambda and return the ARN as an output type?

  5. Custom components & dependency resolution

if I create 2 components, and one component depends on the other:

type: someComponent

components:
  mySpecialThing:
    type: my-custom-component

How is someComponent able to resolve the location of my-custom-component, if I have not defined where it is? Is there a package.json file or something where you specify it as a dependency? Does it read from node_modules or similar?

Move version declarations out of component type property

Problem

Our current approach to version declarations has been to add them inline to the component usage like this...

components:
  myComponent:
     type: [email protected]

This has a number of issues associated with it.

  1. Versions suddenly need to be managed in multiple places when using the same component multiple times, and it becomes very easy to make a version mistake.

Example

components:
  foo1:
     type: [email protected]
  foo2:
    type: [email protected]
  foo3: 
    type: [email protected]

If I upgrade the component, it's possible to make a mistake and accidentally miss one of the component versions

Example

components:
  foo1:
     type: [email protected]
  foo2:
    type: [email protected]
  foo3: 
    type: [email protected]   # whoops, forgot to update this
  1. If i'm using components programmatically within the component, I have no mechanism for statically declaring which components i'm using within the component. This means that component cannot programmatically download the necessary dependencies ahead of time, they actually have to download at runtime which is suboptimal.

Example

// index.js
function deploy(inputs, context) {
  const foo = context.load('[email protected]', inputs)  // no way to discover this without running the actual code
}

Solution

This all results in a situation where dependency and version management are more difficult than they should be.

My suggestion for fixing this is to declare component dependencies separately from their actual usage.

type: my-component

dependencies:
  foo: ^1.0.0

components:
  myFoo:
    type: foo # this uses the version supplied in the dependencies above

This also gives us a mechanism for declaring components that are used programmatically

type: my-component

dependencies:
  bar: ^1.0.0 

# no declaration of components property
// index.js
function deploy(inputs, context) {
  const bar = context.load('bar', inputs)  // uses the version declared in serverless.yml dependencies
}
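The resolution behavior could be sketched roughly like this. Note that `resolveComponentRef` is a hypothetical helper and the `dependencies` lookup is an illustrative assumption, not the Framework's actual implementation:

```javascript
// Hypothetical sketch: resolve a bare component name against the
// `dependencies` map declared in serverless.yml.
function resolveComponentRef(name, dependencies) {
  // already pinned inline, e.g. 'foo@1.0.0' — use as-is
  if (name.includes('@')) return name
  const range = dependencies[name]
  if (!range) {
    throw new Error(`Component "${name}" is not declared in dependencies`)
  }
  // naive resolution: strip the range operator to get a concrete version
  const version = range.replace(/^[\^~]/, '')
  return `${name}@${version}`
}

// resolveComponentRef('bar', { bar: '^1.0.0' }) → 'bar@1.0.0'
```

With this in place, `context.load('bar', inputs)` could resolve the version from the declared dependencies before downloading anything, making static analysis of a component's dependency tree possible.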

Add aws-sns-subscription components

Description

Add an aws-sns-subscription component for setting up a subscription on an SNS topic.

Inputs

  • topic: aws-sns-topic | arn
  • protocol: enum(http, https, email, email-json, sms, sqs, application, lambda)
  • endpoint: string
    For the http protocol, the endpoint is a URL beginning with "http://"
    For the https protocol, the endpoint is a URL beginning with "https://"
    For the email protocol, the endpoint is an email address
    For the email-json protocol, the endpoint is an email address
    For the sms protocol, the endpoint is a phone number of an SMS-enabled device
    For the sqs protocol, the endpoint is the ARN of an Amazon SQS queue
    For the application protocol, the endpoint is the EndpointArn of a mobile app and device.
    For the lambda protocol, the endpoint is the ARN of an AWS Lambda function.

Outputs

  • arn: AwsArn
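A minimal serverless.yml sketch of how this component might be used (the component names, the `${myTopic.arn}` reference syntax, and the example ARN are illustrative assumptions):

```yaml
type: my-application

components:
  myTopic:
    type: aws-sns-topic
    inputs:
      name: my-topic
  mySubscription:
    type: aws-sns-subscription
    inputs:
      topic: ${myTopic.arn}
      protocol: lambda
      endpoint: arn:aws:lambda:us-east-1:123456789012:function:my-function
```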

Requirements

  • tests
  • documentation
  • examples

run error

Description

node: v10.0.0
npm: 5.6.0

Additional Data

{ Error: ENOENT: no such file or directory, open '/usr/local/lib/node_modules/serverless-components/registry/[email protected]/serverless.yml'
errno: -2,
code: 'ENOENT',
syscall: 'open',
path: '/usr/local/lib/node_modules/serverless-components/registry/[email protected]/serverless.yml' }

AWS Components - Add credentials as inputs

Currently, the AWS Components do not have credentials as input types. They depend on the user having AWS credentials set as environment variables, or stored locally in their home directory (e.g. in ~/.aws/credentials).

Enabling passing in credentials makes it explicit which credentials the component is using, which is especially helpful in a situation where the component has AWS components as children. It also enables components within the same project to be deployed to different AWS accounts.

Would love to see AWS credentials inputs on all AWS components as non-required input types.
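A minimal sketch of the proposed behavior: prefer credentials passed as inputs and fall back to environment variables. The input shape and field names below are illustrative assumptions, not an existing component API:

```javascript
// resolveCredentials: explicit inputs win; otherwise fall back to the
// standard AWS environment variables.
function resolveCredentials(inputs = {}, env = process.env) {
  if (inputs.credentials && inputs.credentials.accessKeyId) {
    return inputs.credentials
  }
  return {
    accessKeyId: env.AWS_ACCESS_KEY_ID,
    secretAccessKey: env.AWS_SECRET_ACCESS_KEY
  }
}
```

Because the input is optional, existing configurations keep working unchanged, while multi-account projects can pass distinct credentials per component.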

Add aws-cloudwatch-metric-alarm component

Description

Add an aws-cloudwatch-metric-alarm component for setting up an AWS CloudWatch metric alarm.

Inputs

  • name: string

  • description: string

  • actionsEnabled: boolean

  • okActions: Array<string>

  • alarmActions: Array<string>

  • insufficientDataActions: Array<string>

  • metricName: string

  • namespace: string

  • statistic: string

  • extendedStatistic: string

  • dimensions: Array<{name: string (min:1, max: 255), value: string (min:1, max: 255)}>

  • period: int (valid values: 10, 30, or any multiple of 60; max: 86400)

  • unit: enum
    "Seconds"
    "Microseconds"
    "Milliseconds"
    "Bytes"
    "Kilobytes"
    "Megabytes"
    "Gigabytes"
    "Terabytes"
    "Bits"
    "Kilobits"
    "Megabits"
    "Gigabits"
    "Terabits"
    "Percent"
    "Count"
    "Bytes/Second"
    "Kilobytes/Second"
    "Megabytes/Second"
    "Gigabytes/Second"
    "Terabytes/Second"
    "Bits/Second"
    "Kilobits/Second"
    "Megabits/Second"
    "Gigabits/Second"
    "Terabits/Second"
    "Count/Second"
    "None"

  • evaluationPeriods: int

  • datapointsToAlarm: int

  • threshold: float

  • comparisonOperator: enum
    "GreaterThanOrEqualToThreshold"
    "GreaterThanThreshold"
    "LessThanThreshold"
    "LessThanOrEqualToThreshold"

  • treatMissingData: enum (breaching, notBreaching, ignore, missing)

  • evaluateLowSampleCountPercentile: enum (evaluate, ignore)

Outputs

  • name: string
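A minimal serverless.yml sketch of how this component might be used (the component name and input values below are illustrative assumptions):

```yaml
type: my-application

components:
  myAlarm:
    type: aws-cloudwatch-metric-alarm
    inputs:
      name: my-function-errors
      namespace: AWS/Lambda
      metricName: Errors
      statistic: Sum
      period: 60
      evaluationPeriods: 1
      threshold: 1
      comparisonOperator: GreaterThanOrEqualToThreshold
      treatMissingData: notBreaching
```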

Requirements

  • tests
  • documentation
  • examples

Feature Request: Auto-load .env file

Could we auto-load .env files in the root of the parent component? This would allow users to specify environment variables in their serverless.yml without having to save them to their bash shell.
Otherwise, the user has to create custom deployment logic.

Overhaul Google Cloud Function component

Google Cloud Function component analysis

In #253 we've updated the GCF component so that it can be used to (re)deploy and remove Google Cloud Functions. While working on this we've found other issues / design flaws we need to tackle. Here's a quick writeup which describes improvements, problems and questions we've uncovered during this codebase analysis.


Implementation against new type system:

  • Add index.js file
  • Add serverless.yml file
  • Add package.json file
    • npm install archiver package
  • Add README.md file
    • Write documentation about inner-workings
    • Add reference to component in root README.md file
  • Implement getSinkConfig function
  • Implement pack function
    • Take care of proper shim handling
  • Implement deploy function
    • Implement create functionality
      • Implement bucket creation logic
    • Implement update functionality
      • Implement bucket cleanup logic
  • Implement remove function
    • Implement remove functionality
      • Implement "full" bucket cleanup logic
  • Write tests
    • index.test.js
    • ... (for all other files)

Todos (check after following implementation above):

  • Make it possible to switch from httpsTrigger to eventTrigger and vice versa
  • Make sure that component inputs support "all" possible configurations (according to GCF docs)
    • availableMemoryMb
    • entryPoint
    • description
    • environmentVariables
    • eventTrigger
    • httpsTrigger
    • labels
    • maxInstances
    • name
    • network
    • runtime
    • sourceArchiveUrl
    • sourceRepository
    • sourceUploadUrl
    • timeout
  • Revisit components outputs and figure out which ones make sense (remove others)
  • Await function creation (this might slow down function creation)
  • Replace keyfile from inputs with clientEmail and privateKey
  • Restructure / further refactor codebase to make it even easier to (unit) test

Questions:

  • Is it possible for a Google Cloud Function to switch from a httpsTrigger to an eventTrigger without re-deploying it?
  • Should we wait for the response of function creations / removals?
  • Should we use the storage bucket component programmatically behind the scenes or use our own, specialized implementation within our Google Cloud Function component?

Problems / Limitations:

  • One can only create a new bucket every 2 seconds (see: https://cloud.google.com/storage/quotas). How do we deal with that in a project with many Google Cloud Functions, where we create one bucket per function?
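One possible approach to the quota problem is to serialize bucket creations through a queue that enforces a minimum interval between calls. This is only a sketch of the idea, with the interval made configurable for testing:

```javascript
// createThrottler(minIntervalMs) returns a function that queues async
// tasks so that consecutive tasks start at least minIntervalMs apart.
function createThrottler(minIntervalMs) {
  let last = 0
  let chain = Promise.resolve()
  return (fn) => {
    chain = chain.then(async () => {
      const wait = Math.max(0, last + minIntervalMs - Date.now())
      if (wait > 0) await new Promise((resolve) => setTimeout(resolve, wait))
      last = Date.now()
      return fn()
    })
    return chain
  }
}
```

A deploy could wrap every bucket-creation call in one shared throttler (2000 ms for the real quota), trading deploy speed for staying inside the limit.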

Gitlab Component

Description

I think it would be great to have GitLab and Netlify support for static websites.

Google Cloud Storage Bucket component

Description

Implement a google-cloud-storage-bucket component. Specifically, this component will be used to set up and manage a Google Cloud Storage bucket. This is considered a "low level" component and should adhere as closely as possible to Google's API for inserting a new bucket.

API

The component inputTypes should match the parameters which can be defined in the API's insert operation.

The components' deploy and remove methods should behave as idempotent operations.

For deploy, if a bucket with the name already exists, the component should attempt to update the bucket with the given inputs. If the bucket cannot be updated, an error should be thrown.

For remove, if a bucket with the given name does not exist, an error should not be thrown; instead, the error should be swallowed and execution allowed to proceed.
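The idempotency contract above can be sketched against a hypothetical `api` object with get/insert/update/remove methods (stand-ins for the real Google Cloud Storage client calls, which are not shown here):

```javascript
// deploy: create the bucket if it is absent, otherwise update in place.
async function deployBucket(api, inputs) {
  try {
    await api.get(inputs.name)
  } catch (err) {
    if (err.code === 404) return api.insert(inputs) // bucket absent: create
    throw err
  }
  return api.update(inputs.name, inputs) // bucket exists: update
}

// remove: a missing bucket is not an error.
async function removeBucket(api, name) {
  try {
    await api.remove(name)
  } catch (err) {
    if (err.code !== 404) throw err // swallow "not found", rethrow the rest
  }
}
```

Any update failure still surfaces as a thrown error, matching the requirement that a non-updatable bucket fails the deploy.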

inputTypes

name - string

The name of the bucket. Must adhere to the bucket naming conventions

project - string

A valid API project identifier.

predefinedAcl - string (optional)

Apply a predefined set of access controls to this bucket.
Acceptable values are:

  • "authenticatedRead": Project team owners get OWNER access, and allAuthenticatedUsers get READER access.
  • "private": Project team owners get OWNER access.
  • "projectPrivate": Project team members get access according to their roles.
  • "publicRead": Project team owners get OWNER access, and allUsers get READER access.
  • "publicReadWrite": Project team owners get OWNER access, and allUsers get WRITER access.
predefinedDefaultObjectAcl - string(optional)

Apply a predefined set of default object access controls to this bucket.
Acceptable values are:

  • "authenticatedRead": Object owner gets OWNER access, and allAuthenticatedUsers get READER access.
  • "bucketOwnerFullControl": Object owner gets OWNER access, and project team owners get OWNER access.
  • "bucketOwnerRead": Object owner gets OWNER access, and project team owners get READER access.
  • "private": Object owner gets OWNER access.
  • "projectPrivate": Object owner gets OWNER access, and project team members get access according to their roles.
  • "publicRead": Object owner gets OWNER access, and allUsers get READER access.
projection - string(optional)

Set of properties to return. Defaults to noAcl, unless the bucket resource specifies acl or defaultObjectAcl properties, in which case it defaults to full.
Acceptable values are:

  • "full": Include all properties.
  • "noAcl": Omit owner, acl and defaultObjectAclproperties.
userProject - string (optional)

The project to be billed for this request.

acl - array (optional)

Access controls on the bucket, containing one or more bucketAccessControls Resources.

billing - object (optional)

The bucket's billing configuration.

billing.requesterPays - boolean (optional)

When set to true, Requester Pays is enabled for this bucket.

cors - array (optional)

The bucket's Cross-Origin Resource Sharing (CORS) configuration.

cors[].maxAgeSeconds - integer (optional)

The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.

cors[].method - array<string> (optional)

The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: "*" is permitted in the list of methods, and means "any method".

cors[].origin - array<string> (optional)

The list of Origins eligible to receive CORS response headers. Note: "*" is permitted in the list of origins, and means "any Origin".

cors[].responseHeader - array<string> (optional)

The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.

defaultObjectAcl - array<Object> (optional)

Default access controls to apply to new objects when no ACL is provided.

defaultObjectAcl[].entity - string (optional)

The entity holding the permission, in one of the following forms: user-userId, user-emailAddress, group-groupId, group-emailAddress, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. Examples: the user liz@example.com would be user-liz@example.com; the group example@googlegroups.com would be group-example@googlegroups.com; to refer to all members of the G Suite for Business domain example.com, the entity would be domain-example.com.

defaultObjectAcl[].role - string (optional)

The access permission for the entity. 
Acceptable values are:

  • "OWNER"
  • "READER"
encryption - object (optional)

Encryption configuration for a bucket.

encryption.defaultKmsKeyName (optional) - string

A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified.

labels - object (optional)

User-provided labels, in key/value pairs.

labels.(key) - string (optional)

An individual label entry.

lifecycle - object (optional)

The bucket's lifecycle configuration. See lifecycle management for more information.

location - string (optional)

The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.

logging - object (optional)

The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

logging.logBucket - string (optional)

The destination bucket where the current bucket's logs should be placed.

logging.logObjectPrefix - string (optional)

A prefix for log object names.

storageClass - string (optional)

The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage.
Acceptable Values

  • MULTI_REGIONAL
  • REGIONAL
  • STANDARD
  • NEARLINE
  • COLDLINE
  • DURABLE_REDUCED_AVAILABILITY
    If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.
versioning - object (optional)

The bucket's versioning configuration.

versioning.enabled - boolean (optional)

While set to true, versioning is fully enabled for this bucket.

website - object (optional)

The bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.

website.mainPageSuffix - string (optional)

If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.

website.notFoundPage - string (optional)

If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result.

outputTypes

acl - array

Access controls on the bucket, containing one or more bucketAccessControls Resources.

billing - object

The bucket's billing configuration.

billing.requesterPays - boolean

When set to true, Requester Pays is enabled for this bucket.

cors - array<Object>

The bucket's Cross-Origin Resource Sharing (CORS) configuration.

cors[].maxAgeSeconds - integer

The value, in seconds, to return in the Access-Control-Max-Age header used in preflight responses.

cors[].method - array<string>

The list of HTTP methods on which to include CORS response headers, (GET, OPTIONS, POST, etc) Note: "*" is permitted in the list of methods, and means "any method".

cors[].origin - array<string>

The list of Origins eligible to receive CORS response headers. Note: "*" is permitted in the list of origins, and means "any Origin".

cors[].responseHeader - array<string>

The list of HTTP headers other than the simple response headers to give permission for the user-agent to share across domains.

defaultObjectAcl - array<Object>

Default access controls to apply to new objects when no ACL is provided.

defaultObjectAcl[].bucket - string

The name of the bucket.

defaultObjectAcl[].domain - string

The domain associated with the entity, if any.
 

defaultObjectAcl[].email - string

The email address associated with the entity, if any.

defaultObjectAcl[].entity - string

The entity holding the permission, in one of the following forms: user-userId, user-emailAddress, group-groupId, group-emailAddress, domain-domain, project-team-projectId, allUsers, allAuthenticatedUsers. Examples: the user liz@example.com would be user-liz@example.com; the group example@googlegroups.com would be group-example@googlegroups.com; to refer to all members of the G Suite for Business domain example.com, the entity would be domain-example.com.

defaultObjectAcl[].entityId - string

The ID for the entity, if any.
 

defaultObjectAcl[].etag - string

HTTP 1.1 Entity tag for the access-control entry.

defaultObjectAcl[].generation - long

The content generation of the object, if applied to an object.
 

defaultObjectAcl[].id - string

The ID of the access-control entry.
 

defaultObjectAcl[].kind - string

The kind of item this is. For object access control entries, this is always storage#objectAccessControl.  

defaultObjectAcl[].object - string

The name of the object, if applied to an object.
 

defaultObjectAcl[].projectTeam - object

The project team associated with the entity, if any.
 

defaultObjectAcl[].projectTeam.projectNumber - string

The project number.
 

defaultObjectAcl[].projectTeam.team - string

The team.
Acceptable values are:

  • "editors"
  • "owners"
  • "viewers"

defaultObjectAcl[].role - string

The access permission for the entity. 
Acceptable values are:

  • "OWNER"
  • "READER"
defaultObjectAcl[].selfLink - string

The link to this access-control entry.

encryption - object

Encryption configuration for a bucket.

encryption.defaultKmsKeyName (optional) - string

A Cloud KMS key that will be used to encrypt objects inserted into this bucket, if no encryption method is specified

etag - string

HTTP 1.1 Entity tag for the bucket.  

id - string

The ID of the bucket. For buckets, the id and name properties are the same.
 

kind - string

The kind of item this is. For buckets, this is always storage#bucket.

labels - object

User-provided labels, in key/value pairs.

labels.(key) - string

An individual label entry.

lifecycle - object (optional)

The bucket's lifecycle configuration. See lifecycle management for more information.

lifecycle.rule - array<Object>

A lifecycle management rule, which is made of an action to take and the condition(s) under which the action will be taken.

lifecycle.rule[].action - object

The action to take.
 

lifecycle.rule[].action.storageClass - string

Target storage class. Required iff the type of the action is SetStorageClass.
 

lifecycle.rule[].action.type - string

Type of the action. Currently, only Delete and SetStorageClass are supported.
Acceptable values are:

  • "Delete"
  • "SetStorageClass"
 

lifecycle.rule[].condition - object

The condition(s) under which the action will be taken.

lifecycle.rule[].condition.age - integer

Age of an object (in days). This condition is satisfied when an object reaches the specified age.

lifecycle.rule[].condition.createdBefore - date

A date in RFC 3339 format with only the date part (for instance, "2013-01-15"). This condition is satisfied when an object is created before midnight of the specified date in UTC.

lifecycle.rule[].condition.isLive - boolean

Relevant only for versioned objects. If the value is true, this condition matches live objects; if the value is false, it matches archived objects.

lifecycle.rule[].condition.matchesStorageClass - array<string>

Objects having any of the storage classes specified by this condition will be matched.
Values include 

  • MULTI_REGIONAL
  • REGIONAL
  • NEARLINE
  • COLDLINE
  • STANDARD
  • DURABLE_REDUCED_AVAILABILITY
lifecycle.rule[].condition.numNewerVersions - integer

Relevant only for versioned objects. If the value is N, this condition is satisfied when there are at least N versions (including the live version) newer than this version of the object.

location - string

The location of the bucket. Object data for objects in the bucket resides in physical storage within this region. Defaults to US. See the developer's guide for the authoritative list.

logging - object

The bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

logging.logBucket - string

The destination bucket where the current bucket's logs should be placed.

logging.logObjectPrefix - string

A prefix for log object names.

metageneration - long

The metadata generation of this bucket.

name - string

The name of the bucket.
 

owner - object

The owner of the bucket. This is always the project team's owner group.
 

owner.entity - string

The entity, in the form project-owner-projectId.

owner.entityId - string

The ID for the entity.
 

projectNumber - unsigned long

The project number of the project the bucket belongs to.
 

selfLink - string

The URI of this bucket.
 

storageClass - string (optional)

The bucket's default storage class, used whenever no storageClass is specified for a newly-created object. This defines how objects in the bucket are stored and determines the SLA and the cost of storage.
Acceptable Values

  • MULTI_REGIONAL
  • REGIONAL
  • STANDARD
  • NEARLINE
  • COLDLINE
  • DURABLE_REDUCED_AVAILABILITY
    If this value is not specified when the bucket is created, it will default to STANDARD. For more information, see storage classes.
timeCreated - datetime

The creation time of the bucket in RFC 3339 format.
 

updated - datetime

The modification time of the bucket in RFC 3339 format.

versioning - object (optional)

The bucket's versioning configuration.

versioning.enabled - boolean (optional)

While set to true, versioning is fully enabled for this bucket.

website - object (optional)

The bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See the Static Website Examples for more information.

website.mainPageSuffix - string (optional)

If the requested object path is missing, the service will ensure the path has a trailing '/', append this suffix, and attempt to retrieve the resulting object. This allows the creation of index.html objects to represent directory pages.

website.notFoundPage - string (optional)

If the requested object path is missing, and any mainPageSuffix object is missing, if applicable, the service will return the named object from this bucket as the content for a 404 Not Found result.

minimal serverless.yml example of using this component

type: my-application
version: 0.0.2

components:
  myGoogleCloudStorageBucket:
    type: google-cloud-storage-bucket
    inputs:
      name: my-unique-bucket
      project: my-project

Implementation

  • tests written
  • inputs and outputs documented
  • example of using component added to examples folder

Input type aliases

Description

It would be convenient to be able to have alternate names for inputs that are aliases of the primary name.

type: aws-s3-bucket

inputTypes:
  bucket:
    type: string
    alias: 
      - 'name'

Which could be used like this...

type: my-component

components:
  myBucket:
    type: aws-s3-bucket
    inputs:
      bucket: my-bucket

or like this...

type: my-component

components:
  myBucket:
    type: aws-s3-bucket
    inputs:
      name: my-bucket

This would then show up in the inputs under the primary input name:

const deploy = (inputs, context) => {
  console.log(inputs) // { bucket: 'my-bucket' }
}
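Resolution could be implemented by normalizing aliased keys back to their primary names before the inputs reach the component. The following is a sketch; `resolveAliases` is a hypothetical helper and the inputTypes/alias shape simply mirrors the proposal above:

```javascript
// Map any aliased input key back to its primary inputTypes name.
function resolveAliases(inputTypes, inputs) {
  const resolved = {}
  for (const [key, value] of Object.entries(inputs)) {
    const primary = Object.keys(inputTypes).find(
      (name) => name === key || (inputTypes[name].alias || []).includes(key)
    )
    resolved[primary || key] = value
  }
  return resolved
}

// resolveAliases({ bucket: { type: 'string', alias: ['name'] } }, { name: 'my-bucket' })
// → { bucket: 'my-bucket' }
```

One open question with this design is conflict handling: if a user supplies both `bucket` and `name`, the component would likely need to reject the inputs rather than silently pick one.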
