hammerstonedev / sidecar

Deploy and execute AWS Lambda functions from your Laravel application.

Home Page: https://hammerstone.dev/sidecar/docs/main

License: MIT License

PHP 99.53% Shell 0.47%
laravel lambda serverless aws-lambda sidecar

sidecar's Introduction

Sidecar for Laravel


Deploy and execute AWS Lambda functions from your Laravel application.

Read the full docs at hammerstone.dev/sidecar/docs.

Follow me on Twitter for more updates: twitter.com/aarondfrancis.

If you're a visual learner, watch the Laracasts series.

To install, require the package via Composer: composer require hammerstone/sidecar

This package is still under development; please open issues for anything you run into.

What Sidecar Does

Sidecar packages, creates, deploys, and executes Lambda functions from your Laravel application.

You can write functions in any of the following runtimes and execute them straight from PHP:

  • Node.js 20
  • Node.js 18
  • Node.js 16
  • Python 3.12
  • Python 3.11
  • Python 3.10
  • Python 3.9
  • Python 3.8
  • Java 21
  • Java 17
  • Java 11
  • Java 8
  • .NET 8
  • .NET 7
  • .NET 6
  • Ruby 3.3
  • Ruby 3.2
  • OS-only runtime (Amazon Linux 2023)
  • OS-only runtime (Amazon Linux 2)

Any runtime that Lambda supports, you can use!

Sidecar is maintained by Aaron Francis; go follow me on Twitter!

What It Looks Like

Every Sidecar Function requires two things:

  • A PHP Class
  • Files that you want deployed to Lambda

For example, if we wanted to use Node on Lambda to generate an og:image for each of our blog posts, we would first set up a simple PHP class called OgImage.

App\Sidecar\OgImage.php

namespace App\Sidecar;

use Hammerstone\Sidecar\LambdaFunction;

class OgImage extends LambdaFunction
{
    public function handler()
    {
        // Define your handler function.
        return 'lambda/image.handler';
    }

    public function package()
    {
        // All files and folders needed for the function.
        return [
            'lambda',
        ];
    }
}

That's it! There are a lot more options, but that's all that is required.
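
For a taste of those extra options, here is a hedged sketch of optional overrides that could be added to the same class. The memory(), timeout(), and layers() method names all come up in the issues further down this page; the values shown are placeholders, so check the docs before relying on them.

// Inside App\Sidecar\OgImage (optional overrides; placeholder values).

public function memory()
{
    // Memory allocated to the Lambda function, in megabytes.
    return 1024;
}

public function timeout()
{
    // Maximum execution time, in seconds.
    return 30;
}

public function layers()
{
    // ARNs of any Lambda layers the function should use
    // (this example ARN is copied from an issue below).
    return [
        'arn:aws:lambda:us-west-1:764866452798:layer:chrome-aws-lambda:31',
    ];
}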

The second thing you'd need is your function's "handler", in this case a JavaScript file.

Here's a simple JS file that could serve as our handler:

resources/lambda/image.js

const {createCanvas} = require('canvas')

exports.handler = async function (event) {
    const canvas = createCanvas(1200, 630)
    const context = canvas.getContext('2d')

    context.font = 'bold 70pt Helvetica'
    context.textAlign = 'center'
    context.fillStyle = '#3574d4'

    // Read the text out of the event passed in from PHP.
    context.fillText(event.text, 600, 170);

    // Return an image.
    return canvas.toDataURL('image/jpeg');
}

With those files created, you can deploy this function to Lambda:

php artisan sidecar:deploy --activate

And then execute it straight from your Laravel app!

web.php

use App\Sidecar\OgImage;

Route::get('/ogimage', function () {
    return OgImage::execute([
        'text' => 'PHP to JS and Back Again!'
    ]);
});

Sidecar passes the payload from execute over to your JavaScript function. Your JavaScript function generates an image and sends it back to PHP.
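
Since the JavaScript handler returns a data URL rather than raw image bytes, the route above would respond with a long base64 string. Below is a hedged sketch of turning that into a real image response; it assumes the settled result exposes the function's return value via a body() method, so confirm the exact accessor against the Sidecar docs.

use App\Sidecar\OgImage;

Route::get('/ogimage', function () {
    $result = OgImage::execute([
        'text' => 'PHP to JS and Back Again!',
    ]);

    // The handler returned something like "data:image/jpeg;base64,<payload>".
    // body() is an assumption here; trim() strips any surrounding quotes.
    $dataUrl = trim($result->body(), '"');
    [, $base64] = explode(',', $dataUrl, 2);

    return response(base64_decode($base64))
        ->header('Content-Type', 'image/jpeg');
});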

Sidecar reduces the complexity of deploying small bits of code to Lambda.

Why Sidecar Exists

AWS Lambda is a powerful service that allows you to run code without provisioning or thinking about servers.

Laravel Vapor brought that power to Laravel. Using Vapor, you can run your plain ol' Laravel apps on a serverless platform and get incredible speed, security, and reliability.

Using Lambda through Vapor is a wonderful developer experience, but there are times, when building your application, that you need to run just one or two Node functions. Common use cases could be taking screenshots with headless Chrome, generating images, or doing server-side rendering of your JavaScript frontend.

Or maybe you want to run a Python script without configuring a server? Or a single Ruby script. Or even Java!

When running on a serverless platform, it's not quite as easy as installing Node and running your functions. You don't have access to the server! So you end up deploying a single Vercel or Netlify function and calling it over HTTP or just forgetting the thing altogether.

Sidecar brings the ease of Vapor to those non-PHP functions.

What Sidecar Doesn't Do

Sidecar does not handle any API Gateway, Databases, Caches, etc. The only thing Sidecar concerns itself with is packaging, creating, deploying, and executing Lambda functions.

Sidecar does not provide a way to execute a function via HTTP. You must execute it from your Laravel app through the provided methods.

If you need those other services, you are encouraged to use the instances that Vapor has set up for you, or set them up yourself.

sidecar's People

Contributors

aarondfrancis, adrianb93, andresayej, bakerkretzmar, benbjurstrom, binaryk, bookwyrm, clarkeash, datashaman, drjdr, felixdorn, froelund, inxilpro, jelleroorda, joedixon, jryd, lukeraymonddowning, maurocasas, nexxai, nuernbergera, owenconti, sfioritto, stefanzweifel, stylecibot, tominal, w00key, wilsenhc


sidecar's Issues

Activating in production fails while it works locally

I'm deploying my app and got this:

[Sidecar] Activating function Noah\Sidecar\CompressPDF.
            ↳ Environment variables not managed by Sidecar. Skipping.

  In WrappedHttpHandler.php line 195:

    Error executing "ListVersionsByFunction" on "https://lambda.us-west-2.amazonaws.com/2015-03-31/functions/SC-Noah-Club-staging-Noah-Sidecar-CompressPDF/versions?MaxItems=50"; AWS HTTP error: Client error: `GET https://lambda.us-west-2.amazonaws.com/2015-03-31/functions/SC-Noah-Club-staging-Noah-Sidecar-CompressPDF/versions?MaxItems=50` resulted in a `404 Not Found` response:
    {"Type":"User","Message":"Function not found: arn:aws:lambda:us-west-2:864272616696:function:SC-Noah-Club-staging-Noah-S (truncated...)
     ResourceNotFoundException (client): Function not found: arn:aws:lambda:us-west-2:864272616696:function:SC-Noah-Club-staging-Noah-Sidecar-CompressPDF - {"Type":"User","Message":"Function not found: arn:aws:lambda:us-west-2:864272616696:function:SC-Noah-Club-staging-Noah-Sidecar-CompressPDF"}

  In RequestException.php line 113:

    Client error: `GET https://lambda.us-west-2.amazonaws.com/2015-03-31/functions/SC-Noah-Club-staging-Noah-Sidecar-CompressPDF/versions?MaxItems=50` resulted in a `404 Not Found` response:
    {"Type":"User","Message":"Function not found: arn:aws:lambda:us-west-2:864272616696:function:SC-Noah-Club-staging-Noah-S (truncated...)

📢 What do you use Sidecar for?

Hey yall!

I'm looking for some more use cases to mention in the docs, as my "generate an image" ones are relatively uninspired.

So... what do you use Sidecar for? What use cases, and what runtimes?

Thank you 🙌

The variable outside the `lambda_handler` method is not preserved in subsequent requests

I have a python script like this:

from sentence_transformers import SentenceTransformer

model = None

def get_model():
    global model

    if model is not None:
        return model

    model = SentenceTransformer('all-MiniLM-L6-v2')

    return model

def lambda_handler(event, context):
    model = get_model()

When model = SentenceTransformer('all-MiniLM-L6-v2') is initialized, it downloads the model and this process takes a while. So I put it in the get_model method and return the model from the previous request on subsequent requests to save time.

However, when I execute the lambda function via Sidecar within a route:

\App\Sidecar\Functions\Test::execute()

I got a timeout for that request, so I put that code into a job and ran it on a queue. This time it ran successfully, and I expected the get_model method to return the model from the first run when I re-visited the above route, but it didn't. However, it did return the cached model when I dispatched another job to the queue (I knew because this time the job ran much faster).

So I wonder: does AWS Lambda create a different instance for queue jobs and HTTP requests?

Default deployment user can't read Docker image from ECR

I ran into an issue where Sidecar couldn't use a Docker image I had uploaded to ECR; I got an AWS error saying that I didn't have permissions (a pretty generic-looking 403). I went into the AWS console, gave the sidecar-deployment-user user full ECR access, tried again, and everything worked fine.

I think that the default deployment user created in the CreateDeploymentUser needs one or two of these permissions so that it can read/pull images in ECR in my account.

I'll try to PR something to include these this week once I've figured out which specific permissions it needs.

I also have almost no experience with AWS so I could easily be missing something here!

[bug] 'functionName' failed to satisfy constraint

Context: My APP_NAME has spaces in it:

APP_NAME='Name of the App'

Issue: I cannot deploy a function because the function name contains spaces from the APP_NAME:

1 validation error detected: Value 'SC-Name of the App-local-Sidecar-RenderOgImage' at 'functionName' failed to satisfy constraint: Member must satisfy regular expression pattern: (arn:(aws[a-zA-Z-]*)?:lambda:)?([a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\\d{1}:)?(\\d{12}:)?(function:)?([a-zA-Z0-9-_\\.]+)(:(\\$LATEST|[a-zA-Z0-9-_]+))?

Deleted Lambdas, cannot deploy anymore

Hey there!

Most amazing package in a long time, but I cannot seem to debug what I've done wrong.
I had 2 functions and deleted them from AWS to deploy fresh, because I was getting really long delays on deploy when it usually takes seconds.

I am now getting this error

Error executing "Invoke" on "https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/SC-E-local-Sidecar-Zoom-EnableCustomLivestreaming%3Aactive/invocations"; AWS HTTP error: Client error: `POST https://lambda.us-east-1.amazonaws.com/2015-03-31/functions/SC-E-local-Sidecar-Zoom-EnableCustomLivestreaming%3Aactive/invocations` resulted in a `404 Not Found` response:
{"Message":"Function not found: arn:aws:lambda:us-east-1:11111:function:SC-E-local-Sidecar-Zoom-EnableCustom (truncated...)
 ResourceNotFoundException (client): Function not found: arn:aws:lambda:us-east-1:111111:function:SC-E-local-Sidecar-Zoom-EnableCustomLivestreaming:active - {"Message":"Function not found: arn:aws:lambda:us-east-1:11111111:function:SC-E-local-Sidecar-Zoom-EnableCustomLivestreaming:active","Type":"User"}

Any idea how I can "clear" this issue?

Thanks again!

💡 "Fire and Forget" Execution of Sidecar Lambdas

We have a use case for Sidecar where we want to "fire and forget", where our function needs to do some work, but we don't care about the result.

As such, we don't want to be waiting around for the result.

The issue with the existing asynchronous functionality is that you still need to wait for the requests to settle, otherwise the function won't be invoked.

From some initial diving, the AWS SDK is using Guzzle under-the-hood and so this is probably a limitation of Guzzle more than it is this package or the SDK itself.

I think there's a valid use case here, where people have Lambda functions that perform "background tasks" or "processing". These functions don't return anything but perform a set of actions, and as such you don't want to hold up execution of your application while they run.
Some examples that spring to mind:

  • image processing
  • fetching meta data
  • audio/image/video optimisation

For our specific use case, we have an event that takes place in our system and we want to dispatch a sidecar function.
The sidecar function polls an API looking for a particular flag.
When that flag is present, it runs a new API call and then exits.

We've had this running as a queued job in our application, but when our queue is occasionally congested, the job doesn't run immediately on the queue.
We run on Vapor and, given how the queues work, we cannot set up a separate queue that is invoked immediately or is given a higher priority.
The job we want to run is time-sensitive and so needs to run ASAP.
We use Sidecar for a couple things already and thought this could be a solution as we could asynchronously execute the Sidecar function and then continue on and have this run immediately in the background.

From some initial thinking, this would probably be best strung up with SNS and SQS whereby we notify a topic and then subscribe and invoke the Sidecar function.
We're happy to look at drafting a PR that would add this functionality, but before doing so, we wanted to see if this was something that you'd consider adding to Sidecar, and if so, whether there is a different approach that you might take.
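
For reference, the Lambda API itself already supports fire-and-forget delivery through the "Event" invocation type. Here is a minimal sketch using the AWS SDK for PHP directly, outside of Sidecar's own API; the region and function name are placeholders.

use Aws\Lambda\LambdaClient;

$client = new LambdaClient([
    'version' => 'latest',
    'region' => 'us-east-1', // placeholder
]);

// 'Event' tells Lambda to queue the invocation and return immediately,
// so PHP does not wait for the function to finish.
$client->invoke([
    'FunctionName' => 'SC-App-production-Sidecar-SomeFunction', // placeholder
    'InvocationType' => 'Event',
    'Payload' => json_encode(['foo' => 'bar']),
]);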

Layers + node_modules issue

I might be doing this wrong, but it seems that when I use @sparticuz/chrome-aws-lambda (or any layer, for that matter) it negates everything inside my node_modules directory.

For example, I have these two lines of code:

const chromium = require("@sparticuz/chrome-aws-lambda");
const { addExtra } = require('puppeteer-extra')

and this layers() method on my Sidecar function:

public function layers()
{
    return [
        'arn:aws:lambda:us-west-1:764866452798:layer:chrome-aws-lambda:31'
    ];
}

The lambda function can find "chrome-aws-lambda" using the layer but gives me this error when trying to find "puppeteer-extra"

If I get rid of the layer it will find the "puppeteer-extra"

Any thoughts on this or am I doing it completely wrong?

Suggestion: Could sidecar-deployment-user have more restricted permissions?

Was just looking at the inline policy attached to the IAM user, and noted that while the S3 permissions are limited to sidecar-provisioned resources, the Lambda permissions are wide open.

Wouldn't it be a good idea to limit these to e.g. Lambda functions with the "SC-" prefix?

Not sure what "states:" is for, however. I looked up Step Functions but found none provisioned on the account.

{
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": [
        "arn:aws:s3:::sidecar-*",
        "arn:aws:s3:::sidecar-*/*"
    ]
},
{
    "Effect": "Allow",
    "Action": [
        "lambda:*",
        "states:*"
    ],
    "Resource": "*"
}
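
For context, one way the wide-open statement might be narrowed to Sidecar-prefixed functions is sketched below. This is an untested sketch, and the states:* question would still need an answer.

{
    "Effect": "Allow",
    "Action": "lambda:*",
    "Resource": "arn:aws:lambda:*:*:function:SC-*"
}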

Add Logs Command

We need to add a sidecar:logs command to pull logs from CloudWatch Logs. I've already added the CloudWatchLogs client, just need to write the command.
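
For anyone who needs logs before that command lands, here is a rough sketch of the underlying SDK call. It assumes Lambda's default /aws/lambda/<function-name> log group convention; the region and function name are placeholders.

use Aws\CloudWatchLogs\CloudWatchLogsClient;

$logs = new CloudWatchLogsClient([
    'version' => 'latest',
    'region' => 'us-east-1', // placeholder
]);

// Pull recent log events for a deployed function.
$result = $logs->filterLogEvents([
    'logGroupName' => '/aws/lambda/SC-App-production-Sidecar-SomeFunction', // placeholder
    'limit' => 50,
]);

foreach ($result['events'] as $event) {
    echo $event['message'];
}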

Support for cloudflare workers

I don't know how hard it would be to support this, or if this is even in the scope of this project.
It would be nice to have support for Cloudflare Workers. They have a quite generous free tier, which could lead more newcomers to start picking up this approach.

TypeError when deploying function when using SIDECAR_MEMORY or SIDECAR_TIMEOUT

I'm running into an issue when deploying my functions to AWS when using a custom SIDECAR_MEMORY or SIDECAR_TIMEOUT value. It seems the AWS SDK requires MemorySize and Timeout to be integers, not strings.

I could solve this either by defining memory() or timeout() on my LambdaFunction classes, or by casting the values in the parent Hammerstone\Sidecar\LambdaFunction to integers using (int) or the int return type.

AWS HTTP error: Client error: `PUT https://lambda.eu-central-1.amazonaws.com/xxx/configuration` resulted in a `400 Bad Request` response:
{"Message":"STRING_VALUE can not be converted to an Integer"}
 SerializationException (client): STRING_VALUE can not be converted to an Integer - {"Message":"STRING_VALUE can not be converted to an Integer"}

Would you be open to a PR to cast the values of memory() and timeout() to integers? (Can't remember if PHP 7.2 already supported the int return type 😅 )

Or could this be solved in a different place in the package?
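
For anyone hitting this before a fix ships, the workaround described above looks roughly like this on a LambdaFunction subclass (a sketch; the 512 and 30 fallbacks are arbitrary):

public function memory()
{
    // Cast to int so the AWS SDK serializes MemorySize as an integer.
    return (int) env('SIDECAR_MEMORY', 512);
}

public function timeout()
{
    // Same cast for Timeout.
    return (int) env('SIDECAR_TIMEOUT', 30);
}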

Content of .env

# ...
SIDECAR_MEMORY=1024

Console Output

php artisan sidecar:deploy --activate
[Sidecar] Deploying App\Sidecar\MyFunction to Lambda as `SC-xxx-MyFunction`.
          ↳ Environment: local
          ↳ Runtime: nodejs10.x
          ↳ Function already exists, potentially updating code and configuration.
          ↳ Packaging files for deployment.
          ↳ Package unchanged. Reusing s3://xxx.

   Aws\Lambda\Exception\LambdaException

  Error executing "UpdateFunctionConfiguration" on "https://lambda.eu-central-1.amazonaws.com/xxx/functions/SC-xxx/configuration"; AWS HTTP error: Client error: `PUT https://lambda.eu-central-1.amazonaws.com/xxx/functions/SC-xxx/configuration` resulted in a `400 Bad Request` response:
{"Message":"STRING_VALUE can not be converted to an Integer"}
 SerializationException (client): STRING_VALUE can not be converted to an Integer - {"Message":"STRING_VALUE can not be converted to an Integer"}

  at vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:195
    191▕         $parts['request'] = $request;
    192▕         $parts['connection_error'] = !empty($err['connection_error']);
    193▕         $parts['transfer_stats'] = $stats;
    194▕
  ➜ 195▕         return new $this->exceptionClass(
    196▕             sprintf(
    197▕                 'Error executing "%s" on "%s"; %s',
    198▕                 $command->getName(),
    199▕                 $request->getUri(),

      +41 vendor frames
  42  artisan:37
      Illuminate\Foundation\Console\Kernel::handle(Object(Symfony\Component\Console\Input\ArgvInput), Object(Symfony\Component\Console\Output\ConsoleOutput))

Custom Runtimes

Hello, just wanted to make a recommendation. It would be awesome if we could build custom runtimes, so that we could build a Lambda function for, say, PHP instead of using Node.js or Python. In other words, what I want to suggest is maybe also building a runtime for PHP.

Infinite activation on Laravel Vapor

Hi @aarondfrancis!

Something strange occurs when the activate command is running during a deployment in Laravel Vapor. When I run this command locally, it's super fast, but on Vapor it hangs indefinitely until the process is killed after a couple of minutes. Even when forcing the production environment locally, everything goes fine.


My vapor.yaml is like:

build:
 - ...
 - 'php artisan sidecar:deploy --env=production'
deploy:
 - ...
 - 'php artisan sidecar:activate --env=production'

Laravel Vapor does not show any errors but the deployment just 'fails'. The only thing AWS CloudWatch tells me in the vapor-production-cli-log is:
Fatal error: Uncaught Symfony\Component\Process\Exception\ProcessSignaledException: The process has been signaled with signal "11". in /var/task/vendor/symfony/process/Process.php:434

Any idea what I'm doing wrong or where else I can look? Or did I just find a bug?

Thanks!

Jeffrey

Unable to deploy through Github Actions + Vapor

Hello!

Our CI goes through GH Actions, and we have set the proper repo secrets to validate it wasn't on our end.

InvalidArgumentException 

  Missing required client configuration options: 

region: (string)

  A "region" configuration value is required for the "lambda" service
  (e.g., "us-west-2"). A list of available public regions and endpoints can be
  found at http://docs.aws.amazon.com/general/latest/gr/rande.html.

  at vendor/aws/aws-sdk-php/src/ClientResolver.php:406
    402▕             $missing[] = $this->getArgMessage($k, $args, true);
    403▕         }
    404▕         $msg = "Missing required client configuration options: \n\n";
    405▕         $msg .= implode("\n\n", $missing);
  ➜ 406▕         throw new IAE($msg);
    407▕     }
    408▕ 
    409▕     public static function _apply_retries($value, array &$args, HandlerList $list)
    410▕     {

      +26 vendor frames 
  27  artisan:37
      Illuminate\Foundation\Console\Kernel::handle()

This is the vapor.yml:

build:
    - 'composer install --classmap-authoritative'
    - 'php artisan event:cache'
    - 'php artisan route:clear'
    - 'php artisan view:clear'
    - 'composer dump-autoload'
    - 'npm install'
    - 'npm run dev && npm run admin:dev'
    - 'php artisan sidecar:deploy --env=canary'
    - 'rm -rf node_modules'
deploy:
    - 'php artisan migrate --force'
    - 'php artisan config:clear'
    - 'php artisan cache:clear'
    - 'php artisan config:cache'
    - 'php artisan sidecar:activate'
    - 'php artisan devops:post-deploy'

Currently running v0.3 + PHP8

Appreciate all the help!

Arm Support

Hey,

I wanted to check if you would be open to a PR adding support for arm64 lambdas in addition to the default x86_64?

[Question/Bug] Function Warming does not appear to be working

Hi, thanks for the work on this package.

I have a function deployed and it's working great. However, the warming does not appear to work.

It has a warmingConfig of:

return new WarmingConfig(2);

I have the warming command in my schedule, like so:

$schedule->command('sidecar:warm')->everyFiveMinutes();

I have also manually run the warm command, and also run the helper through tinker. No errors.

However, I can't see any requests coming through to my function for warming. If I trigger the function myself with a real payload, I see the request and it works fine. But obviously the first request takes much longer because of a cold start (12s versus 3s), hence the desire for warming.

Re-reading the documentation, I don't believe I have missed anything, but I also cannot find or raise any errors or exceptions from the warming function, so I'm not sure which stage is failing.

Any ideas?

Caching

If a function is invoked and

  1. the package hasn't changed and
  2. the payload hasn't changed

then the developer should be given the option to not hit Lambda and just return the cached result.
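
Until something like this ships in the package, a rough userland approximation keyed on the payload is possible. In this sketch MyFunction is a hypothetical Sidecar function, and invalidating the cache when the deployed package changes is left out entirely.

use Illuminate\Support\Facades\Cache;

$payload = ['text' => 'PHP to JS and Back Again!'];

// Cache for an hour, keyed on the payload. Depending on what the result
// object holds, you may want to cache a primitive (e.g. its body) rather
// than the whole object.
$result = Cache::remember(
    'sidecar:my-function:'.md5(json_encode($payload)),
    now()->addHour(),
    fn () => MyFunction::execute($payload)
);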

Support for Cloudflare Workers

I'm not sure if this is on the roadmap or not, but considering you support Lambda, and Cloudflare Workers does the same thing in JS/Rust, I think supporting it could be a good addition to provide an alternative to running code on Amazon's infrastructure.

This package is heavily Lambda dependent and wasn't made to support multiple providers.

Would a PR to support Cloudflare Workers be accepted or is the scope of this package set on AWS Lambda only?

SettledResult::parseLogs does not handle multi-line log entries

If my sidecar function calls:

console.log(`Result: ${ JSON.stringify(result, null, 2) }`);

Then parseInfoLine fails on line 262:

$parts = explode("\t", $line);

if (count($parts) === 1) {
    return [
        'timestamp' => now()->timestamp,
        'level' => 'UNKNOWN',
        'body' => $parts[0]
    ];
}

$body = $parts[3];

I honestly don't have a great solution for this right now—if I come up with something I'll submit a PR. For others who are running into this issue, the solution for now is to just avoid any multi-line log entries.

404 region is null from one deploying previously working

Hi there,

After a regular deployment that worked well, I got an Invoke Error for the last execution.

Any clues?

The error:

{
   "errorType":"NoSuchKey",
   "errorMessage":"The specified key does not exist.",
   "code":"NoSuchKey",
   "message":"The specified key does not exist.",
   "region":null,
   "time":"2022-10-17T16:32:48.782Z",
   "requestId":"G08H07FWS6TBNRZC",
   "extendedRequestId":"nz9DV0QxfZ1R5BY1L79ArTiZ+UwJ/qgDOx8Yp9eBhAODFPDMY+NjC1UaDW4bYtxg2HiUMekxdK31UmrLly+A/w==",
   "statusCode":404,
   "retryable":false,
   "retryDelay":82.75801602502443,
   "stack":[
      "NoSuchKey: The specified key does not exist.",
      "    at Request.extractError (/var/task/node_modules/aws-sdk/lib/services/s3.js:711:35)",
      "    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:106:20)",
      "    at Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:78:10)",
      "    at Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:686:14)",
      "    at Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)",
      "    at AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)",
      "    at /var/task/node_modules/aws-sdk/lib/state_machine.js:26:10",
      "    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)",
      "    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:688:12)",
      "    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:116:18)"
   ]
}

Refactoring to RemoteFunction, RemoteProcess, CloudProcess or CloudThread package

Hi, I was thinking about the possibilities of this package and remote execution in general. The idea of this package is defined in the readme.md; however, I think a smaller package could be created that just executes code on a remote cloud function provider (and handles deployments in a non-verbose way).

The concept for this could be named a "process", a "thread", or a "remote function", which is executed on various cloud function providers or environments (it could support drivers, with each driver using its own SDK, such as the AWS SDK or Azure SDK). Such a package could be framework-agnostic. Sidecar could then use it internally to handle execution and deployment.

Java Spring supports similar concept with adapters for AWS Lambda, Microsoft Azure, Apache OpenWhisk - https://spring.io/projects/spring-cloud-function

What do you think?

Allow for tagging

We have a requirement to add tagging (AWS tags) to sort of label our lambda functions. I think this is mostly for billing purposes, as different teams create lambdas and the bill needs to go to different places even though it's one big AWS account, but it might be useful to have tagging for other purposes as well.
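
For reference, the underlying Lambda API already supports this via TagResource. A sketch of the raw SDK call is below, independent of whatever shape a Sidecar-level API might take; the region, ARN, and tags are placeholders.

use Aws\Lambda\LambdaClient;

$client = new LambdaClient([
    'version' => 'latest',
    'region' => 'us-east-1', // placeholder
]);

// Attach cost-allocation tags to an existing function.
$client->tagResource([
    'Resource' => 'arn:aws:lambda:us-east-1:123456789012:function:SC-App-production-Sidecar-SomeFunction', // placeholder
    'Tags' => [
        'team' => 'billing',
        'project' => 'reports',
    ],
]);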

cURL error 77: error setting certificate verify locations

Hi - I've been getting the below odd exception very sporadically (once every few hundred invocations of this lambda). For now this is a bit of a placeholder issue for others to search out, as a) it's not really reproducible yet, and b) I'm not even sure it's related to this package.

If no-one else pops up with similar issues, or if I find no other info, I'll close this in a few weeks. Below is the full exception:

   "message": "Error executing \"Invoke\" on \"https://lambda.ap-southeast-2.
amazonaws.com/2015-03-31/functions/SC-XXXXXX-production-Sidecar-Sharp
ResizeImageLambda%3Aactive/invocations\"; AWS HTTP error: cURL error 77: 
error setting certificate verify locations:\n  CAfile: /opt/lib/curl/cert.pem\n  
CApath: none (see https://curl.haxx.se/libcurl/c/libcurl-errors.html) for 
https://lambda.ap-southeast-2.amazonaws.com/2015-03-31/functions/SC-
XXXXXX-production-Sidecar-SharpResizeImageLambda%3Aactive/invocations",
   "code": 0,
   "file": "/tmp/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:195",
   "trace": [
       "/tmp/vendor/aws/aws-sdk-php/src/WrappedHttpHandler.php:97",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:204",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:169",
       "/tmp/vendor/guzzlehttp/promises/src/RejectedPromise.php:42",
       "/tmp/vendor/guzzlehttp/promises/src/TaskQueue.php:48",
       "/tmp/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php:158",
       "/tmp/vendor/guzzlehttp/guzzle/src/Handler/CurlMultiHandler.php:183",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:248",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:224",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:269",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:226",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:269",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:226",
       "/tmp/vendor/guzzlehttp/promises/src/Promise.php:62",
       "/tmp/vendor/hammerstone/sidecar/src/Results/PendingResult.php:48",
       "/tmp/vendor/hammerstone/sidecar/src/Manager.php:157",
       "/tmp/vendor/hammerstone/sidecar/src/Manager.php:156",
       ```

Constant 409 update in progress bug

We're facing a very strange issue which is constantly breaking our deploys.

We have to manually delete all the Lambdas before trying to deploy; otherwise we encounter the following:

Error executing "UpdateFunctionCode" on "https://lambda.***.amazona  
  ws.com/2015-03-31/functions/SC-Appname-staging-Sidecar-Dacast-CreateLivestrea  
  m/code"; AWS HTTP error: Client error: `PUT https://lambda.***.amazon  
  aws.com/2015-03-31/functions/SC-Appname-staging-Sidecar-Dacast-CreateLivestre  
  am/code` resulted in a `409 Conflict` response:                              
  {"Type":"User","message":"The operation cannot be performed at this time. A  
  n update is in progress for resource: arn:aws (truncated...)                 
   ResourceConflictException (client): The operation cannot be performed at t  
  his time. An update is in progress for resource: arn:aws:lambda:***:6  
  88009524528:function:SC-Appname-staging-Sidecar-Dacast-CreateLivestream - {"T  
  ype":"User","message":"The operation cannot be performed at this time. An u  
  pdate is in progress for resource: arn:aws:lambda:***:688009524528:fu  
  nction:SC-Appname-staging-Sidecar-Dacast-CreateLivestream"}   

We're a bit lost here; is there anything we're doing wrong? We haven't changed our Sidecar implementation in the last 3 months; it just started happening 4 days ago and it's becoming a hair puller.

Any guidance?

Support for Lambda environment variable configuration

Would be great if Sidecar could support adding environment variable configs to lambda functions.

I was using a LibreOffice layer which required me to add an environment variable: FONTCONFIG_PATH=/opt/etc/fonts.
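
Until Sidecar exposes this, the Lambda API itself accepts environment variables through UpdateFunctionConfiguration. A sketch with the raw SDK is below (region and function name are placeholders); note that it would need to be re-run after each deploy if the deploy resets the configuration.

use Aws\Lambda\LambdaClient;

$client = new LambdaClient([
    'version' => 'latest',
    'region' => 'us-east-1', // placeholder
]);

// Set the environment variable the LibreOffice layer needs.
$client->updateFunctionConfiguration([
    'FunctionName' => 'SC-App-production-Sidecar-ConvertDocument', // placeholder
    'Environment' => [
        'Variables' => [
            'FONTCONFIG_PATH' => '/opt/etc/fonts',
        ],
    ],
]);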

[ASK] Using AWS Lambda Layers upon deployment

Hi, how do I configure AWS Lambda Layers?

I've added the layer manually in the Lambda function, but it always gets emptied on each deployment (php artisan sidecar:deploy).

I've read the docs and dug into the source, but still couldn't figure it out.

Is it at least possible to deploy the changes in the code only, without emptying the Layers that I've set manually?

Thank you very much for this awesome package btw!

Laravel 9 Support?

I have created a PR to add L9 support.

I feel it is a bit naive but I didn't want to point out a problem without having a go at the solution.

Use or discard at your discretion.

Cheers.

Two lambda functions in one image

According to this issue: aws/aws-sam-cli#2576 (which was solved), I'm trying to have 2 Lambda functions with only one image.

I succeeded with the SAM CLI without any problem; it created 2 functions and one image.

Then I tried to use the imageUri with Sidecar and Package::CONTAINER_HANDLER.

It creates one single function that throws "Handler not set", I'm assuming because no handler is specified in the Dockerfile but in the template instead (as is recommended in the issue above).

My question is: does Sidecar support that and I just haven't found the solution? Or does it simply not, meaning I should use 2 images with 2 Sidecar classes instead?

Deployment doesn't detect new container image build

I'm running a Sidecar function that uses a container image on ECR, and Sidecar doesn't seem to be able to detect that I've built and pushed up a new version of the container.

php artisan sidecar:deploy --activate tells me "Function code and configuration are unchanged. Not updating anything." and then reuses the latest active version of my Lambda instead of creating/aliasing a new one.

The ImageUri hasn't changed because I'm using the latest tag.

Am I missing a step? Should I be tagging the new Docker image build with a date or something, instead of latest, so the Image URI changes?

Error executing "ListVersionsByFunction": Environment name repeated twice in lambda name

Hi - I'm sure I'm doing something wrong, but not quite sure what. I've been writing/testing lambdas locally successfully, and I'm now attempting to push to my staging env.

The CI flow is a GitHub Action (PHP based) that runs the vapor deploy staging command, with the deploy/activate split between build & deploy stages as suggested in the docs.

The error I get at the 'activate' stage is:

Error executing "ListVersionsByFunction" on "https://lambda.***.  
  amazonaws.com/2015-03-31/functions/SC-Variant-Staging-staging-Sidecar-Sharp  
  ResizeImageLambda/versions?MaxItems=100"; AWS HTTP error: Client error: `GE  
  T https://lambda.***.amazonaws.com/2015-03-31/functions/SC-Varia  
  nt-Staging-staging-Sidecar-SharpResizeImageLambda/versions?MaxItems=100` re  
  sulted in a `404 Not Found

Looking at the lambdas deployed, I can see why the 404 occurs, as the only deployed staging version is 'SC-Variant-staging-Sidecar-SharpResizeImageLambda' (note the single, lowercase 'staging').

PHP runtime and Queueable code

Hi, is it possible to use custom PHP runtime such as bref.sh?

Then Queueable PHP code or Jobs (with packages) could be run in Lambdas.

Possibly, the whole Laravel app could be deployed to Lambda with e.g. sidecar:deploy --clone, and all existing classes would be available, including Jobs; Lambda would act like a PHP process.

Or another interesting use case: a helper runInLambda(function () { /* any code */ }) that would run the function in Lambda in the context of the cloned application.

Pre-warm

Once warming is in, there should be an option to warm functions between deploying and activating, so that most requests hit a warm container.

Better Error Messages for Env Mismatches

When a function is deployed in staging and then executed in prod, we should catch the 404 error and look to see if there is a similarly named function with the env swapped out. If there is, we should throw a better error.

Support for testing/mocking

It would be awesome if the package supported the ability to fake/mock a Sidecar execution, ie:

/** @test */
public function it_calls_sidecar_function()
{
    // Given
    $mockedResponse = [
        'hello' => 'world'
    ];
    SomeFunction::fake($mockedResponse);

    // When
    $this->get('/call-function')->assertSuccessful();

    // Then
    $expectedParams = [
        'foo' => 'bar'
    ];
    SomeFunction::assertExecuted($expectedParams);
}

The idea here would be to keep the faking pattern similar to the Laravel first-party fakes (Http, Queue, etc):

  • You call fake() with a mocked response for the lambda
  • You call your controller/command/job, etc that will execute the lambda
  • You call an assertion method to assert that the lambda was invoked with the given set of parameters

How To Deploy And Execute A Native Linux Binary

Hi! Thank you for this great package, which takes away so much of the annoying configuration work normally needed to create lambda functions manually!

I would like to ask some questions about something I thought could be realized by using lambda and the help of this package.

I have a compiled Linux binary that I can execute on my local Linux 64-bit machine (Linux Mint 20.3 Cinnamon 64Bit; Kernel 5.4.0-100-generic). It gets a file as input and produces a different output file inside the folder it is executed in.

Is there some way I could deploy a tool like this to Lambda by using sidecar?

There are some questions that come to my mind:
First of all, regardless of the specific file handling part of the program in question, I have not yet understood which kind of runtime I would need to choose and how I would actually execute such a native Linux binary from inside the Lambda function.
The second question is about the file handling bit. As I said, at the moment this program requires the files to be locally accessible via paths, and it needs write access to the disk to store the result file.
So what possibilities do I have to provide the files for the program to work?
Normally, in my production setup, the input and output files would actually be stored inside an S3 bucket, so is there any way I can provide them so the tool can use them?

How would you implement something like this?
I know for sure that some parts of this issue go way beyond the responsibilities of sidecar. I don't expect anybody to explain exactly how to do it, but maybe to give a short overview of the options and pitfalls and hopefully some further resources I can use to figure it out myself.

Thanks in advance to everyone who leaves some information!

Clean up old functions

Lambda has a 75 GB limit on function storage, which includes all aliases. We need to be responsible for cleaning up old versions of our functions so that we don't cause problems for people.
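
A rough sketch of what that sweep could look like against the raw SDK is below. The region and function name are placeholders, and a real implementation must not delete any version that an alias (such as "active") still points to.

use Aws\Lambda\LambdaClient;

$client = new LambdaClient([
    'version' => 'latest',
    'region' => 'us-east-1', // placeholder
]);

$versions = $client->listVersionsByFunction([
    'FunctionName' => 'SC-App-production-Sidecar-SomeFunction', // placeholder
    'MaxItems' => 50,
])['Versions'];

// Keep $LATEST and the two most recent numbered versions, delete the rest.
$numbered = array_values(array_filter($versions, fn ($v) => $v['Version'] !== '$LATEST'));
$stale = array_slice($numbered, 0, -2);

foreach ($stale as $version) {
    $client->deleteFunction([
        'FunctionName' => $version['FunctionName'],
        'Qualifier' => $version['Version'],
    ]);
}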

Can sidecar be used to run dynamic Node.js code on lambda ?

Hey,

I'm trying to figure out a way to give a user a code editor in my platform to insert Node code (including imported packages) and then run this user-defined code with Sidecar: actually deploy the custom code as a (Node.js-based) function, run it, and then delete it.

Think about all the cloud editor platforms or automation platforms that let you run any Node.js code as part of a workflow, for example; that use case exactly.

Is there any way (even with tweaks) to make that happen with Sidecar?

Thanks

Support Layer Creation

Right now we support referencing layers that already exist, but it might be nice to support creating layers, in the same way we create packages for functions.

Error: Cannot find module 'image'. Running Laravel on localhost.

Can this only be used with Vapor?

I've set up a local Laravel app and followed your code implementation example, and I don't get any errors when running the command php artisan sidecar:deploy --activate.

But on /ogimage, I got an error saying:
Lambda Execution Exception for App\Sidecar\OgImage: "Error: Cannot find module 'image' Require stack: - /var/runtime/UserFunction.js - /var/runtime/index.js. [TRACE] Runtime.ImportModuleError: Error: Cannot find module 'image' Require stack: - /var/runtime/UserFunction.js".

Add Zip Sweeping

Right now ZIP archives will just pile up on S3. They are totally unnecessary after they are deployed to Lambda, so we need to add a sweep to clear old artifacts.

5 functions that take 30 seconds, split or keep together?

I'm building a scraper that scrapes 5 different websites for information, and now I'm wondering:

  • Is it better to create 5 separate Sidecar functions (so 5 Lambda functions) and update the client on the progress after each step,
  • Or create 1 big function that performs all 5 actions, which means I won't be able to update the client on the progress as easily

What is the best practice?
