
aws-lambda-image

Join the chat at https://gitter.im/aws-lambda-image

An AWS Lambda function to resize/reduce images automatically. When an image is put into an AWS S3 bucket, this package resizes/reduces it and puts the result back to S3.

Requirements

  • Node.js ( AWS Lambda runtime version 8.10 or later )

Important Notice

Starting with the nodejs10.x runtime, AWS Lambda no longer bundles ImageMagick and related image libraries.

https://forums.aws.amazon.com/thread.jspa?messageID=906619&tstart=0

Therefore, if you deploy with the nodejs10.x runtime (which this project prefers and uses as the default), you need to install an AWS Lambda Layer alongside this function. This project can handle that automatically; see LAYERS for details.

Preparation

Clone this repository and install dependencies:

git clone [email protected]:ysugimoto/aws-lambda-image.git
cd aws-lambda-image
npm install .

When uploading to AWS Lambda, the project bundles only the files it needs; no dev dependencies are included.

Configuration

The configuration file is named config.json and lives in the project root. It is a copy of the example file config.json.sample and looks more or less like this:

{
  "bucket": "your-destination-bucket",
  "backup": {
      "directory": "./original"
  },
  "reduce": {
      "directory": "./reduced",
      "prefix": "reduced-",
      "quality": 90,
      "acl": "public-read",
      "cacheControl": "public, max-age=31536000"
  },
  "resizes": [
    {
      "size": 300,
      "directory": "./resized/small",
      "prefix": "resized-",
      "cacheControl": null
    },
    {
      "size": 450,
      "directory": "./resized/medium",
      "suffix": "_medium"
    },
    {
      "size": "600x600^",
      "gravity": "Center",
      "crop": "600x600",
      "directory": "./resized/cropped-to-square"
    },
    {
      "size": 600,
      "directory": "./resized/600-jpeg",
      "format": "jpg",
      "background": "white"
    },
    {
      "size": 900,
      "directory": "./resized/large",
      "quality": 90
    }
  ]
}

Configuration Parameters

| name | field | type | description |
|------|-------|------|-------------|
| bucket | - | String | Destination S3 bucket name for processed images. If not supplied, the bucket of the event source is used. |
| jpegOptimizer | - | String | JPEG optimizer to use: mozjpeg (default) or jpegoptim ( only JPG ). |
| acl | - | String | Permission (ACL) of the S3 object. See the AWS ACL documentation. |
| cacheControl | - | String | Cache-Control of the S3 object. If not specified, defaults to the original image's Cache-Control. |
| keepExtension | - | Boolean | Global setting for keeping the original extension. If true, the original file extension is kept; otherwise a normalized extension is used (e.g. JPG, jpeg -> jpg). |
| backup | - | Object | Settings for backing up the original file. |
| | bucket | String | Destination bucket override. If not supplied, the top-level bucket setting is used. |
| | directory | String | Image directory path. Supports relative and absolute paths. More details in DIRECTORY.md. |
| | template | Object | Map representing pattern substitution pairs. More details in DIRECTORY.md. |
| | prefix | String | Filename prefix to prepend, if supplied. |
| | suffix | String | Filename suffix to append, if supplied. |
| | acl | String | Permission (ACL) of the S3 object. See the AWS ACL documentation. |
| | cacheControl | String | Cache-Control of the S3 object. If not specified, defaults to the original image's Cache-Control. |
| | keepExtension | Boolean | If true, the original file extension is kept; otherwise a normalized extension is used (e.g. JPG, jpeg -> jpg). |
| | move | Boolean | If true, the original uploaded file is deleted from the bucket after completion. |
| reduce | - | Object | Reduce settings with the following fields. |
| | quality | Number | Quality of the reduced image ( only JPG ). |
| | jpegOptimizer | String | JPEG optimizer to use: mozjpeg (default) or jpegoptim ( only JPG ). |
| | bucket | String | Destination bucket override. If not supplied, the top-level bucket setting is used. |
| | directory | String | Image directory path. Supports relative and absolute paths. More details in DIRECTORY.md. |
| | template | Object | Map representing pattern substitution pairs. More details in DIRECTORY.md. |
| | prefix | String | Filename prefix to prepend, if supplied. |
| | suffix | String | Filename suffix to append, if supplied. |
| | acl | String | Permission (ACL) of the S3 object. See the AWS ACL documentation. |
| | cacheControl | String | Cache-Control of the S3 object. If not specified, defaults to the original image's Cache-Control. |
| | keepExtension | Boolean | If true, the original file extension is kept; otherwise a normalized extension is used (e.g. JPG, jpeg -> jpg). |
| resizes | - | Array | Resize settings, a list of entries with the following fields. |
| | size | String | Image dimensions. See the ImageMagick geometry documentation. |
| | format | String | Image format override. If not supplied, the image is left in its original format. |
| | crop | String | Dimensions to crop the image to. See the ImageMagick crop documentation. |
| | gravity | String | Controls how size and crop are anchored. See the ImageMagick gravity documentation. |
| | quality | Number | Quality of the resized image ( forces format JPG ). |
| | jpegOptimizer | String | JPEG optimizer to use: mozjpeg (default) or jpegoptim ( only JPG ). |
| | orientation | Boolean | Auto-orient the image if true. |
| | bucket | String | Destination bucket override. If not supplied, the top-level bucket setting is used. |
| | directory | String | Image directory path. Supports relative and absolute paths. More details in DIRECTORY.md. |
| | template | Object | Map representing pattern substitution pairs. More details in DIRECTORY.md. |
| | prefix | String | Filename prefix to prepend, if supplied. |
| | suffix | String | Filename suffix to append, if supplied. |
| | acl | String | Permission (ACL) of the S3 object. See the AWS ACL documentation. |
| | cacheControl | String | Cache-Control of the S3 object. If not specified, defaults to the original image's Cache-Control. |
| | keepExtension | Boolean | If true, the original file extension is kept; otherwise a normalized extension is used (e.g. JPG, jpeg -> jpg). |
| optimizers | - | Object | Overrides for each optimizer's command arguments. |
| | pngquant | Array | Pngquant command arguments. Default is ["--speed=1", "256"]. |
| | jpegoptim | Array | Jpegoptim command arguments. Default is ["-s", "--all-progressive"]. |
| | mozjpeg | Array | Mozjpeg command arguments. Default is ["-optimize", "-progressive"]. |
| | gifsicle | Array | Gifsicle command arguments. Default is ["--optimize"]. |

Note that the optimizers option completely replaces the command arguments of the corresponding optimizer; if you define these settings, the optimizer's default behaviour no longer applies.
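
For instance, a minimal sketch of overriding just the pngquant arguments might look like the following; the values are illustrative placeholders, not recommendations, and the rest of the configuration is omitted:

{
  "optimizers": {
    "pngquant": ["--speed=3", "128"]
  }
}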

Testing Configuration

If you want to check how your configuration will work, you can use:

npm run test-config

Installation

Setup

To use the automated deployment scripts you will need to have aws-cli installed and configured.

Deployment scripts are pre-configured to use some default values for the Lambda configuration. If you want to change any of them, just use:

npm config set aws-lambda-image:profile default
npm config set aws-lambda-image:region eu-west-1
npm config set aws-lambda-image:memory 1280
npm config set aws-lambda-image:timeout 5
npm config set aws-lambda-image:name lambda-function-name
npm config set aws-lambda-image:role lambda-execution-role

Note that aws-lambda-image:name and aws-lambda-image:role are optional. If you want to change the Lambda function name or execution role, run the commands above before deploying.

Also make sure the AWS Lambda Layer is installed in your account/region. See LAYERS for instructions.

Deployment

The command below deploys the Lambda function to AWS and sets up the required roles and policies.

npm run deploy

Notice: Claudia.js has some limitations in its support for policies, which could lead to Access Denied issues when processing images from one bucket and saving them to another, so we have decided to introduce support for custom policies.

Custom policies

Policies that should be installed together with the Lambda function are stored in the policies/ directory. They include a policy that grants access to all buckets, which prevents the Access Denied errors described above. If you have any security-related concerns, feel free to change the:

"Resource": [
    "*"
]

in the policies/s3-bucket-full-access.json to something more restrictive, like:

"Resource": [
    "arn:aws:s3:::destination-bucket-name/*"
]

Just keep in mind that you need to make those changes before deploying.
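
For orientation only, here is a hedged sketch of what a more restrictive policy document could look like; the action list and bucket names are illustrative assumptions, not the exact contents of policies/s3-bucket-full-access.json:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::source-bucket-name/*",
                "arn:aws:s3:::destination-bucket-name/*"
            ]
        }
    ]
}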

Adding S3 event handlers

To complete the installation you need to take one more step: installing the S3 bucket event handler, which sends notifications about uploaded images directly to your Lambda function.

npm run add-s3-handler --s3_bucket="your-bucket-name" --s3_prefix="directory/" --s3_suffix=".jpg"

You can install multiple handlers per bucket. To add a handler for PNG files, re-run the command above with a different suffix, e.g.:

npm run add-s3-handler --s3_bucket="your-bucket-name" --s3_prefix="directory/" --s3_suffix=".png"

Adding SNS message handlers

In addition, you can set up an SNS message handler in case you would like to process S3 events via an SNS topic.

npm run add-sns-handler --sns_topic="arn:of:SNS:topic"

Updating

To update the Lambda function with your latest code, just use the command below. The script will build a new package and automatically publish it to AWS.

npm run update

More

For more scripts look into package.json.

Complete / Failed hooks

You can hook into the success/error result of the resize/reduce/backup process in index.js. ImageProcessor::run returns a Promise, so you can run your own code:

processor.run(config)
.then(function(proceedImages) {

    // Success case:
    // proceedImages is a list of ImageData instances based on your configuration

    /* your code here */

    // notify lambda
    context.succeed("OK, " + proceedImages.length + " images have been processed.");
})
.catch(function(messages) {

    // Failed case:
    // messages is a list of error message strings

    /* your code here */

    // notify lambda
    context.fail("Woops, image process failed: " + messages);
});

Image resize

  • ImageMagick (bundled with older Lambda runtimes; for nodejs10.x and later, provided via the Lambda Layer described in the Important Notice above)

Image reduce

  • pngquant ( PNG )
  • jpegoptim / mozjpeg ( JPEG )
  • gifsicle ( GIF )

License

MIT License.

Author

Yoshiaki Sugimoto

Image credits

Thanks for testing fixture images:


aws-lambda-image's Issues

Utilising lambda outside of S3 Events

Hello again,

I've been thinking about my specific usage needs and am considering calling this function from SNS instead of via the S3 Event.

Can you see any downside to doing this, or anything significant that I would need to change for this to work (other than it getting the S3 object details from the event.Records)?

If anyone has already successfully done this, it'd be cool to hear how.

windows compatibility

Please remove "make" and use package.json / Node.js scripts; I think you can bundle the zip file with Node really easily.

Fails on latest aws-sdk 2.3.6

When uploading this to AWS Lambda and running a 'Test' the following stack trace appears:

{
  "errorMessage": "Cannot find module 'jmespath'",
  "errorType": "Error",
  "stackTrace": [
    "Function.Module._load (module.js:276:25)",
    "Module.require (module.js:353:17)",
    "require (internal/module.js:12:17)",
    "Object.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:5:16)",
    "Module._compile (module.js:409:26)",
    "Object.Module._extensions..js (module.js:416:10)",
    "Module.load (module.js:343:32)",
    "Function.Module._load (module.js:300:12)",
    "Module.require (module.js:353:17)"
  ]
}

User-Defined Metadata

Is it possible to preserve user-defined metadata (such as "x-amz-meta-" headers) on the newly generated images?

slow performance

Hi, I am seeing quite slow performance when resizing pictures; is this normal?

Resizing FullHD image to 2 sizes takes almost 7 seconds. Is this a normal behaviour? Thank you.


Assertions in tests are not working

There is a problem when using t.is(...) and other t.* assertions in callback methods, in this case inside gm(destPath).size((err, out) => { ... });. There is a word or two about something similar in the AVA common-pitfalls documentation, but I was unable to get pify() to work together with gm.

Here is an example of a test that should fail but passes:

test("Resize JPEG with cjpeg", async t => {
    const fixture = await fsP.readFile(`${__dirname}/fixture/fixture.jpg`);
    const destPath = `${__dirname}/fixture/fixture_resized_1.jpg`;
    const resizer = new ImageResizer({size: 200});
    const image   = new ImageData("fixture/fixture.jpg", "fixture", fixture);

    const resized = await resizer.exec(image);
    await fsP.writeFile(destPath, resized.data);
    gm(destPath).size((err, out) => {
        if ( err ) {
            t.fail(err);
        } else {
            t.is(out.width, 12345); // invalid image width
        }
        fs.unlinkSync(destPath);
    });
});

You can replace this line with t.is(out.width, 12345); to verify this in local env.
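
One hedged workaround, not taken from this project's test suite but a sketch assuming AVA's support for awaited promises in async tests, is to wrap the gm callback in a Promise so the assertion runs before the test completes:

test("Resize JPEG with cjpeg", async t => {
    const fixture = await fsP.readFile(`${__dirname}/fixture/fixture.jpg`);
    const destPath = `${__dirname}/fixture/fixture_resized_1.jpg`;
    const resizer = new ImageResizer({size: 200});
    const image   = new ImageData("fixture/fixture.jpg", "fixture", fixture);

    const resized = await resizer.exec(image);
    await fsP.writeFile(destPath, resized.data);

    // Promisify the gm callback so AVA waits for it before finishing the test
    const out = await new Promise((resolve, reject) => {
        gm(destPath).size((err, value) => err ? reject(err) : resolve(value));
    });
    t.is(out.width, 200); // the assertion now actually runs; a wrong expected value fails the test
    fs.unlinkSync(destPath);
});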

Using aws-lambda-image with different regions

Hello there I'm struggling to use this function since my bucket is on the eu-west-1 region (I'm from Portugal so...).

I tried to update the aws-sdk config using aws.config.update({region: 'eu-west-1'}); in the S3.js file, just after your aws require, but had no luck whatsoever... Lambda still gives me { "errorMessage": "Woops, image process failed: S3 getObject failed: PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint. Please send all future requests to this endpoint." }

Any help you could get me is extremely appreciated!

Edit: I also tried setting region: "eu-west-1" in your const client var definition... again with no luck...
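
For reference, here is a hedged sketch of the kind of change being attempted, assuming the aws-sdk v2 S3 client used in libs/S3.js (the client name comes from the issue; the exact constructor options in this project may differ):

const aws = require("aws-sdk");

// Pin the client to the bucket's region so getObject is not redirected
const client = new aws.S3({
    apiVersion: "2006-03-01",
    region: "eu-west-1",
    signatureVersion: "v4"
});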

Replace original files

Hi,

Just wondering if it's possible to overwrite the original images rather than putting them into a different bucket?

Any instructions would be hugely appreciated!

GIF support

Can you make it to work with gif images too? :)

make configtest results in error

cp config.json.sample config.json && make configtest
make configtest

==========================================
  AWS-Lambda-Image Configuration Checker
==========================================

Checking configuration format...  Error!
Unexpected token {

/home/ubuntu/workspace/aws-lambda-image/bin/configtest:42
var bucket = ( config.bucket ) ? config.bucket : "";
                     ^
TypeError: Cannot read property 'bucket' of undefined

"errorMessage": "Process exited before completing request"

I've followed the instructions, have uploaded the .zip file to AWS lambda, and have tried to upload a sample image on S3 to see if it got resized. That did not work. No images were resized. So I then tried to test the lambda function via the "Lambda Management" console, which resulted in this error:

"errorMessage": "Process exited before completing request"

When setting up the lambda function are there any specific settings that I have to choose or are required for your code to work? I've chosen the correct version of node.

Here is my config.json file:

{ "bucket": "example.guide", "reduce": { "directory": "images" }, "resizes": [ { "size": 300, "directory": "med" } ] }

So basically, if I upload an image to example.guide/images/ I'd expect your script to resize the image and place it in example.guide/images/med or example.guide/med. It does neither.

Proper use of Promises

Hi!
Great, great work on this! This is the only lib for creating thumbnails on lambda I encountered so far!
It really just works, I am going to build a whole new system on top of it.

But I noticed some small issues in your code, for example the usage of promises.
You see, promises are monads, so they are able to flatten themselves when returned from a then() callback.
So, instead of:

   resizer.exec(imageData)
       .then(function(resizedImage) {
            var reducer = new ImageReducer(option);

            reducer.exec(resizedImage)
            .then(function(reducedImage) {
                resolve(reducedImage);
            })
            .catch(function(message) {
                reject(message);
            });
        })
        .catch(function(message) {
            reject(message);
        });

you can write just:

resizer.exec(imageData)
       .then(function(resizedImage) {
            var reducer = new ImageReducer(option);
            return reducer.exec(resizedImage)
        })
        .then(function(reducedImage) {
           resolve(reducedImage);
        })
        .catch(function(message) {
            reject(message);
        });

So you get rid of nesting and everything still works.
You don't even need to wrap these calls into another Promise since all of these methods return Promises too.
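
Carrying that advice one step further, here is a small sketch (assuming the ImageResizer and ImageReducer interfaces shown above) that returns the chain directly instead of wrapping it in another Promise:

function resizeAndReduce(imageData, option) {
    const resizer = new ImageResizer(option);

    // Returning the promise chain flattens it; no explicit resolve/reject wrapper is needed
    return resizer.exec(imageData)
        .then(function(resizedImage) {
            const reducer = new ImageReducer(option);
            return reducer.exec(resizedImage);
        });
}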

Unable to resize images above 50kb

I get the following error on Lambda when I resize an image larger than 50 KB (around 500 KB), although it runs fine locally:

{
  "errorMessage": "Woops, image process failed: ImageMagick errError: Command failed: convert: Expected 8192 bytes; found 6062 bytes `-' @ warning/png.c/MagickPNGWarningHandler/1777.\nconvert: Read Exception `-' @ error/png.c/MagickPNGErrorHandler/1751.\nconvert: corrupt image `-' @ error/png.c/ReadPNGImage/3789.\nconvert: no images defined `png:-' @ error/convert.c/ConvertImageCommand/3046.\n"
}

Too much confused I am.

Extensible configuration

I am looking for an AWS Lambda solution which would both crop (#38) and resize images server-side, optionally outputting them in an image format different from the input (#45). Here's a suggestion for an extensible, "command-based" configuration. I have seen similar configuration styles used in internal/proprietary projects.

Note: this might be a crazy idea -- perhaps it's best to just pass all arguments directly to imagemagick?

What do you think?

Benefits

  • Can be used to chain/pipe the result from one command (resize, crop, grayscale etcetera) to one or several other commands (reduce, save etcetera).
  • Commands are easy to add to the configuration, using objects { "command": "resize", "size": "..." }.
  • Subcommands are easy to configure:
{
    "command": "grayscale",
    "commands": [
        {
            "command": "flip",
            "direction": "horizontal" 
        },
        {
            "command": "...",
            "commands": [
                ...
            ]
        } 
    ] 
}
  • New commands can easily be added, such as conversion to grayscale etcetera.
  • Supports per-command-scoped options.
  • Commands and options can (should?) be mapped to that of imagemagick, even though it can be rather complex. A shortcut would be to just pass all arguments directly to imagemagick. Might there be a project already doing similar command/option mapping? Needs research.
  • Future versions could define "composite commands" (custom commands, macros) which can then be reused:
{
    "name": "my-crazy-image-transformation",
    "commands": [
        {
            "command": "rotate",
            ...
        },
        {
            "command": "draw-text",
            ...
        }
    ]
}

Drawbacks

  • It might be tricky to implement a flexible chaining/piping solution.
  • It might be completely overkill for this project.

Example

A simple example showing the input image being saved to two versions, after some commands/transformations.

  • The first version has been cropped to 50% of the width and 25% of the height, converted to image/jpeg before file size is reduced/optimized and then saved to the first-cropped-then-reduced directory.
  • The second version was cropped in the same step (thus saving computation time), but then also resized to max 640 by 480 pixels before being saved to the directory first-cropped-then-resized as an image/gif image.
{
    "bucket": "your-destination-bucket",
    "commands": [
        {
            "command": "crop",
            "gravity": "southeast",
            "geometry": "50x25%",
            "commands": [
                {
                    "command": "resize",
                    "geometry": "640x480",
                    "commands": [
                        {
                            "command": "convert",
                            "type": "image/gif",
                            "commands": [
                                {
                                    "command": "save",
                                    "directory": "first-cropped-then-resized"
                                }
                            ]
                        }
                    ]
                },
                {
                    "command": "convert",
                    "type": "image/jpeg",
                    "commands": [
                        {
                            "command": "reduce",
                            "quality": "90",
                            "commands": [
                                {
                                    "command": "save",
                                    "directory": "first-cropped-then-reduced"
                                }
                            ]
                        }
                    ]
                }
            ]
        }
    ]
}

File too big???

I am finding that jpg file sizes greater than 5GB fail. The error I am getting is this:

Error: read ECONNRESET at errnoException (net.js:905:11) at Pipe.onread (net.js:559:19)

I have memory set to 1025 and timeout is 1:30. The job is only using:

Duration: 5531.38 ms Billed, Duration: 5600 ms Memory Size: 1024 MB, Max Memory Used: 160 MB

Does anyone know if there is a 5GB limit, and if so, is there a way to get around this?

Support file type conversion

Including an option to convert between file formats, e.g. to normalise jpeg and png to jpg would be useful in cases where there should be a canonical image, e.g. for user profile pictures.

Error when executing make lambda

make lambda
Factory package files...
cp: build/index.js: No such file or directory
make: *** [lambda] Error 1

Is this an issue, or am I doing something wrong?

Lambda fails to run

I've set the lambda to point to a specific bucket and resize the image 3 times.
Everytime I upload an image to the monitored folder, I get the following error message:

Unable to import module 'index': Error
at Function.Module._resolveFilename (module.js:325:15)
at Function.Module._load (module.js:276:25)
at Module.require (module.js:353:17)
at require (internal/module.js:12:17)
at Object. (/var/task/libs/ImageResizer.js:3:19)
at Module._compile (module.js:409:26)
at Object.Module._extensions..js (module.js:416:10)
at Module.load (module.js:343:32)
at Function.Module._load (module.js:300:12)
at Module.require (module.js:353:17)

My configuration is as follows (with bucket name replaced):

{
  "bucket": "bucketnamehere",
  "resizes": [
    {
      "size": 100,
      "directory": "p/w_100"
    },
    {
      "size": 192,
      "directory": "p/w_192"
    },
    {
      "size": 730,
      "directory": "p/w_730"
    }
  ]
}

I also asked a co-worker to set up Node.js from scratch and regenerate the Lambda zip file, but it gives the same error message.

infinite loop

If you don't specify a different target bucket, this script ends up in an infinite loop. I found a solution where DynamoDB was used to avoid this case, but I thought that adding metadata like "x-amz-meta-optimized" with a value of 1 might also be a solution. The script would then skip files with this meta tag.

The downside of this:

  • The script still runs twice (this also happens when using a DB)
  • Images carry an additional meta tag

What do you think about this?
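
Here is a hedged sketch of the metadata-based guard proposed above, assuming the aws-sdk v2 S3 client and a hypothetical processImage() helper (neither is part of this project as-is):

const aws = require("aws-sdk");
const s3 = new aws.S3({ apiVersion: "2006-03-01" });

async function handleObject(bucket, key) {
    // Skip objects that were already written by this function
    const head = await s3.headObject({ Bucket: bucket, Key: key }).promise();
    if (head.Metadata && head.Metadata.optimized === "1") {
        return;
    }

    const body = await processImage(bucket, key); // hypothetical helper

    // Tag the output so a second trigger becomes a no-op
    await s3.putObject({
        Bucket: bucket,
        Key: key,
        Body: body,
        Metadata: { optimized: "1" }
    }).promise();
}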

Support crop functionality

Hello there guys,
Great work you've done here, but I want to ask whether there is crop functionality in the project, or should I implement it and make a pull request for it?

Permissions

I find that after processing, my images are no longer publicly viewable. I solved this by adding the following before line 51 of libs/s3.js:
ACL: 'public-read',

It would be great to add a line for configuring upload permissions to config.json so that no modification of other files is necessary.

Ability to use defined ImageMagick geometry within config.json

I'm attempting to set a max width/height and retain aspect ratio.

"size": "144x144",

With the size defined it only uses calculated width, with height exceeding the defined size parameter.

Proposed config.json example;

"size": "100x200"          # width = 100, height = 200, retains aspect ratio

"size": "300x"             # width = 300, height = proportional

"size": "x300"             # width = proportional, height = 300

Resize image when larger than specified size

Hello
I'm trying to resize images only if their original width is larger than 1200px and to do that I'm using this option:

"resizes": [
    {
      "size": "1200x>",
      "directory": "./optimized/hero",
      "quality": 90,
      "format": "jpg"
    }
  ]

which (as far as I understood) should resize the image only if it's larger than 1200px. However, I see that smaller images are also enlarged to fit that width. Is my configuration wrong?
One more question: what happens to images which do not match the size condition? Will they still be transferred to the destination folder and only compressed (90%)?
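
As a hedged aside based on standard ImageMagick geometry flags rather than this project's own documentation: the > flag means "only shrink images larger than the given size", so a configuration along these lines may behave closer to what is intended (untested sketch):

"resizes": [
    {
      "size": "1200>",
      "directory": "./optimized/hero",
      "quality": 90,
      "format": "jpg"
    }
  ]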

PNG images not working

Hello
I can't make the function process uploaded PNG images: it works for jpg, but for png files I get the following error when I run the test:

START RequestId: 8d7fd305-a8ff-11e6-b6bb-ed7d87bbb1aa Version: $LATEST
2016-11-12T17:43:50.679Z	8d7fd305-a8ff-11e6-b6bb-ed7d87bbb1aa	Error: write EPIPE
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at WriteWrap.afterWrite (net.js:763:14)
END RequestId: 8d7fd305-a8ff-11e6-b6bb-ed7d87bbb1aa
REPORT RequestId: 8d7fd305-a8ff-11e6-b6bb-ed7d87bbb1aa	Duration: 8153.01 ms	Billed Duration: 8200 ms 	Memory Size: 256 MB	Max Memory Used: 137 MB	
Process exited before completing request

Here's my configuration

  "bucket": "my-bucket",
  "reduce": {
      "directory": "./reduced",
      "quality": 90
  },
  "resizes": [
    {
      "size": 300,
      "directory": "./resized/small"
    },
    {
      "size": "600x600^",
      "gravity": "Center",
      "crop": "600x600",
      "directory": "./resized/cropped-to-square"
    },
    {
      "size": 600,
      "directory": "./resized/600-jpeg",
      "format": "jpg",
      "background": "white"
    },
    {
      "size": 900,
      "directory": "./resized/large",
      "quality": 90
    }
  ]
}

Any hint?

Async Issue

How do you handle the case where a user requests an image that hasn't completed processing yet?

NOT ABLE TO COMPRESS .PNG IMAGES

The first time I run the code with an event having the key "any.png", it shows me this error:

2016-05-11T06:11:43.882Z 3bcedd16-173f-11e6-9ba0-156ea9cbe5f2 Error: read ECONNRESET
at exports._errnoException (util.js:870:11)
at Pipe.onread (net.js:544:26)

If I run the same code a second time, it gives me this error:

2016-05-11T06:12:20.081Z 511c8244-173f-11e6-b69d-77060bb8b8e4 {"errorMessage":"Woops, image process failed: ImageMagick errError: Command failed: convert: Expected 8192 bytes; found 6194 bytes -' @ warning/png.c/MagickPNGWarningHandler/1777. convert: Read Exception-' @ error/png.c/MagickPNGErrorHandler/1751.
convert: corrupt image -' @ error/png.c/ReadPNGImage/3789. convert: no images definedpng:-' @ error/convert.c/ConvertImageCommand/3046.
"}

Please look into the matter and tell me how I can remove this error.

Original image's width less than resize width

I can't guarantee that the original image's width is larger than the resize width.

  • original image's width : 800 < resize image width : 1000
  • "errorMessage": "Woops, image process failed: ImageMagick errError: Command failed: "

So I want to copy the original image to the thumbnail bucket when the original image's width is less than the resize width.

How can I do this?

Format Conversion Not Working

I am using the version with the latest commit (adding the format conversion), so as I understand it, using the configuration below, if I upload a .png I should get a .jpg in the resized/600-jpeg folder.

However what I am getting is a .png with the same name and with 0 bytes. I have tried various original sizes but always get this result.

The other resizes (which do not contain the format key) are working properly.

Can anyone suggest what might be wrong?

{
  "reduce": {
    "directory": "reduced",
    "quality": 90
  },
  "resizes": [
    {
      "size": 300,
      "directory": "resized/small"
    },
    {
      "size": "600x600^",
      "gravity": "Center",
      "crop": "600x600",
      "directory": "resized/cropped-to-square"
    },
    {
      "size": 600,
      "directory": "resized/600-jpeg",
      "format": "jpg",
      "background": "white"
    },
    {
      "size": 900,
      "directory": "resized/large",
      "quality": 90
    }
  ]
}

Stream yields empty buffer

Hi,

I'm having trouble getting this to work. When I set it up on AWS, the function timed out, so I ran the tests locally and the final 5 tests are failing with the error Stream yields empty buffer. I assume this is related to why the function times out on AWS, but I don't know why?

Preserve Relative Path

If an image is uploaded in foo/bar/image.png then it would be best if the resized images were added to /resized/small/foo/bar/image.png instead of the current behaviour of putting it in resized/small/image.png. This would allow apps that use custom paths to work better.

Auto orientation bug

Everything works great, but if I convert a portrait image I get it back as landscape. Should the auto-orient parameter for ImageMagick be used?
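
For reference, the configuration table above lists an orientation flag on resize entries; here is a minimal sketch of enabling it (size and directory are illustrative):

"resizes": [
    {
      "size": 300,
      "directory": "./resized/small",
      "orientation": true
    }
  ]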

What is the purpose of the 'bucket' value in the Config?

Hello!

Thanks for all the hard work on this, hoping it'll save me some time!

I'm playing around with this lambda to see if it'll suit my needs for processing image sizes for multiple images.

I'm going to be doing this across multiple buckets using the same Lambda so don't really want to specify a bucket in the config.

I noticed that it gets the s3Object from the event request, and the bucket name is stored in there (s3Object.bucket.name), so I'm just wondering why we need to specify the bucket in the config rather than just getting it from the event?

Currently I'm using a fork and just doing config.set('bucket', s3Object.bucket.name) at the top of the handler function.

Thanks!

ImageMagick fails on Lambda but not locally

When I test locally it successfully pulls from bucket A and puts my resized images in bucket B. I have looked all through the stack and my local seems to match up to Lambda but when I run it on lambda (same example.json file) it fails out with this error:

{
  "errorMessage": "Woops, image process failed: ImageMagick errError: Command failed: convert: Expected 8192 bytes; found 5848 bytes `-' @ warning/png.c/MagickPNGWarningHandler/1777.\nconvert: Read Exception `-' @ error/png.c/MagickPNGErrorHandler/1751.\nconvert: corrupt image `-' @ error/png.c/ReadPNGImage/3789.\nconvert: no images defined `png:-' @ error/convert.c/ConvertImageCommand/3046.\n"
}

I checked the byte size locally, on s3, and on Lambda and they are all the same even though ImageMagick sends a corrupt image error. Any ideas?

config.json:

{
  "bucket": "example-bucket",
  "resizes": [
    {
      "width":  110,
      "height": 145,
      "directory": "small"
    },
    {
      "width":  151,
      "height": 200,
      "directory": "medium"
    },
    {
      "width":  450,
      "height": 350,
      "directory": "large"
    }
  ]
}

reduce and resize

I need some clarification about the config file.

Given this configuration:

{
  "acl": "public-read",
  "reduce": {
      "quality": 60,
      "suffix": "_reduced"
  },
  "resizes": [
    {
      "size": 200,
      "suffix": "_resized"
    }
  ]
}

The intention is that for an image MyImage.jpg the Lambda both reduces and resizes it, so I should obtain two new images in the same folder:

  • MyImage_reduced.jpg
  • MyImage_resized.jpg

Is this correct?
Because right now I get only the reduced image.

thx

gif images not working

We are using the current version 6bac5b5

png and jpg images are working as expected.

We are using 4 different sizes.

However, gif images are being saved as 0 byte files at all 4 sizes.

Any ideas?

How can I set image quality (mozjpeg)?

I have tried to add "-quality 80" to the args array in libs/optimizers/Mozjpeg.js, but then I get the following errors (CloudWatch log):

2016-07-19T08:37:27.282Z da078f59-4d8b-11e6-bf3c-edbd1dbf43c6 Error: write EPIPE at Object.exports._errnoException (util.js:870:11) at exports._exceptionWithHostPort (util.js:893:20) at WriteWrap.afterWrite (net.js:763:14)
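
Based on the configuration table above, JPEG quality can be set per reduce (or resize) entry instead of editing libs/optimizers/Mozjpeg.js; a minimal sketch, with illustrative values:

{
  "reduce": {
    "directory": "./reduced",
    "quality": 80,
    "jpegOptimizer": "mozjpeg"
  }
}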

"OK, 1 images were processed." but not saving images

I configured the function with the proper permissions and trigger (event type: ObjectCreated). CloudWatch shows that images are processed perfectly, but inside the bucket there is no trace of the processed files.

verbose logging option

In AWS Lambda, it is difficult to see the processing status, so I added a verbose option and put some logging points in this program.

EPIPE write errors

Keep getting an EPIPE error, my files are pretty large, approx 2-3MB:

2016-05-14T05:17:33.860Z 23ca5c10-1993-11e6-96f0-4980e923c2ed Error: write EPIPE at Object.exports._errnoException (util.js:870:11) at exports._exceptionWithHostPort (util.js:893:20) at WriteWrap.afterWrite (net.js:763:14)
