multer-s3's People

Contributors

anderberge, briandonahue, franciscotfmc, igneosaur, jacobtomlinson, jblz, linkgoron, linusu, lukechilds, rdpacheco, reckonyd, satyajeetcmm, subinsebastien, takaitra, thebergamo, wgminer, woss


multer-s3's Issues

Client-side encryption for upload?

Hi,

Thanks for your code. Very inspiring for my project.

I would like to send quite big files to S3, so I am using multipart upload, but as you know the AWS S3 JS SDK does not manage client-side encryption 😒

The use case is: a browser uploads a file to my server unencrypted, and while I receive the streams of this upload, I send them to S3 with s3.upload (sample here: https://devcenter.heroku.com/articles/s3-upload-node). That way, I don't have to wait for the whole upload to finish before uploading to S3, and I can still use client-side encryption to my own server.

Do you think your approach/code could work for this use case (multipart upload)? For instance, if I override the s3.upload method to add client-side encryption params.

I am worried that S3 will not be able to recompose the file from all the different parts if each of them has been encrypted on its own.

I know it is not completely clear, but if someone understands my need, your helpful advice is welcome πŸ˜‰

Generating Hash before Upload to S3

Just being curious: is it at all possible to hook into multer-s3 before the upload and access the locally cached file?

What I'd like to do:

  1. Hook in before the upload
  2. Calculate a hash
  3. Calculate a phash
  4. Check with the DB whether this file was already uploaded
  5. If yes, stop uploading to S3 and return the file reference instead

I really appreciate your help to get a quick dive into multer and multer-s3!
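One way to sketch the steps above (this is not a multer-s3 feature): buffer the upload with multer's own memoryStorage, hash it, consult the DB, and only then push to S3 by hand. The bucket name and the isAlreadyUploaded() lookup are hypothetical placeholders.

```javascript
// Sketch only, not the multer-s3 API: hash first, upload second.
// 'some-bucket' and isAlreadyUploaded() are hypothetical.
const crypto = require('crypto')
const multer = require('multer')
const aws = require('aws-sdk')

const s3 = new aws.S3()
const upload = multer({ storage: multer.memoryStorage() })

app.post('/upload', upload.single('file'), async (req, res) => {
  // Step 2: hash the buffered file before anything touches S3
  const hash = crypto.createHash('sha256').update(req.file.buffer).digest('hex')

  // Step 4/5: hypothetical DB lookup; short-circuit on a duplicate
  const existing = await isAlreadyUploaded(hash)
  if (existing) return res.json({ key: existing.key, deduplicated: true })

  // Only new files reach S3
  const result = await s3.upload({
    Bucket: 'some-bucket',
    Key: hash,
    Body: req.file.buffer
  }).promise()

  res.json({ key: result.Key, location: result.Location })
})
```

The trade-off is that buffering in memory gives up multer-s3's streaming behavior, so this sketch only suits modest file sizes.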

Update file-type version

Hi there,

The dependency file-type is set to ^3.3.0, but the latest release is currently 7.2.0.

I tested in my fork and it all works as expected.

Thanks

Delete files

Is there support for deleting files?
If yes, can you please give an example?
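multer-s3 itself only handles uploads; a plausible approach is to delete through the aws-sdk directly, using the key that multer-s3 put on req.file.key at upload time. The bucket name and key below are placeholders.

```javascript
// Deleting is a plain aws-sdk call; multer-s3 is not involved.
// Bucket and Key values are placeholders.
const aws = require('aws-sdk')
const s3 = new aws.S3()

s3.deleteObject({
  Bucket: 'some-bucket',
  Key: 'uploads/1512345678_photo.png' // e.g. a previously stored req.file.key
}, function (err) {
  if (err) return console.error('delete failed', err)
  console.log('deleted')
})
```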

Avoid uploading broken files

Is there a way to prevent a broken file from being uploaded?

I see in my API some files that are only half uploaded; the other part of the image is just a gray block. Maybe some bytes are lost, I guess.

How can I avoid this behavior? In other cases we use an MD5 checksum, but when we're working with streams this may be a complicated thing.

Any ideas?

Allow dynamic s3 instance/config?

I need to be able to change S3 credentials depending on the request. If you make the s3 option a function, then a new instance can be created and returned if required. Would that work?
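multer-s3 takes a concrete S3 instance up front, so one workaround (a sketch, not a built-in feature) is to build the middleware lazily per request. getCredentialsFor() is a hypothetical lookup for per-request credentials.

```javascript
// Sketch: construct a fresh multer instance per request.
// getCredentialsFor() is hypothetical.
const aws = require('aws-sdk')
const multer = require('multer')
const multerS3 = require('multer-s3')

function uploadFor (req) {
  const creds = getCredentialsFor(req) // hypothetical per-request lookup
  const s3 = new aws.S3({
    accessKeyId: creds.accessKeyId,
    secretAccessKey: creds.secretAccessKey
  })
  return multer({
    storage: multerS3({
      s3: s3,
      bucket: creds.bucket,
      key: function (req, file, cb) { cb(null, Date.now().toString()) }
    })
  })
}

app.post('/upload', function (req, res, next) {
  uploadFor(req).single('file')(req, res, next)
})
```

Building a new S3 client per request has some overhead, but it keeps each request's credentials isolated.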

File Name

I have the following code to save files in S3:

var express = require('express'),
    aws = require('aws-sdk'),
    bodyParser = require('body-parser'),
    multer = require('multer'),
    multerS3 = require('multer-s3');

aws.config.update({
    secretAccessKey: 'XXXXXXXXXX',
    accessKeyId: 'XXXXXXXXXX'
});

var app = express(),
    s3 = new aws.S3();

app.use(bodyParser.json());

var upload = multer({
    storage: multerS3({
        s3: s3,
        bucket: 'XXXXXXX',
        key: function (req, file, cb) {
            console.log(file);
            cb(null, Date.now()+file.originalname); 
        }
    })
});
app.post('/upload', upload.any(), function (req, res, next) {
    // Here I want to get the File Name
     res.send("Uploaded!");
 });

I want to get the file name/key at the route-handler level once the upload is successful. I will be saving the files with unique IDs; once saved, I will store the key in a relational DB to keep track of the files. How can I get the file name [Date.now()+file.originalname]?
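multer-s3 writes the generated key (and the full URL) back onto each uploaded file object, so the handler can read it after the upload. A sketch against the handler above:

```javascript
app.post('/upload', upload.any(), function (req, res, next) {
  // With upload.any() the uploaded files arrive as an array on req.files;
  // multer-s3 decorates each entry with the S3 key and URL.
  req.files.forEach(function (file) {
    console.log(file.key)      // whatever your key function produced
    console.log(file.location) // full S3 URL
    // persist file.key to the relational DB here
  })
  res.send('Uploaded!')
})
```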

var s3 = new aws.S3({ /* ... */ })

Where do I put my S3 access key and secret?

"accessKeyId": "xxxxxxxxxxxxxxxx",
"secretAccessKey": "xxxxxxxxxxxxxx"

I get this error when I try to send something to my S3 bucket:

AWS Missing credentials

Uploaded file

Why is the file not opening in the browser? It only downloads from Amazon.
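A likely cause is that the object was stored as application/octet-stream, which browsers download rather than render. Recent versions of multer-s3 accept contentType and contentDisposition options; a sketch (bucket name is a placeholder):

```javascript
var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket', // placeholder
    contentType: multerS3.AUTO_CONTENT_TYPE, // detect the real MIME type
    contentDisposition: 'inline',            // hint the browser to display, not download
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})
```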

Content after upload

It's just an idea, but what do you think about sending the buffer across the middlewares?

Maybe something like req.file.buffer, as we have in MemoryStorage.

In my case this is useful, but I don't know if it is useful enough to keep in the public API of multer-s3.

Can't set Cache-Control header

As far as I can tell there is no way to set a Cache-Control header from the MulterS3 parameters. I really really don't wanna make a separate request to S3 just to update this setting πŸ˜†

Should be pretty easy to bake this into MulterS3, would you accept a PR for this?

I'm thinking an optional parameter, something like this:

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket',
    cacheControl: 'max-age=31536000',
    acl: 'public-read',
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
});

Is it possible to run some kind of function for each uploaded file when posting multiple files at once?

I have the following for uploading files into S3:

const upload = extension => multer({
  storage: multerS3({
    s3,
    bucket,
    acl,
    metadata(req, file, cb) {
      cb(null, { fieldName: file.fieldname });
    },
    ...
  }),
});

router.post(
  '/upload',
  upload('jpg').array('files', 5),
  (req, res) => {
    console.log(req.files);
    ...

Is it possible to run some kind of function for each single uploading file (to write some information about the file into the DB before multer begins uploading the next file from array('files', 5))?
(For example, if writing the data into the DB fails, multer needs to abort uploading the rest of the files.)
Or do I need to upload each file separately?

Mark file public

I don't see an option to pass an ACL param when storing a file. Is this missing, or can it be done somehow?
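Recent versions of multer-s3 accept an acl option that is forwarded to S3. A sketch (bucket name is a placeholder):

```javascript
var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket', // placeholder
    acl: 'public-read',    // forwarded as the ACL param on the S3 upload
    key: function (req, file, cb) { cb(null, Date.now().toString()) }
  })
})
```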

Overwriting

There should be a way to disable file overwriting. Maybe something like:

var upload = multer({
  storage: s3({
    bucket: 'some-bucket',
    secretAccessKey: 'some secret',
    accessKeyId: 'some key',
    region: 'us-east-1',
    overwrite: false,
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

S3 Object Upload Cancellation

Is there any way to cancel the file upload? Say if the request is unauthorized and we don't want the file from the request to be uploaded to S3.
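multer's own fileFilter runs before the storage engine touches the file, so one sketch is to reject there; even better is to run real authentication middleware before multer so the body is never parsed. The authorization check below is a hypothetical stand-in.

```javascript
var upload = multer({
  storage: multerS3({ /* s3, bucket, key as usual */ }),
  fileFilter: function (req, file, cb) {
    // hypothetical check; real auth middleware before multer is preferable
    if (!req.headers.authorization) {
      return cb(null, false) // skip this file: nothing is sent to S3
    }
    cb(null, true)
  }
})
```

Passing an Error as the first argument to cb instead would fail the whole request rather than silently skipping the file.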

File Extension

Hi, I am using your library with some success, but I was curious what your recommended way to add a file extension is. I tried the key attribute that is passed into the options, but it seems the docs are outdated. Let me know how you normally handle this.

Thanks,

Jordy

It could be ideal to add ACL for the uploads? :)

It would be ideal if users were able to pass in ACL modes during uploads :). Something like this:

  var upload = that.s3.upload({
    Bucket: that.options.bucket,
    Key: filePath,
    ACL: 'public-read', // <-- passed via the main options
    ContentType: contentType,
    Body: (_stream || file.stream)
  })

ref: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/S3.html#listObjects-property

It's something we could pass in via a function. I can submit a pull request with the change, if that's OK?

Cheers,
Jeremy

Rendering via an S3 URL

I'm trying to render an image via an S3 URL in the following form:

"'https://s3.amazonaws.com/bucket-name/'+ req.file.key"

where req.file.key is my storage's dirname attribute; however, I keep getting the following error when I navigate to said URL: "This XML file does not appear to have any style information associated with it. The document tree is shown below." I'm not sure if this is an issue with multer itself or with this module for storing to my S3 bucket. I do indeed have my storage acl set to 'public-read'.

If this helps, logging req.file to the console returns an object with all the proper metadata, but the size is "undefined". If you could let me know what you think, Duncan, that'd be great!

npm is not installing correct version.

Duncan,
FYI - I noticed that npm is not installing the latest version of your package. The version numbers match but the index.js from npm does not match the index.js from this git repo. Using the version from this repo works great.

Steve

Drop the dependency on s3fs

I think it would be a good improvement to drop the dependency on s3fs and instead use aws-sdk directly. This would save the extra dependencies and give us access to upstream updates quicker.

I would be happy to provide the code if no one beats me to it.

On a related note, it seems like uploading to s3 from multer is very popular. How would you feel about getting some help maintaining this module? I would personally be happy to hop on as a maintainer. Since it's such a small module, I think most of the work would be cutting new releases when dependencies (probably only aws-sdk) update, and answering questions from users.

I'm linusu on npm πŸ˜„

Specify key on endpoint

Hi! Thanks for this useful middleware.
I was looking for the possibility of specifying a key prefix at the endpoint, like:

app.post('/', upload.array('images', 5, 'user_images_prefix'), (req, res) => {})

req.body is empty in key function

When sending a payload with a file and additional text fields, I would expect req.body to contain these text fields, but it just returns an empty object. Is this intentional?

My use-case is I want to construct the file path based on these additional fields.
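multer parses the multipart body in the order the fields were appended, so text fields that come after the file have not been parsed yet when the key function runs. A browser-side sketch of the fix (fileBlob is a placeholder for your File object):

```javascript
// Append text fields before the file so multer has parsed them
// into req.body by the time the key function fires.
let formData = new FormData()
formData.append('userId', '42')   // available in req.body inside key()
formData.append('file', fileBlob) // the file goes last
```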

File param not recognized.

Hello,

I've implemented setup for multer-s3 exactly as you have demonstrated. It looks like file uploading to s3 is working! However, the file object passed to the metadata and key functions does not have a key or location, even though it is successfully uploading.

Ultimately, my goal is to upload the file, but also return the filepath as a json response back down to my client. How would you recommend doing this? Why does the file object seem to be empty, and what are the metadata and key functions even meant to do?

Thanks,
Adam

Default content type wrong

All files show up as application/octet-stream in S3.

This should be set to the actual content type of the file.
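multer-s3 2.x exposes a contentType option for exactly this; a sketch (bucket name is a placeholder):

```javascript
var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'some-bucket', // placeholder
    contentType: multerS3.AUTO_CONTENT_TYPE, // sniff the real type instead of octet-stream
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})
```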

Files uploaded using package completely stored in system memory?

Hey there, I was looking for something like this that would stream a file uploaded through an API request, and I just wanted to validate the functionality of this helper package. I was curious whether the file being uploaded through the request is stored completely in local system memory before uploading to S3, or whether it streams the file to S3 as the API receives it in parts. The reason I am looking for an explicit answer is that I am wondering if I'll be able to use this method of uploading to S3 when the files are a bit larger than simple photos, such as small video clips. If it stores the full video clip before streaming to S3, then I fear that multiple concurrent upload requests would exhaust memory much faster than if each request only held a small chunk of the file at a time.

How to get req.body parameters in multer s3

storage: multerS3({
  s3: s3,
  bucket: 'bucket',
  metadata: function (req, file, cb) {
    cb(null, {
      fieldName: file.fieldname
    });
  },
  key: function (req, file, cb) {
    console.log('req.body', req.params.id); // not getting this
    console.log('req.body', req.body);
    // Not getting the param here that was passed in the API.
    // Need to save the file on S3 at a specific location, i.e. /foldername/filename,
    // but the folder name is not coming through from the API.
    cb(null, file.originalname)
  }
}) }).array('userFile', 1);


Above is the multer-s3 code.


app.post('/saveData', function (req, res, next) {
  upload(req, res, function (err) {
    console.log('err' + err);
    var status = '';
    var result = '';
    var link = '';
    if (err) {
      status = false;
    } else {
      status = true;
    }
    result = {
      "status": status,
      "link": link
    }
  });
  res.send(result);
});


Above is the code that calls multer's upload function. I am passing data to the API (from Angular 2, with Content-Type set to "multipart/form-data") as FormData:

let formData: FormData = new FormData();
formData.append('userFile', file);
formData.append('fileName', fileName);

I need the req.body data from the API (like the folder name and others) so I can put the file in a specific place on S3. I need the req.body data inside multerS3's key: function (req, file, cb) {.

Get response from S3

I used it this way.

var multer = require('multer');
var s3 = require('multer-s3');

var upload = multer({
  storage: s3({
    dirname: 'uploads/photos',
    bucket: 'testing-bucket-yousa',
    secretAccessKey: 'abcd',
    accessKeyId: 'zz',
    region: 'us-east-1'
  })
})

app.post('/admin/uploadImagetoS3', upload.array('file'), function (req, res, next) {
  console.log(res)
})

What does upload.array('file') do?

Please clarify why dirname is required (I don't want to store the data on my local machine), and how do I capture the response from S3 to know whether it was successful or not?

Target folder

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'web-test-2016',
    acl: 'public-read',
    key: function (req, file, cb) {
      cb(null, makeid() + '_' + file.originalname)
      //Date.now().toString()
    }
  })
})

In this case, how can I set the destination folder name? Thanks.
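S3 has no real directories: a "folder" is just a prefix on the object key, so the key function can prepend one. The helper below is hypothetical, mirroring the makeid()-based key above:

```javascript
// A "folder" on S3 is only a key prefix.
function keyWithFolder (folder, originalname) {
  return folder + '/' + Date.now() + '_' + originalname
}

// In the multer-s3 options it would look like:
//   key: function (req, file, cb) { cb(null, keyWithFolder('images', file.originalname)) }
```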

Link after upload

Is there a way to reference the link on the req object after the upload to S3 completes?
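After a successful upload, multer-s3 sets the public URL on the file object, so a handler can read it directly. A sketch assuming a single-file upload middleware named upload:

```javascript
app.post('/upload', upload.single('photo'), function (req, res) {
  // multer-s3 populates req.file.location (full S3 URL) and req.file.key
  res.json({ link: req.file.location, key: req.file.key })
})
```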

Handling Large Files

I am not able to upload large files using multer-s3, and it is not giving me any error either. It just doesn't upload the file, doesn't even enter the callback, and times out. Is there any way to handle uploading large files to an S3 bucket?

I am using it like this:

var uploadSingle = upload.single('uploadFile');

router.post('/uploadVideo', function (req, res, next) {
  uploadSingle(req, res, function (err) {
    // doesn't come here if the file is large
    if (err) {
      // error response: error while uploading module PDF
    } else {
      // handle file upload
      // success response
    }
  });
})

It doesn't enter the callback of uploadSingle.

Drop requirements for aws-sdk data and dirname

As the aws-sdk auto-loads configuration and credentials from the ~/.aws/ directory, and the recommended way of loading credentials is to let the aws-sdk do so, I suggest we drop the requirements for region, secretAccessKey and accessKeyId.

The dirname option also should not be required, as it is not mandatory to upload the file to a directory (though useful for most cases).

What do you think?

  if (!opts.secretAccessKey) throw new Error('secretAccessKey is required')
  if (!opts.accessKeyId) throw new Error('accessKeyId is required')
  if (!opts.region) throw new Error('region is required')
  if (!opts.dirname) throw new Error('dirname is required')

Not at all Working

Hi,

The sample code is not working at all. I've swapped in my credentials and am using an ACL of public-read. Please check the issue.

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'mybucket',
    ACL: 'public-read',
    metadata: function (req, file, cb) {
      cb(null, {
        fieldName: file.fieldname
      });
    },
    limits: {
      fileSize: 10 * 1024 * 1024
    },
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
}).single('photo');

app.post('/upload', function (req, res, next) {
  upload(req, res, function (err) {
    res.send('Successfully uploaded files!')
  });
})

Upload progress event

Files are being uploaded successfully to S3, but I am unable to show any progress bar events in the browser.

What must I do to get progress events?

acl: 'public-read' gives access denied error

Without acl: 'public-read' the files get uploaded, but when I click the file link it says Access Denied.

So I added acl: 'public-read'. It then gives the error

AccessDenied: Access Denied

and the files don't get uploaded.

var upload = this.s3.upload(params) TypeError: this.s3.upload is not a function

aws.config.update({
    secretAccessKey: 'key',
    accessKeyId: 'secret',
});

var s3 = new aws.S3()

var upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'pdsfiles',
    metadata: function (req, file, cb) {
      cb(null, {fieldName: file.fieldname});
    },
    key: function (req, file, cb) {
      cb(null, Date.now().toString())
    }
  })
})

This is what the error looks like:

TypeError: this.s3.upload is not a function
at S3Storage.<anonymous> (/home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:172:26)
at /home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:58:10
at S3Storage.getContentType (/home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:8:5)
at /home/simran/Downloads/NODE-BACKEND/node_modules/multer-s3/index.js:55:13
at end (/home/simran/Downloads/NODE-BACKEND/node_modules/run-parallel/index.js:16:15)
at _combinedTickCallback (internal/process/next_tick.js:73:7)
at process._tickDomainCallback (internal/process/next_tick.js:128:9)

Call:

router.put('/updatePatternGrading/:pattern_number', upload.any(), function (req, res) {
  console.log("request obj after manipulation", req.files);
  callAPI(req, res, fn.bind(apiObj, 'updatePatternGrading'));
})

Why do I get this ?

create new directory

Hi, is it possible to create a new directory in the bucket before uploading, via dirname?

Thanks.

i get this error

Missing credentials in config

Error: connect ENETUNREACH 169.254.169.254:80
    at Object.exports._errnoException (util.js:870:11)
    at exports._exceptionWithHostPort (util.js:893:20)
    at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1061:14)
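A hedged reading of this trace: 169.254.169.254 is the EC2 instance-metadata endpoint, which the aws-sdk queries as a last resort when it finds no credentials locally, so the SDK most likely never saw your keys. One fix sketch (values are placeholders) is to configure credentials before creating the S3 instance:

```javascript
// Configure credentials before constructing the S3 client.
// All values here are placeholders.
var aws = require('aws-sdk')

aws.config.update({
  accessKeyId: 'xxxxxxxxxxxxxxxx',
  secretAccessKey: 'xxxxxxxxxxxxxx',
  region: 'us-east-1'
})

var s3 = new aws.S3()
```

Alternatively, a shared credentials file under ~/.aws/ or environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) achieve the same without hard-coding keys.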

`dirname` is not documented

If I npm install multer-s3, it installs v1.4.1. However, the documentation is for versions of multer-s3 ahead of v1.4.1, where dirname is no longer a required option.

I suggest that while multer-s3 1.4.1 is the default on npm, we bring back all the documentation needed for using that version of the module.

It took me a while to figure out what was going on.

Limit size

Hi,

Is it possible to set a size limit for uploads?

Thkx
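Size limits live on multer itself rather than on the storage engine, via the limits option. A sketch (the storage options are elided):

```javascript
var upload = multer({
  storage: multerS3({ /* s3, bucket, key as usual */ }),
  limits: { fileSize: 5 * 1024 * 1024 } // files over 5 MB fail with a LIMIT_FILE_SIZE error
})
```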
