The fastest way to upload files from a Next.js app to S3.
Visit the Next S3 Upload website for full documentation, install instructions, and usage examples.
Upload files from your Next.js app to S3
Home Page: https://next-s3-upload.codingvalue.com/
Hello!
How do I delete an uploaded file?
Context:
AWS Serverless,
Node v14,
Next.js 12
Similar to issue #32, I'm seeing a false positive from the missingEnvs function in my environment, which is AWS Lambda (deployed by Serverless) using Node 14 and Next 12.
Next S3 Upload: Missing ENVs S3_UPLOAD_KEY, S3_UPLOAD_SECRET, S3_UPLOAD_REGION, S3_UPLOAD_BUCKET
When calling the function, it reports that I am missing all of the environment variables needed to use S3.
I created the test function you recommended on issue #32:
// pages/api/test-missing-env.js
let missingEnvs = () => {
  let keys = [
    'S3_UPLOAD_KEY',
    'S3_UPLOAD_SECRET',
    'S3_UPLOAD_REGION',
    'S3_UPLOAD_BUCKET',
  ];
  return keys.filter(key => !process.env[key]);
};

export default function (req, res) {
  res.status(200).json({ missing: missingEnvs() });
}
This returns an array of missing variables:
[
  'S3_UPLOAD_KEY',
  'S3_UPLOAD_SECRET',
  'S3_UPLOAD_REGION',
  'S3_UPLOAD_BUCKET',
]
Then I created a second test function using a different reference method on the environment variables:
// pages/api/test-again-missing-env.js
let missingEnvs = () => {
  let keys = [];
  if (!process.env.S3_UPLOAD_KEY) keys.push('S3_UPLOAD_KEY');
  if (!process.env.S3_UPLOAD_SECRET) keys.push('S3_UPLOAD_SECRET');
  if (!process.env.S3_UPLOAD_REGION) keys.push('S3_UPLOAD_REGION');
  if (!process.env.S3_UPLOAD_BUCKET) keys.push('S3_UPLOAD_BUCKET');
  return keys;
};
This returns an empty array:
[]
I also ran a test where I wrote out the values on a test endpoint using the second reference method and saw the actual values in my response, so I know they exist and are correct.
S3_UPLOAD_KEY: *****
S3_UPLOAD_SECRET: *****
S3_UPLOAD_REGION: *****
S3_UPLOAD_BUCKET: *****
The problem with the missingEnvs function, and the reason it does not recognize the values, is that it uses square brackets to reference the process.env values.
With Next.js, environment variables are inlined at build time, and process.env is not stored as a real object. You should only reference them as process.env.S3_UPLOAD_KEY, not process.env['S3_UPLOAD_KEY'] or const { S3_UPLOAD_KEY } = process.env.
Because process.env is not actually an object in Next.js, referencing values with bracket notation or destructuring doesn't work in all cases.
A simple change to your function to use the process.env.VARIABLE_NAME form, as in my second test above, will solve my problem, and likely the problem reported in issue #32.
See NOTE on https://nextjs.org/docs/basic-features/environment-variables for further explanation of the process.env variables in Next.js, especially the following note:
Note: In order to keep server-only secrets safe, Next.js replaces process.env.* with the correct values at build time. This means that process.env is not a standard JavaScript object, so you’re not able to use object destructuring. Environment variables must be referenced as e.g. process.env.PUBLISHABLE_KEY, not const { PUBLISHABLE_KEY } = process.env.
More of a feature request: I would like this to be configurable:

let key = `next-s3-uploads/${uuidv4()}/${filename.replace(/\s/g, '-')}`;

where the next-s3-uploads prefix could be passed in or come from process.env.S3_BASE_PATH. Something like that.
Thanks and it is working great.
I want to rename the creds from S3_UPLOAD_SECRET etc. to something more specific to our use case.
Also it might be useful to allow passing through some options to the underlying S3 upload call.
Hi, this repo is amazing! Really great job, it's so helpful for us. I wanted to ask if you could add the possibility to delete things from S3 as well as upload?
Hi, first off thank you for putting this together. If I can get this to work this will be a huge time saver.
I went through the setup with no problems and I've got my app deployed to production, but when I add an image file using the file upload element, I get a 500 error and nothing uploads to S3:
GET https://pura-vida-admin-tools-ca.herokuapp.com/api/s3-upload?filename=sample-upload-image.jpg 500 (Internal Server Error)
VM25:1 Uncaught (in promise) SyntaxError: Unexpected token I in JSON at position 0
It's just a .jpg file. Any idea what the issue could be?
Is there a way to provide file validation on the server-side using this package such as file size, file type, and max files per upload? I see we can allow certain file types via the IAM user policy but I'd like to provide a useful error message response.
Similarly, I'd like to check if the user is logged in. In the custom path name docs there is this example:
// pages/api/s3-upload.js
import { APIRoute } from "next-s3-upload";

export default APIRoute.configure({
  async key(req, filename) {
    let user = await getUserFromAuthToken(req.headers.authorization);
    return `users/${user.id}/${filename}`;
  }
});
However, if the user is not logged in, user.id would be undefined. I'd like to check that user exists and, if not, return an error response, but the API response object isn't available here.
Is there a place to add these checks?
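Not something the library exposes today, but here is a sketch of the kind of server-side gate being asked about. Everything here is an assumption for illustration: validateUpload, the allowed-extension list, and the size limit are made up, and since only the filename reaches key() in the current API, any size metadata would have to be forwarded separately (e.g. as query params).

```javascript
// Hypothetical server-side validation helper; all names and limits
// here are illustrative, not part of next-s3-upload's API.
const ALLOWED_EXTENSIONS = ['.jpg', '.jpeg', '.png', '.gif'];
const MAX_BYTES = 5 * 1024 * 1024; // 5MB, an arbitrary example limit

function validateUpload({ filename, size }) {
  const dot = filename.lastIndexOf('.');
  const ext = dot === -1 ? '' : filename.slice(dot).toLowerCase();
  if (!ALLOWED_EXTENSIONS.includes(ext)) {
    return { ok: false, error: `File type ${ext || '(none)'} not allowed` };
  }
  if (size > MAX_BYTES) {
    return { ok: false, error: 'File exceeds 5MB limit' };
  }
  return { ok: true };
}
```

A check like this (plus the auth lookup) could run at the top of a custom API route handler that returns 400/401 before delegating to the library.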
hey!
Is it possible for people to upload images via copy & paste instead of opening up the file system to provide the image? That would be SO useful!!
Hi I was looking into using this and noticed that you were passing the AccessKeyId, SecretAccessKey, and SessionToken to the frontend.
The method you're using AWS sts getFederationToken is documented here: https://docs.aws.amazon.com/STS/latest/APIReference/API_GetFederationToken.html
The docs say "this call is appropriate in contexts where those credentials can be safely stored, usually in a server-based application"
Can you please explain why you're using this method instead of getting a signed URL from s3?
Great library!
I would like to see this library support Min.io; since it has an S3-compatible API, it shouldn't be too much of a lift to support it.
Thank you!
I was looking at #6 because I wanted to include a user's uuid in the file path for images they upload. This requires an asynchronous call to the database. I'm having trouble setting this up; can you point me in the right direction?
Also thanks for making this package! :)
import { APIRoute } from 'next-s3-upload';
import { v4 as uuidv4 } from 'uuid';
import { currentUser } from './db';

export default async function (req, filename) {
  const token = getCookie(req.headers.cookie, 'token'); // return the cookie named 'token'
  const user = await currentUser(token); // I want to await this
  return APIRoute.configure({
    key() {
      return `image-uploads/${user.ref.id}/${uuidv4()}-${filename.replace(/\s/g, '-')}`;
    }
  });
}
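For what it's worth, the custom-path docs snippet quoted earlier in this thread uses an async key(req, filename) callback directly inside APIRoute.configure, which avoids wrapping configure() in a handler. A rough sketch of that shape, with the database lookup stubbed out (lookupUser is a stand-in for the real getCookie/currentUser logic):

```javascript
// Sketch: do the async work inside key() itself. lookupUser is a stub
// standing in for the real cookie/database lookup.
async function key(req, filename) {
  const user = await lookupUser(req); // the async DB call goes here
  return `image-uploads/${user.id}/${filename.replace(/\s/g, '-')}`;
}

// Stubbed lookup so the shape is runnable in isolation.
async function lookupUser(req) {
  return { id: 'u123' };
}
```

In the real route, this async function would be passed as the key option to APIRoute.configure rather than defined standalone.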
The console always returns the error below every time I try to submit to S3. I need help, please, anyone:
Unhandled Runtime Error
Error: Unsupported body payload object
Call Stack
ManagedUpload.self.fillQueue
node_modules\aws-sdk\lib\s3\managed_upload.js (92:0)
ManagedUpload.send
node_modules\aws-sdk\lib\s3\managed_upload.js (201:0)
eval
node_modules\aws-sdk\lib\util.js (839:0)
new Promise
ManagedUpload.promise
node_modules\aws-sdk\lib\util.js (831:0)
_callee$
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (1032:0)
tryCatch
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (124:0)
Generator.invoke [as _invoke]
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (338:0)
Generator.eval [as next]
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (176:0)
asyncGeneratorStep
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (8:0)
_next
node_modules\next-s3-upload\dist\next-s3-upload.esm.js (30:0)
Hi,
I would like to know if there's support for when you have 2 types of upload:
1.) Uploads of images should go to the /images directory.
2.) Uploads of CSV files should go to the /csvfiles directory.
Is there a way to separate these using a single configuration?
This seemed so promising, but a simple copy and paste of the basic file example doesn't work. When I click "Upload file" and choose an image, there's just no response: nothing in the console, no state change; the page stays as is.
Simply using the demo code:
import { useState } from 'react';
import { useS3Upload } from 'next-s3-upload';

export default function UploadTest() {
  let [imageUrl, setImageUrl] = useState();
  let { FileInput, openFileDialog, uploadToS3 } = useS3Upload();

  let handleFileChange = async file => {
    let { url } = await uploadToS3(file);
    setImageUrl(url);
  };

  return (
    <div>
      <FileInput onChange={handleFileChange} />
      <button onClick={openFileDialog}>Upload file</button>
      {imageUrl && <img src={imageUrl} />}
    </div>
  );
}
Hi, sorry if I can't provide many details, I don't really know where to start to debug this.
I have a next.js app using next-auth and next-s3-upload.
When I upload a file using my browser (the one used to develop the application) it works fine.
When I upload a file using another browser (even on another computer) I get a 403 error.
the response from S3 is
<Error> <Code>SignatureDoesNotMatch</Code> <Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message>
This points me to the aws configuration, which I made following your instructions. Any idea?
thanks.
If the API route isn't set up correctly we should throw an error. The upload function will get back an invalid response from the Next server.
I ran into issues with the suggested CORS config once I tried uploading an image that was large. We discovered that the AWS SDK switches to a multipart upload once the file is large enough, which involves using a POST request and ETags to orchestrate.
Here's the updated CORS config I needed to allow this to successfully go through, as raw JSON:
[
  {
    "AllowedHeaders": ["*"],
    "AllowedMethods": ["PUT", "POST"],
    "AllowedOrigins": ["*"],
    "ExposeHeaders": ["ETag"]
  }
]
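If it helps anyone, a CORS document like this can be applied from the command line with the AWS CLI; the bucket name and file path below are placeholders.

```shell
# Save the JSON above as cors.json, then apply it to the bucket.
# Replace your-bucket-name with the real bucket.
aws s3api put-bucket-cors \
  --bucket your-bucket-name \
  --cors-configuration file://cors.json
```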
Show an error message if I forget to add my ENV variables.
See #2
In the docs, the policy includes
"Resource": ["arn:aws:sts::ACCOUNT_ID:federated-user/S3UploadWebToken"]
This threw a legacy parsing error on AWS. Instead I used three colons (:::) instead of two just before the ACCOUNT_ID and it worked.
Can you review whether this is the correct way to resolve the error, please?
Edit: Please ignore the above; it looks like I omitted ":federated-user/S3UploadWebToken" after replacing the ACCOUNT_ID. The matter is resolved now.
Why is everything let? This is not right and every linter screams about it. Please migrate to const everywhere; no let is reassigned anyway.
all .env var used in my project are defined by STS and I am not able to use
Can we show the progress of a file while it's uploading to S3?
Hi, thanks for the library, it has definitely saved me some hours of work! I'm not sure if this is within the scope of the project, but it should probably be mentioned in the docs: with the permissions currently described for the AWS role, the user can upload any sort of file.
In the common use case of displaying an uploaded image right away using the next/image component, Next will generate an optimized/cached version of the file that will be available at a URL like <site domain>/_next/image?url=<uri encoded original image url>. If the uploaded file is not an actual image, Next will return the file as it was uploaded. This leads to a potential XSS vulnerability, because a user can upload an HTML file that will load at the URL described above and run arbitrary JavaScript as though it were the site. Obviously checks can be done on the client to make sure the file is the correct type, but if a bad actor wanted to, I believe (though I'm not entirely sure) they could still use the headers from a valid request to make another authenticated call to upload a malicious file and manually generate the link to access it through the Next site.
My suggestion for this is simply restricting the permissions of the IAM role set up for the uploader to the allowed file types. Something like:
{
  "Sid": "S3UploadAssets",
  "Effect": "Allow",
  "Action": "s3:*",
  "Resource": [
    "arn:aws:s3:::<bucket-name>",
    "arn:aws:s3:::<bucket-name>/*.jpg",
    "arn:aws:s3:::<bucket-name>/*.png",
    "arn:aws:s3:::<bucket-name>/*.gif"
  ]
}
Let me know if that makes sense and thanks again!
Thanks for the awesome library, it makes it easy to work with S3 uploads. It would be great to add support for uploading multiple images.
Is there any way we can upload the file directly from the JS instead of going through the Next API? Next.js API routes only support a 4MB body size.
Hello!
I've been using next-s3-upload for a couple of months with no problems (happy user here!). Yesterday, a user tried to upload a 6GB MOV file and told me that the upload wasn't working.
I checked it and found an error when trying getFileContents() in the use-s3-upload.tsx file. When the FileReader tries to readAsArrayBuffer(file), it shows this error:
"The requested file could not be read, typically due to permission problems that have occurred after a reference to a file was acquired."
After a lot of research I found other people with the same 6GB problem, and it seems the only solution is to slice the file and do a multipart upload.
The thing is that the S3 SDK handles this problem for us directly, as indicated in its documentation:
"Uploads an arbitrarily sized buffer, blob, or stream, using intelligent concurrent handling of parts if the payload is large enough. You can configure the concurrent queue size by setting options. Note that this is the only operation for which the SDK can retry requests with stream bodies."
I tried passing the file directly in the S3 params, and it works!
let s3 = new S3({
  accessKeyId: data.token.Credentials.AccessKeyId,
  secretAccessKey: data.token.Credentials.SecretAccessKey,
  sessionToken: data.token.Credentials.SessionToken,
});

let params = {
  ACL: 'public-read',
  Bucket: data.bucket,
  Key: data.key,
  Body: file,
  CacheControl: 'max-age=630720000, public',
  ContentType: file.type,
};

let s3Upload = s3.upload(params);
Does that make sense, or do you specifically need an ArrayBuffer for some reason?
Thank you very much for your work here!
It would be a great addition to be able to pass an optional config object to resize huge files, maybe with sharp(?).
Hi,
File upload works locally but not in production, even when we run a production build locally.
It hits the API and returns all the information, but the file is not uploaded; no luck finding the issue.
import { APIRoute } from 'next-s3-upload';
import { v4 as uuidv4 } from 'uuid';

const S3_UPLOAD_FOLDER = process.env.S3_UPLOAD_FOLDER;

export default APIRoute.configure({
  key(req, filename) {
    return `${S3_UPLOAD_FOLDER}/${uuidv4()}/${filename}`;
  },
});
In .env.development and .env.production:

S3_UPLOAD_KEY=key
S3_UPLOAD_SECRET=secret
S3_UPLOAD_BUCKET=blockstars.io
S3_UPLOAD_REGION=ap-southeast-1
S3_UPLOAD_FOLDER=assets-pro
The API returns data with no error, but the file is not uploading. This is the return from the production API:
{
"token":{
"ResponseMetadata":{
"RequestId":"4529b8d7-b70b-4423-841c-a57c830865de"
},
"Credentials":{
"AccessKeyId":"ASIAW6R7D3R7I-**********",
"SecretAccessKey":"jRIHCld3VQnmodBz7yFC-*******",
"SessionToken":"FwoGZXIvYXdzEC8aDNtdUkEzUcewQ2DgfiK3An5dxeY476cC7ZeWoRhnc8RkbF+lfQ235daA7+oI7ruWFfYkXvJqx8VND/QJ4nl17KIdUgwJcoJqiwIok66jQC+GzU7AYWWxcRB6gH9YSzMPG0KQvh+CDg3mGKqn5a1L9McDMvrBbvWqBtDTBNkrUlYy5vPfH/WZekCjEVenkwoB3TWLZBzZPLe6AtSK2RwcZGJHv4h1nV5KkryanXepaVWy5GBydeqH8XgPjPG9tttEu4R1tOv1j1qYie/WCuhNuarg4HKrTB1My8yz5ivda/itbFsxz5mvD1c7k1BIX/Ku+DZs1gHe1CTNSkK6RZ4iB8LAxXBTy/oIO3JBUFV8t/UxFc7xSDhEohg12oWh5x5jwvPLF9cDIG39OS/b24w8aURjUAmB3jv8FFUeFZxUFJ8iCJ4nUYWrKJSw6I8GMimIDbIGVPhlBdpAK+Dev+796mhqxQvpJnyvKTGeysg1phAaEIQUigd9cw==",
"Expiration":"2022-02-02T06:35:16.000Z"
},
"FederatedUser":{
"FederatedUserId":"477947485310:S3UploadWebToken",
"Arn":"arn:aws:sts::477947485310:federated-user/S3UploadWebToken"
},
"PackedPolicySize":33
},
"key":"assets-pro/b82a56b4-3b7c-4be3-ac5f-0792b0ff9aea/5WQbTo8U_400x400.jpeg",
"bucket":"blockstars.io",
"region":"ap-southeast-1"
}
This is the return from the development API:
{
"token":{
"ResponseMetadata":{
"RequestId":"8469386c-63a5-44ae-82b9-59e1ab0f0504"
},
"Credentials":{
"AccessKeyId":"ASIAW6R7D3R7M5ME2-*****",
"SecretAccessKey":"rs7S+KLBojbfrYwz9Ny0KM8BcHh-*****",
"SessionToken":"FwoGZXIvYXdzEC8aDN9oS7qCookJKVTBbyKzAoc5IiKn7ts7G/M42GfDEGKBmqTzaTGQIpaZgQVfCQwDf4QbmIrQSZqnuUiVWvP1REgicocO9+NkI5PFNwCkcJx1Nv8OlGhBI1OZI0Zjr+uDufbgMt/S1TvDjctiWKfKAjGKXhqAIGlbaxzIfs5q28K/cJrgcSfkN9FE+1P6ffPelKFj6ranQsEx0f1y/dAoqJZjP5g3gdr+qIlcLOYoxoVZ4urUwAiOhjO5VYnyF5rD7A2uUn5oJ5PrCWXTxpluxFBFA/vhxy7vA+iG1nd2SOf8X0D3ShcGw2rR3rnO/iSMe6ED/oNMztxCkpEKERUC48V98980DKW/ofnRe9EJbSY3chtqbz+G5wKU0XvXLU+Jp/oUGikkYpUw5oE6d1qRZjedqXRyeVifp9+ouDv7GYjfvZkoibPojwYyKXr/H4NU7t7PepxEEMyoeoCjdjHqTAf70lNByDYRWGj14dg8u+oDC/3g",
"Expiration":"2022-02-02T06:41:29.000Z"
},
"FederatedUser":{
"FederatedUserId":"477947485310:S3UploadWebToken",
"Arn":"arn:aws:sts::477947485310:federated-user/S3UploadWebToken"
},
"PackedPolicySize":32
},
"key":"assets/9911b488-6b8c-457f-a7e7-04d4e7d7be4f/5WQbTo8U_400x400.jpeg",
"bucket":"blockstars.io",
"region":"ap-southeast-1"
}
Context:
AWS Amplify,
Node v14,
Next.js 12
I am consistently receiving the following error from Next-s3-upload:
Next S3 Upload: Missing ENVs S3_UPLOAD_KEY, S3_UPLOAD_SECRET, S3_UPLOAD_REGION, S3_UPLOAD_BUCKET
I have logged the values to CloudWatch from within the lambda that handles the file upload:
// pages/api/s3-upload.js
import { APIRoute } from 'next-s3-upload';

console.log('S3_UPLOAD_KEY', process.env.S3_UPLOAD_KEY);
console.log('S3_UPLOAD_SECRET', process.env.S3_UPLOAD_SECRET);
console.log('S3_UPLOAD_BUCKET', process.env.S3_UPLOAD_BUCKET);
console.log('S3_UPLOAD_REGION', process.env.S3_UPLOAD_REGION);

export default APIRoute.configure({
  key(req, filename) {
    return `inputs/${filename}`;
  },
});
The logs prove that the values are actually available within the lambda. I won't include them here as they are of course secrets.
I don't understand why this library is not picking them up, especially as it WORKS locally. Any clues, anyone?
Can I see an example with a validation configuration where the size and type of the image are checked from the API imageUpload route?
Hi,
I tried following the docs for a custom file input here, but the file is not uploaded and the following error is shown:
Unhandled Runtime Error
Error: Unsupported body payload object
Hi, I think this library looks very promising. I was wondering if you have any plans or thoughts on updating to v3 of aws-sdk?
I think the appropriate API docs to check out are the ones I've linked below. I could be wrong, but I thought I'd save you some time googling :)
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-sts/index.html
Hello! My name is Jinsu. Thanks for making this library, it's easy and awesome!
I'm starting a new side project with my friends, and we're going to use this library for image uploads (Next.js - S3 - Vercel).
When I test this function on localhost it works fine, but in production it doesn't work and returns a 404 error.
Can I know how to solve this?
The code is the same in both environments, so this issue seems like an env settings problem, maybe IAM (not sure).
I'd appreciate it if you could answer. Thanks for reading this issue and for your support!
[S3 Settings]
[IAM Setting]
Hi,
Currently I am uploading files to S3 using multerS3Storage. That works fine, but after deploying to Vercel I am hitting the max payload size of 5MB (https://vercel.com/docs/concepts/limits/overview#serverless-function-payload-size-limit).
So I'd like to ask, as I don't understand the lib 100%: does the lib upload files straight to S3 and bypass the API, or does it require proxying through the API?
Thanks for the great-looking repository.
It would be great if the documentation could include an example of how to use the APIRoute with custom authentication.
What I'm personally trying to achieve is to use withIronSessionApiRoute, as in the example here: https://github.com/vercel/next.js/blob/canary/examples/with-iron-session/pages/api/events.ts
next-s3-upload would otherwise allow anyone to use the uploader, so this should definitely be documented from the beginning.
Hello! I just found and implemented this library in my project, but I would like a way to customize the path of my files.
A simple path parameter that we could pass alongside our file in uploadToS3 would be fantastic.
If there is already a way to do this, adding it to the docs would be super helpful. Thanks!
Hi there,
After an hour of looking over the configuration and re-running through the instructions, I'm getting a persistent 403. The text of the error AWS gives is SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
Has anyone bumped into this before? I've checked the .env variables and am just using the stock example code from the README.
Thanks for your work on this project!
Hey it would be awesome if you could add a delete image feature to this. Or maybe I could add it. Would need to look at this repo more.
Hi There!
When I click the S3 URL, the file is downloaded instead of opening in the tab. How do I fix this?
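This behavior is usually driven by the object's metadata in S3: if an object is stored without a proper Content-Type (or with an attachment Content-Disposition), browsers download it instead of rendering it. A sketch of upload params that favor inline display; inlineParams is a made-up helper name, not library API.

```javascript
// Hypothetical param builder favoring inline display in the browser.
function inlineParams(bucket, key, file) {
  return {
    Bucket: bucket,
    Key: key,
    Body: file,
    ContentType: file.type,       // e.g. "application/pdf"
    ContentDisposition: 'inline', // ask the browser to render, not download
  };
}
```

Objects already uploaded with the wrong metadata would need to be re-uploaded (or copied over themselves) with these fields set.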
I'm trying to use this library with DigitalOcean Spaces as it uses the same s3 compatible storage api. But I'm not sure how to set the Endpoint of digitalocean in the library.
I have deployed a Next.js app on Heroku.
In development I can upload images to the S3 bucket, but not in production.
I have checked the env variables on my Heroku dyno; I am using the same vars as in dev.
When I upload the image I only get a 200 GET request; the PUT request never happens.
Do I need to add any env variables other than these?
Thanks for your answer.
S3_UPLOAD_KEY
S3_UPLOAD_SECRET
S3_UPLOAD_REGION
S3_UPLOAD_BUCKET
"next": "^12.0.3",
"next-s3-upload": "^0.1.7",
Hey!
Are you going to implement support for private buckets, and a way to retrieve a signed URL for the uploaded images?
How can I modify this lib so that images are uploaded to the local server, into public/images?
It'd be nice to be able to pass a headers object to the useS3Upload hook. Other fetch options like method could be useful as well.
Why? For example, Blitz.js requires all incoming API requests to have an anti-csrf header.
Thanks!