parse-server-s3-adapter's Introduction

Parse Server S3 File Adapter



The official AWS S3 file storage adapter for Parse Server. See Parse Server S3 File Adapter Configuration for more details.


Installation

npm install --save @parse/s3-files-adapter

AWS Credentials

⚠️ The ability to explicitly pass credentials to this adapter is deprecated and will be removed in a future release.

You may already be compatible with this change. If you have not explicitly set an accessKey and secretKey and you have configured the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, then you're all set and this will continue to work as is.

If you explicitly configured the environment variables S3_ACCESS_KEY and S3_SECRET_KEY

OR

If you explicitly configured the accessKey and secretKey in your adapter configuration, then you'll need to...

For non-AWS hosts:

  • Run aws configure in a terminal which will step you through configuring credentials for the AWS SDK and CLI

For an AWS host:

  • Ensure that the role that your host is running as has permissions for your s3 bucket

Then

  • remove the accessKey and secretKey from your configuration

If for some reason you really need to be able to set the key and secret explicitly, you can still do it by using s3overrides as described below and setting accessKeyId and secretAccessKey in the s3overrides object.
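For example, a minimal sketch of that escape hatch (bucket name and key values are placeholders; prefer the default AWS credential chain whenever possible):

const S3Adapter = require('@parse/s3-files-adapter');

// Example only: the second argument is the s3overrides object passed through
// to AWS.S3, so credentials can still be set explicitly there.
const s3Adapter = new S3Adapter(
  { bucket: 'my_bucket' },
  {
    accessKeyId: 'my-access-key-id',        // placeholder
    secretAccessKey: 'my-secret-access-key' // placeholder
  }
);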

Deprecated Configuration

Although it is not recommended, AWS credentials can be explicitly configured through an options object, constructor string arguments or environment variables (see below). This option is provided for backward compatibility and will be removed in the forthcoming version 2.0 of this adapter.

The preferred method is to use the default AWS credentials pattern. If no AWS credentials are explicitly configured, the AWS SDK will look for credentials in the standard locations used by all AWS SDKs and the AWS CLI. More info can be found in the docs. For more information on AWS best practices, see IAM Best Practices User Guide.
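For example, a minimal sketch of an adapter that relies entirely on the default credential chain (bucket and region are placeholders):

const S3Adapter = require('@parse/s3-files-adapter');

// No accessKey/secretKey here: the AWS SDK resolves credentials from
// AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, the shared credentials file
// (~/.aws/credentials), or an attached IAM role, in its standard order.
const s3Adapter = new S3Adapter({
  bucket: 'my_bucket',
  region: 'us-east-1'
});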

Usage with Parse Server

Parameters

(This list is still incomplete and a work in progress; in the meantime, find more descriptions in the sections below.)

  • fileAcl — optional; default: undefined; environment variable: S3_FILE_ACL. Sets the Canned ACL of the file when storing it in the S3 bucket. Setting this parameter overrides the file ACL that would otherwise depend on the directAccess parameter. Setting the value 'none' causes any ACL parameter to be removed that would otherwise be set.
  • presignedUrl — optional; default: false; environment variable: S3_PRESIGNED_URL. If true, a presigned URL is returned when requesting the URL of a file. The URL is only valid for a specified duration, see parameter presignedUrlExpires.
  • presignedUrlExpires — optional; default: undefined; environment variable: S3_PRESIGNED_URL_EXPIRES. Sets the duration in seconds after which the presigned URL of the file expires. If no value is set, the AWS S3 SDK default Expires value applies. This parameter requires presignedUrl to be true.

Using a config file

{
  "appId": "my_app_id",
  "masterKey": "my_master_key",
  // other options
  "filesAdapter": {
    "module": "@parse/s3-files-adapter",
    "options": {
      "bucket": "my_bucket",
      // optional:
      "region": "us-east-1", // default value
      "bucketPrefix": "", // default value
      "directAccess": false, // default value
      "fileAcl": null, // default value
      "baseUrl": null, // default value
      "baseUrlDirect": false, // default value
      "signatureVersion": "v4", // default value
      "globalCacheControl": null, // default value. Or "public, max-age=86400" for 24 hrs Cache-Control
      "presignedUrl": false, // optional. If true, a presigned URL is returned when requesting the URL of a file. The URL is only valid for a specified duration, see parameter `presignedUrlExpires`. Default is false.
      "presignedUrlExpires": null, // optional. Sets the duration in seconds after which the presigned URL of the file expires. Defaults to the AWS S3 SDK default Expires value.
      "ServerSideEncryption": "AES256|aws:kms", // AES256 or aws:kms, or if you do not pass this, encryption won't be done
      "validateFilename": null, // defaults to parse-server FilesAdapter::validateFilename
      "generateKey": null // will default to Parse.FilesController.preserveFileName
    }
  }
}

Note: By default, Parse Server (Parse.FilesController.preserveFileName) prefixes all file names with a random hex code. You will want to disable that prefixing if you set generateKey here or wish to use S3 "directories".
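A minimal sketch of that combination, assuming the Parse Server preserveFileName option and the generateKey option described above (names and keys are placeholders):

var ParseServer = require('parse-server').ParseServer;
var S3Adapter = require('@parse/s3-files-adapter');

var s3Adapter = new S3Adapter({
  bucket: 'my_bucket',
  // Store files under a "directory" (key prefix) inside the bucket.
  generateKey: (filename) => `uploads/${Date.now()}_${filename}`
});

var api = new ParseServer({
  appId: 'my_app_id',
  masterKey: 'my_master_key',
  // Disable the random hex prefix so the generateKey result is used verbatim.
  preserveFileName: true,
  filesAdapter: s3Adapter
});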

using environment variables

Set your environment variables:

S3_BUCKET=bucketName

The following optional configuration can also be set via environment variables:

S3_SIGNATURE_VERSION=v4

And update your config / options

{
  "appId": 'my_app_id',
  "masterKey": 'my_master_key',
  // other options
  "filesAdapter": "@parse/s3-files-adapter"
}

passing as an instance

var S3Adapter = require('@parse/s3-files-adapter');

var s3Adapter = new S3Adapter(
  'accessKey',
  'secretKey',
  'bucket',
  {
    region: 'us-east-1',
    bucketPrefix: '',
    directAccess: false,
    baseUrl: 'http://images.example.com',
    signatureVersion: 'v4',
    globalCacheControl: 'public, max-age=86400',  // 24 hrs Cache-Control.
    presignedUrl: false,
    presignedUrlExpires: 900,
    validateFilename: (filename) => {
      if (filename.length > 1024) {
        return 'Filename too long.';
      }
      return null; // Return null on success
    },
    generateKey: (filename) => {
      return `${Date.now()}_${filename}`; // unique prefix for every filename
    }
  }
);

var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});

Note: there are a few ways you can pass arguments:

S3Adapter("bucket")
S3Adapter("bucket", options)
S3Adapter("key", "secret", "bucket") -- Deprecated, see notice above
S3Adapter("key", "secret", "bucket", options) -- Deprecated, see notice above
S3Adapter(options) // where options must contain bucket.
S3Adapter(options, s3overrides)

If you use the last form, s3overrides are the parameters passed to AWS.S3.

In this form if you set s3overrides.params, you must set at least s3overrides.params.Bucket
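For example, a minimal sketch of the S3Adapter(options, s3overrides) form (values are placeholders):

var S3Adapter = require('@parse/s3-files-adapter');

// Adapter options.
var options = { bucket: 'my_bucket', directAccess: false };

// Overrides passed straight through to the AWS.S3 constructor.
// If params is set here, it must contain at least Bucket.
var s3overrides = {
  signatureVersion: 'v4',
  params: { Bucket: 'my_bucket' }
};

var s3Adapter = new S3Adapter(options, s3overrides);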

or with an options hash

var S3Adapter = require('@parse/s3-files-adapter');

var s3Options = {
  "bucket": "my_bucket",
  // optional:
  "region": 'us-east-1', // default value
  "bucketPrefix": '', // default value
  "directAccess": false, // default value
  "baseUrl": null, // default value
  "signatureVersion": 'v4', // default value
  "globalCacheControl": null, // default value. Or 'public, max-age=86400' for 24 hrs Cache-Control
  "presignedUrl": false, // default value
  "presignedUrlExpires": 900, // default value (900 seconds)
  "validateFilename": () => null, // Anything goes!
  "generateKey": (filename) => filename // Ensure Parse.FilesController.preserveFileName is true!
}

var s3Adapter = new S3Adapter(s3Options);

var api = new ParseServer({
  appId: 'my_app',
  masterKey: 'master_key',
  filesAdapter: s3Adapter
});

Usage with Digital Ocean Spaces

var S3Adapter = require("@parse/s3-files-adapter");
var AWS = require("aws-sdk");

//Configure Digital Ocean Spaces EndPoint
const spacesEndpoint = new AWS.Endpoint(process.env.SPACES_ENDPOINT);
var s3Options = {
  bucket: process.env.SPACES_BUCKET_NAME,
  baseUrl: process.env.SPACES_BASE_URL,
  region: process.env.SPACES_REGION,
  directAccess: true,
  globalCacheControl: "public, max-age=31536000",
  presignedUrl: false,
  presignedUrlExpires: 900,
  bucketPrefix: process.env.SPACES_BUCKET_PREFIX,
  s3overrides: {
    accessKeyId: process.env.SPACES_ACCESS_KEY,
    secretAccessKey: process.env.SPACES_SECRET_KEY,
    endpoint: spacesEndpoint
  }
};

var s3Adapter = new S3Adapter(s3Options);

var api = new ParseServer({
  databaseURI: process.env.DATABASE_URI || "mongodb://localhost:27017/dev",
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + "/cloud/main.js",
  appId: process.env.APP_ID || "myAppId",
  masterKey: process.env.MASTER_KEY || "",
  serverURL: process.env.SERVER_URL || "http://localhost:1337/parse",
  logLevel: process.env.LOG_LEVEL || "info",
  allowClientClassCreation: false,
  filesAdapter: s3Adapter
});

Adding Metadata and Tags

Use the optional options argument to add metadata and/or tags to S3 objects:



const S3Adapter = require('@parse/s3-files-adapter');

const s3Options = {}; // Add correct options
const s3Adapter = new S3Adapter(s3Options);

const filename = 'Fictional_Characters.txt';
const data = 'That\'s All Folks!';
const contentType = 'text/plain';
const tags = {
  createdBy: 'Elmer Fudd',
  owner: 'Popeye'
};
const metadata = {
  source: 'Mickey Mouse'
};
const options = { tags, metadata };
s3Adapter.createFile(filename, data, contentType, options);

Note: This adapter will automatically add the "x-amz-meta-" prefix to metadata keys, as stated in the S3 documentation.

parse-server-s3-adapter's People

Contributors

acinader, andrewalc, c0nstructer, cbaker6, danielsanfr, davidruisinger, davimacedo, dependabot[bot], dplewis, dsix-work, flovilmart, greenkeeper[bot], joeyslack, jsuresh, kbog, kylebarron, m12331, mrmarcsmith, mtrezza, parseplatformorg, seijiakiyama, semantic-release-bot, snyk-bot, stevestencil, ststroppel, tmocellin, tomwfox, uzaysan


parse-server-s3-adapter's Issues

Add BackBlaze Support to Docs

Thanks to @uzaysan, I realized we could use BackBlaze instead of S3. It needed a bit of tinkering until I got it to work, but I figured it would be nice to have this in the readme. These are the settings I used to make it work:

const s3Adapter = new S3Adapter({
  bucket: process.env.S3_BUCKET,
  directAccess: true,
  baseUrl: process.env.S3_BASE_URL, // taken from BackBlaze, normally https://BUCKET.s3.REGION.backblazeb2.com
  baseUrlDirect: false,
  signatureVersion: 'v4',
  globalCacheControl: 'public, max-age=86400',
  region: 'us-west-000',
  s3overrides: {
    endpoint: process.env.S3_ENDPOINT, // check backblaze bucket endpoint
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY
  },
});

Files are not deleted on AWS S3 after being deleted on Parse-Dashboard.

I'm not sure if it's an issue or if it's intended, but replacing or deleting a file in Parse Dashboard does not delete it from the AWS S3 bucket. Not much of a problem, but in the long run the bucket will fill up with unused files...

If it's an issue, I can try to PR something.

The S3 file adapter seems not working

I used S3 file adapter in my project with parse-server 2.2.4. Here is the code:

var S3Adapter = require('parse-server').S3Adapter

var api = new ParseServer({
  databaseURI:  config.dbUrl,
  cloud:        __dirname + '/cloud/main.js',
  appId:        config.appId,
  masterKey:    config.masterId, 
  clientKey:    'clientKey',
  serverURL:    config.serverUrl,
  fileAdapter: new S3Adapter(
    s3config.accessKey,
    s3config.secretKey,
    s3config.bucket,
    {directAccess: true,region:s3config.region}
  ),
});

But the file is not saved to my S3 bucket when I used REST API to upload a file like this:

curl -X POST \
  -H "X-Parse-Application-Id: myAppId" \
  -H "Content-Type: text/plain" \
  -d 'Hello, World!' \
  http://localhost:1337/parse/files/hello.txt

It returns:

{"url":"http://localhost:1337/parse/files/myAppId/af5480f12aeaa449a3b424d4d8386670_hello.txt","name":"af5480f12aeaa449a3b424d4d8386670_hello.txt"}

Apparently it's not in S3 bucket. (I checked my S3 bucket, it's empty).

To make sure the S3 configuration is correct, I wrote a simple Nodejs app with the same S3. It works well. So the problem is either in parse-server or the S3 file adapter. Any idea? Thanks.

Using parse-server-s3-adapter with Cognito?

Hi,

I'm not very well versed in S3 and I'm not a Parse server expert, so please forgive my ignorance.

I have an iOS app with a Parse backend in development. Currently, I'm saving and retrieving S3 files on the iOS client, mostly because I was unaware of the ability to use an adapter such as this. I'd like to migrate to this adapter if possible.

I was sort of stopped dead in my tracks by (as usual) Amazon S3 security. I use Cognito, which is recommended by Amazon and I don't (think I) pass keys and secrets. It seems that this adapter does not support Cognito access.

Can someone point me in the right direction please?

Thanks,
Peter...

Endpoint not correctly overridden

I'm following the Parse Server guide trying to set up the file adapter to work with Digital Ocean spaces, which uses an S3-compliant API. I've essentially copy-pasted the guide's example, but it seems like the endpoint override isn't carrying through to the actual file adapter object.

Example code:

require("dotenv").config();

var S3Adapter = require("parse-server").S3Adapter;
var AWS = require("aws-sdk");

var spacesEndpoint = new AWS.Endpoint(process.env.SPACES_ENDPOINT);

var s3Options = {
  bucket: process.env.S3_BUCKET_NAME,
  baseUrl: process.env.S3_BASE_URL,
  region: process.env.S3_REGION,
  directAccess: true,
  globalCacheControl: "public, max-age=31536000",
  bucketPrefix: process.env.S3_BUCKET_PREFIX,
  s3overrides: {
    accessKeyId: process.env.S3_ACCESS_KEY,
    secretAccessKey: process.env.S3_SECRET_KEY,
    endpoint: spacesEndpoint
  }
};

var s3AccessKey = process.env.S3_ACCESS_KEY;
var s3SecretKey = process.env.S3_SECRET_KEY;
var s3Bucket = process.env.S3_BUCKET_NAME;
var filesAdapter = new S3Adapter(s3AccessKey, s3SecretKey, s3Bucket, s3Options);

console.log(spacesEndpoint);
console.log(s3Options);
console.log(filesAdapter);

Where the environment variables are:

S3_ACCESS_KEY=
S3_BASE_URL=https://nst-guide-parse.nyc3.digitaloceanspaces.com
S3_BUCKET_NAME=nst-guide-parse
S3_BUCKET_PREFIX=parse-files/
S3_REGION=nyc3
S3_SECRET_KEY=
SPACES_ENDPOINT=nyc3.digitaloceanspaces.com

And output:

Endpoint {
  protocol: 'https:',
  host: 'nyc3.digitaloceanspaces.com',
  port: 443,
  hostname: 'nyc3.digitaloceanspaces.com',
  pathname: '/',
  path: '/',
  href: 'https://nyc3.digitaloceanspaces.com/'
}
{
  bucket: 'nst-guide-parse',
  baseUrl: 'https://nst-guide-parse.nyc3.digitaloceanspaces.com',
  region: 'nyc3',
  directAccess: true,
  globalCacheControl: 'public, max-age=31536000',
  bucketPrefix: 'parse-files/',
  s3overrides: {
    accessKeyId: 'redacted',
    secretAccessKey: 'redacted',
    endpoint: Endpoint {
      protocol: 'https:',
      host: 'nyc3.digitaloceanspaces.com',
      port: 443,
      hostname: 'nyc3.digitaloceanspaces.com',
      pathname: '/',
      path: '/',
      href: 'https://nyc3.digitaloceanspaces.com/'
    }
  }
}
S3Adapter {
  _region: 'nyc3',
  _bucket: 'nst-guide-parse',
  _bucketPrefix: 'parse-files/',
  _directAccess: true,
  _baseUrl: 'https://nst-guide-parse.nyc3.digitaloceanspaces.com',
  _baseUrlDirect: false,
  _signatureVersion: 'v4',
  _globalCacheControl: 'public, max-age=31536000',
  _encryption: undefined,
  _s3Client: Service {
    config: Config {
      credentials: [Credentials],
      credentialProvider: [CredentialProviderChain],
      region: 'nyc3',
      logger: null,
      apiVersions: {},
      apiVersion: null,
      endpoint: 's3.nyc3.amazonaws.com',
      httpOptions: [Object],
      maxRetries: undefined,
      maxRedirects: 10,
      paramValidation: true,
      sslEnabled: true,
      s3ForcePathStyle: false,
      s3BucketEndpoint: false,
      s3DisableBodySigning: true,
      computeChecksums: true,
      convertResponseTypes: true,
      correctClockSkew: false,
      customUserAgent: null,
      dynamoDbCrc32: true,
      systemClockOffset: 0,
      signatureVersion: 'v4',
      signatureCache: true,
      retryDelayOptions: {},
      useAccelerateEndpoint: false,
      clientSideMonitoring: false,
      endpointDiscoveryEnabled: false,
      endpointCacheSize: 1000,
      hostPrefixEnabled: true,
      stsRegionalEndpoints: null,
      params: [Object],
      globalCacheControl: 'public, max-age=31536000',
      accessKeyId: 'redacted',
      secretAccessKey: 'redacted'
    },
    isGlobalEndpoint: false,
    endpoint: Endpoint {
      protocol: 'https:',
      host: 's3.nyc3.amazonaws.com',
      port: 443,
      hostname: 's3.nyc3.amazonaws.com',
      pathname: '/',
      path: '/',
      href: 'https://s3.nyc3.amazonaws.com/'
    },
    _events: { apiCallAttempt: [Array], apiCall: [Array] },
    MONITOR_EVENTS_BUBBLE: [Function: EVENTS_BUBBLE],
    CALL_EVENTS_BUBBLE: [Function: CALL_EVENTS_BUBBLE],
    _clientId: 1
  },
  _hasBucket: false
}

You can see that the endpoint points to the Digital Ocean Spaces URL in s3Options, but once passed to S3Adapter(), the values revert to pointing at AWS.

file upload progressBar callback

Hello Guys

I am using this adapter in Parse Server to upload files and it works absolutely fine, but I want to integrate a progress bar with the file upload. I couldn't find whether this functionality is available in this module, because I didn't get any callback during the upload. I am using the REST API in my project.

Thanks

Wrong Date in parsed by bucketPrefix

Hi,
Thank you for this great package and your effort.
I'm facing trouble that when I set the path of bucketPrefix by Date, For example:

    const date = new Date();
    const year = date.getFullYear();
    const month = date.getMonth() + 1;
    const day = date.getDate();

    const imagePath = `uploaded/${year}/${month}/${day}/`

    var s3Adapter = new S3Adapter('test', {
      bucketPrefix: imagePath,
    });

The first day I execute this code the path is working fine,
Example: uploaded/2020/4/15/

But after a complete day of running in PM2, the date becomes wrong.
Example: it should show (uploaded/2020/4/16/) but instead it shows (uploaded/2020/4/15/)

I checked the date using Parse Cloud and it's working right

I know it's a weird issue but I thought I must report it.

Best regards.
Abdulaziz Noor
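If the cause is that bucketPrefix is evaluated only once when the adapter is constructed, one workaround (a sketch, not an official fix) is to compute the date per upload with the generateKey option instead of a fixed bucketPrefix:

const S3Adapter = require('@parse/s3-files-adapter');

// Workaround sketch: generateKey runs for every upload, so the date is
// computed at upload time rather than once at server start.
const s3Adapter = new S3Adapter('test', {
  generateKey: (filename) => {
    const date = new Date();
    return `uploaded/${date.getFullYear()}/${date.getMonth() + 1}/${date.getDate()}/${filename}`;
  }
});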

Upgrade to AWS JS SDK v3 needed

Today I started the alpha build of parse-server using the parse-server-s3-adapter and received this (for me new) warning message:

(node:1) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.

Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy

This seems to be coming from the fact that this repo depends on aws-sdk: 2

"aws-sdk": "2.1368.0"

Detailed migration instructions seem to be available here: https://github.com/aws/aws-sdk-js-v3/blob/main/UPGRADING.md

Tests are failing

The tests have been failing since 7/27/18. (I still don't know why Travis stopped emailing when builds break :().

I suspect that the problem is that wherever the tests save files to, and whatever credentials they use, are no longer valid.

The test that is failing is included from another repo: https://github.com/parse-community/parse-server-conformance-tests/blob/master/specs/files.js

The location of the files and the credentials being used are encrypted in the .travis file.

I use this adapter. It is simple. Almost all of the tortured logic is option handling which is well covered.

I do not want to: setup a bucket for tests, set its permissions, test it locally without my own credentials being involved, encode it, deploy and test the deployment.

So, unless someone else wants to take this one, I see these options:

  1. just leave it alone. the lodash security issue almost certainly cannot be exploited here (but that's a guess)

  2. rip out the conformance dependency and also remove the encrypted vars from travis.

  3. someone else steps up and gets the conformance/integration tests all straightened out.

My preference is #3 :). though I am not going to hold my breath...

So that leaves my preference as #2 which I could do in a few minutes today.

@davimacedo have a preference?

Don't accept Key and Secret as constructor arguments

Introduce a breaking change that would continue to allow a user to configure AWS credentials explicitly during adapter construction, but remove the discrete parameters from the constructor.

See #14 for links to AWS best practice docs and some discussion of this issue.

Currently standard configuration of AWS OR explicit credentials can be used. Accepting both makes both the documentation and argument processing complicated and error prone.

In addition, by not conforming to best practices, it promotes putting secret keys in places where they shouldn't go.

I'm proposing to:

  1. Remove the explicit constructor parameters for key and secret
  2. Remove the env vars S3_ACCESS_KEY & S3_SECRET_KEY as a configuration option for credentials (and make clear in the docs that AWS_ACCESS_KEY_ID & AWS_SECRET_ACCESS_KEY work)
  3. Rename the member of the config options from: accessKey & secretKey to accessKeyId & secretAccessKey to match the AWS SDK.
  4. Update documentation and tests to reflect change

I don't see anywhere where this change would require a doc change in parse-server, but a decision would have to be made about when to either include the revision change in the parse-server package.json, or potentially remove all references in code to this adapter and update the documentation to show how to use it (which I think would be preferable, as it would be a good model for how to use an adapter that is not explicitly referenced in parse-server).

Clarification on where to put Keys ACCESS/SECRET

I think it would be great to clarify where these keys can be put in order to remove them from the config for the s3 file adaptor.

Reading the docs there are a few ways and some people may not have options to do one over the other.

For example, using the AWS CLI may not work for people using Heroku, which leads to the question: can a user add these into the Heroku admin/environment (config vars) area instead?

From Docs:
Environment Variables – AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, etc.

AWS_ACCESS_KEY_ID=xxxx
AWS_SECRET_ACCESS_KEY=xxx

I'm going to make the assumption this works or will work..

Label: Documentation clarification.

Pass S3Override as environmental variable

We use Docker to spin up our Parse Servers, which leaves environment variables as the cleanest way to pass information along. Is there a way to pass "s3OverrideOptions" as an environment variable? If not, that would be an incredible feature for me.
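There does not appear to be a dedicated environment variable for this today; one workaround (a sketch, assuming a user-defined variable such as S3_OVERRIDES holding a JSON string) is to parse it in the server entry point:

// S3_OVERRIDES is a hypothetical, user-defined variable, e.g.
// S3_OVERRIDES='{"endpoint":"nyc3.digitaloceanspaces.com"}'
const S3Adapter = require('@parse/s3-files-adapter');

const s3overrides = process.env.S3_OVERRIDES
  ? JSON.parse(process.env.S3_OVERRIDES)
  : {};

const s3Adapter = new S3Adapter({ bucket: process.env.S3_BUCKET }, s3overrides);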

Some S3 compatible storages don't support tags

We are using Digital Ocean Spaces and they don't support tags. The API simply returns an error when there are tags in the file upload request, so this file adapter doesn't work with Digital Ocean Spaces right now.

What I suggest is to send tags only when tags are actually present in the file options. It should be safe: when somebody wants to use tags in their project, they have to use storage with tag support. And the other way around, there is no benefit in sending an empty tags dictionary to S3.

I created a pull request that handles this.

Content-Type is set to text/plain for image files

I'm using DigitalOcean's Spaces with my 2.8.4 Parse Server. I realized that when an image is uploaded, it has its Content-Type set to text/plain. Even when I download one directly from DigitalOcean's dashboard to my computer, it's saved as a .txt file! My app though displays them perfectly.

Here's my Parse Server setup:

....
var S3Adapter = require('parse-server').S3Adapter;
var AWS = require("aws-sdk");

//Set Digital Ocean Spaces EndPoint
const spacesEndpoint = new AWS.Endpoint('fra1.digitaloceanspaces.com');
//Define S3 options
var s3Options = {
  bucket: 'bucketName',
  baseUrl: 'https://domain.com',
  region: 'fra1',
  directAccess: true,
  globalCacheControl: "public, max-age=31536000",
  s3overrides: {
    accessKeyId: 'accessKey',
    secretAccessKey: ' secretAccessKey',
    endpoint: spacesEndpoint
  }
};

var s3Adapter = new S3Adapter(s3Options);

var api = new ParseServer({
  ....
  filesAdapter: s3Adapter
  ....
});

Any idea why?

When uploading image, wrong url

Migration went fine and all my images are located in bucket:
eg.: https://dishoftheday.s3.amazonaws.com/mfp_3dec89eThumb_btbUfnkKQS.jpg

But when I am trying to upload via javascript SDK it tries to upload to following location:
http://ec2-52-90-46-148.compute-1.amazonaws.com/parse/files/xxx.jpg

What am I missing?

My server.js:

var s3Options = {
  "bucket": "dishoftheday",
  "accessKey": "xxx",
  "secretKey": "xxx",
  "region": 'us-east-1',
  "bucketPrefix": '',
  "directAccess": true,
  "baseUrl": null,
  "signatureVersion": 'v4',
  "globalCacheControl": 'public, max-age=86400'
}

var s3Adapter = new S3Adapter(s3Options);

var api = new ParseServer({
  databaseURI: "mongodb://dishoftheday:....",
  cloud: "./cloud/main.js",
  appId: "xxx",
  masterKey: "xxx",
  fileKey: "xxx",
  clientKey: "xxx",
  javascriptKey: "xxx",
  serverURL: 'http://ec2-52-90-46-148.compute-1.amazonaws.com/parse/',
  filesAdapter: s3Adapter,
  push: {
    ios: [{
      pfx: './certs/pushDev.p12', // the path and filename to the .p12 file you exported earlier.
      bundleId: 'dk.dida.dishoftheday', // The bundle identifier associated with your app
      production: false // Specifies which environment to connect to: Production (if true) or Sandbox
    }, {
      pfx: './certs/pushProd.p12',
      bundleId: 'dk.dida.dishoftheday',
      production: true
    }]
  }
});

Multiple bucket and folder support

I have modified this adapter so that I can add URL params to the file name to dictate which bucket and folder a file should be stored in. For example, image.png?bucket=foo&folder=bar would store a file called image.png in the bar folder of the foo bucket. This allows me, for example, to modify the file name in the beforeSaveFile hook so that I can have each user's files separated into folders and/or buckets. If there is interest in adding this functionality, I'd be happy to submit a PR.
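A sketch of the beforeSaveFile renaming approach mentioned above (Parse Server 4.x Cloud Code API; the per-user folder naming is illustrative, and this only prefixes the key rather than switching buckets):

// Prefix each file with a per-user "folder" before it reaches the adapter.
Parse.Cloud.beforeSaveFile(async (request) => {
  const { file, user } = request;
  const folder = user ? user.id : 'anonymous';
  // Returning a new Parse.File renames the upload; the adapter then stores
  // it under "folder/filename".
  return new Parse.File(`${folder}/${file.name()}`, { base64: await file.getData() });
});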

File URL not being replaced at runtime

Hi, I'm trying to use an s3 file adapter with my parse-server, and I think I'm experiencing some strange behavior.

URL variables:

PUBLIC_SERVER_URL=http://localhost:1337/parse
PARSE_SERVER_URL=http://localhost:1337/parse

(Note, I've tried this on my production server, which has localhost for PARSE_SERVER_URL and a publicly accessible domain for PUBLIC_SERVER_URL)

Expected:
https://PUBLIC_SERVER_URL/files/FILENAME.png

What I'm seeing:
https://BUCKET_NAME.s3.amazonaws.com/FILENAME.png

Initialization Code:

require('dotenv').config()
let express = require('express');
let ParseServer = require('parse-server').ParseServer;
var S3Adapter = require('@parse/s3-files-adapter');

var s3Adapter = new S3Adapter(process.env.S3_BUCKET, {
    region: process.env.S3_REGION,
    bucketPrefix: '',
    directAccess: false,
    signatureVersion: 'v4',
    globalCacheControl: 'public, max-age=86400'
});

let api = new ParseServer({
    databaseURI: databaseUri,
    cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
    appId: process.env.APP_ID,
    masterKey: process.env.MASTER_KEY,
    serverURL: process.env.PARSE_SERVER_URL,
    publicServerURL: process.env.PUBLIC_SERVER_URL,
    clientKey: process.env.CLIENT_KEY,
    filesAdapter: s3Adapter,
    fileKey: process.env.FILE_KEY,
    verbose: true,
    appName: 'MyApp',
    maxUploadSize: '300mb',
});

Versions:
"parse-server": "4.3.0"
"@parse/s3-files-adapter": "1.6.0"

Any insight as to what I may be doing wrong would be greatly appreciated. All of my file downloads are failing with a 403, but my uploads seem to be working. I'm not sure what the cause of that could be other than the url not being generated correctly at runtime.

NoSuchKey error for some files uploaded to s3

I have used this adapter for a very long time now and never had any issues. Now I get this error from time to time:

<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>
4f4e252ea6757eec0c90307b6e152f56_a6095bf7cea57864bf955f59741461a4_11dc6c3d96f7945768b2fb59449cd544.jpeg_thumbnail.jpg
</Key>
<RequestId>EBEC91AC41BCCD15</RequestId>
<HostId>
K+QlkcrNDlLaY3eNkON9/9AwTDQpUJCKhQ7YHfe0sAROFuV87Gq9Luvpq+lYdVZ5gS5LnLK8LvM=
</HostId>
</Error>

I see the link to the file in Parse Dashboard just fine. Looks like any other file link. But some of them simply return this error.

It is very difficult for me to reproduce this. I use the files for coverart images currently. When I delete the files via parse dashboard and rerun the thumbnail generator it generates the coverarts files just fine and I can access them as usual.

This is how I generate the thumbnails:

function generateThumbnail(channel, size, coverartThumbnailAttributeName){
	if (!channel.has(coverartThumbnailAttributeName)) {
		var coverartUrl = channel.get("coverart").url();
		if(coverartUrl === ""){
			logger.debug("coverart url empty. can not generate a thumbnail.");
			return Parse.Promise.as();
		}
		logger.debug("generating thumbnail for " + channel.get("name") + ". Size: " + coverartThumbnailAttributeName);

		return Parse.Cloud.httpRequest({
			url: coverartUrl,
			followRedirects: true
		}).then(function(response) {
			var image = new Image();
			return image.setData(response.buffer);
		}).then(function(image) {
			return image.resize({ width: size, height: size, ignoreAspectRatio: true });
		}).then(function(image) {
			return image.setFormat('JPEG');
		}).then(function(image) {
			return image.data();
		}).then(function(buffer) {
			logger.debug("Buffer returned, creating Parse File");
			var base64Buffer = buffer.toString("base64");
			var filename = channel.get("coverart").name() + "_thumbnail.jpg";
			var file = new Parse.File(filename, { base64: base64Buffer }, "image/jpeg");
			return file.save();
		}).then(function(file) {
			logger.debug("Thumbnail saved");
			channel.set(coverartThumbnailAttributeName, file);
		});
	}
	return Parse.Promise.as(); // this is already resolved promise
}

So a reference to the file is only set on the channel object if file.save() succeeds. Therefore I suspect the error is in file.save(), because it returns successfully but the created AWS file is not valid.

This started happening after changing my parse server version from 2.4.2 to 2.7.4.

Any ideas what could cause this issue?

Upload image base64 > 1mb return 413 error => HEROKU - PARSE SERVER

When I try to upload an image I get this error:
Error: request entity too large
at readStream (/node_modules/raw-body/index.js:196:17)
at getRawBody (/Users/ettorepanini/Sites/giovani-promesse-2.0/node_modules/raw-
my setup is:

  region: 'eu-west-1',
  bucketPrefix: 'profile-img/',
  directAccess: false,
  signatureVersion: 'v4',
  globalCacheControl: 'public, max-age=86400'

This is the bodyParser setup:

app.use(bodyParser.json({limit: "50mb"}));
app.use(bodyParser.urlencoded({extended: false}));

This error appears only for files larger than 1 MB.

 var imgData= url.replace(/^data:image\/(png|jpeg);base64,/, "");
var parseFile = new Parse.File(name, {base64:imgData});
console.log('Call save Parse file')
parseFile.save().then(function() {
  console.log(parseFile)
  Parse.Cloud.run('saveUserImage', {
  photo : parseFile,
  playerId : playerId,
}, {	
  success: function(result) {
											

In the Parse Server I have added this:

maxUploadSize: process.env.MAX_UPLOAD_SIZE || '20mb',

Did I forget something?
For files smaller than 1 MB it works.

AWS switching to longer id's

I am sure everyone has had this email (see below); does this adapter support longer IDs?


Dear Amazon Web Services Customer,

In less than 60 days, Amazon EC2, Amazon EBS, and AWS Storage Gateway are migrating to longer format resource IDs to support the ongoing growth of Amazon Web Services as announced back in December 2017. You can click on “Switch to longer IDs” in the AWS Management Console or use the APIs to receive longer IDs today.

Systems that parse or store resource IDs may not work with longer IDs. We strongly recommend that you test your systems interoperability with longer IDs. After June 2018, all new resources will receive longer IDs. We will continue to support shorter IDs for existing resources.

Additional information, including details of how to test your systems, can be found in the FAQs and docs. Please contact the AWS support team on the community forums or via AWS Premium Support if you have questions. Click below to switch to longer IDs now.

Upload serialization unsupported when using AWS S3 when add schema.graphql in config setup

My project uses GraphQL, Parse Server, and Parse Server cloud code. I'm using AWS S3 to upload images via the filesAdapter with the module @parse/s3-files-adapter, and it works perfectly. But my project uses some custom mutations, so I need to add a custom schema.graphql in the Parse Server config file, like this: "graphQLSchema": "./schema.graphql".
Then when I test a file-upload mutation it doesn't work as usual and shows the error below. (BTW, I'm using FireCamp, a tool that helps me write GraphQL code more easily.)
(screenshot of the error omitted)

This is a code of Schema.graphql

Type declare

type SendVerificationCodePayload {
  clientMutationId: String
  status: Int!
  messages: String!
}

type ConfirmVerificationCodePayload {
  clientMutationId: String
  status: Int!
  messages: String!
  data: User!
}

type User {
  objectId: String!
  phoneNumber: String!
}

type checkrCandidatesPayload {
  messages: String!
}

Input declare

input sendVerificationCodeInput {
  "Phone Number"
  phoneNumber: String!
  clientMutationId: String
}

input confirmVerificationCodeInput {
  "Phone Number"
  phoneNumber: String!
  "Verify Code"
  verifyCode: String!
  clientMutationId: String!
}

input checkrCandidatesInput {
  phone: String!
}

extend type Mutation {
  "Send Verify code to login or register"
  createSendVerificationCode(
    input: sendVerificationCodeInput
  ): SendVerificationCodePayload @resolve(to: "sendVerificationCode")
  "Send Verify code to login or register"
  createConfirmVerificationCode(
    input: confirmVerificationCodeInput
  ): ConfirmVerificationCodePayload @resolve(to: "confirmVerificationCode")
  createCheckrCandidates(input: checkrCandidatesInput): checkrCandidatesPayload
    @resolve(to: "checkrCandidates")
}

Wrong options type when set from environment variables

Hello.

While I was implementing PR: #117, I encountered a problem with the environment variables.

As you can see in the screenshot (omitted here), the _baseUrlDirect variable has the correct type (boolean) because it was not set via an environment variable. However, the other two variables, _directAccess and _presignedUrl, which were set as environment variables, are not converted to the correct type (they should be boolean).

This problem causes a check of the form if (_directAccess) { ... } to not behave as expected.

I would like to know if I can create a PR using the https://www.npmjs.com/package/boolean lib to do this conversion?

I don't know if you noticed, but the _presignedUrlExpires variable should be an integer, yet it is also a string. However, this case can be solved with the standard JavaScript parseInt() function.
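Until the adapter normalizes these itself, a workaround sketch (using only environment variable names from the parameter table in the README above) is to convert the values before constructing the adapter:

const S3Adapter = require('@parse/s3-files-adapter');

// Convert string env vars to the expected types explicitly.
const toBool = (value) => value === true || value === 'true' || value === '1';

const s3Adapter = new S3Adapter({
  bucket: process.env.S3_BUCKET,
  presignedUrl: toBool(process.env.S3_PRESIGNED_URL),
  presignedUrlExpires: process.env.S3_PRESIGNED_URL_EXPIRES
    ? parseInt(process.env.S3_PRESIGNED_URL_EXPIRES, 10)
    : undefined
});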

Presigned URLs

This is a feature suggestion/request. It would be nice to be able to retrieve a presigned URL with dynamic expiration from the Amazon S3 bucket when directAccess is enabled. This would allow more security by letting us lock down the S3 bucket with public access turned off, while still providing the benefit of not proxying the download through the Parse Server.
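The presignedUrl and presignedUrlExpires options documented in the README above now cover this; a minimal configuration sketch (bucket name is a placeholder):

const S3Adapter = require('@parse/s3-files-adapter');

// The bucket can stay locked down; URLs handed to clients are presigned
// and expire after presignedUrlExpires seconds.
const s3Adapter = new S3Adapter({
  bucket: 'my_bucket',
  directAccess: true,
  presignedUrl: true,
  presignedUrlExpires: 900
});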

credentials are null when deployed on remote server, has values on local server

I'm setting up the S3 adapter, and I have an issue using this adapter when deployed to my Heroku server. The issue is that credentials are null on the s3 adapter object.

Running parse server 2.7.1, s3-files-adapter 1.2.1

Here's my api configuration:

//s3
var s3Options = {
	bucket:process.env.S3_BUCKET,
	region:'xxx',
	ServerSideEncryption: 'AES256'
}
var s3Adapter = new S3Adapter(s3Options);
console.log(JSON.stringify(s3Adapter, null, 2))

//api
var masterKey = process.env.MASTER_KEY || 'xxxx'
var clientKey = process.env.CLIENT_KEY || 'xxx'
var appId = process.env.APP_ID || 'xxx'
var javascriptKey = process.env.JAVASCRIPT_KEY || 'xxx'
var serverURL = process.env.SERVER_URL || 'xxx'
var publicServerURL = process.env.PUBLIC_SERVER_URL || 'xxx'
var api = new ParseServer({
	databaseURI: process.env.MONGO_URI || 'xxx',
	cloud: xxx,
	appId: appId,
	clientKey: clientKey,
	javascriptKey: javascriptKey,
	fileKey: process.env.FILE_KEY,
	masterKey: masterKey,
	serverURL: serverURL,
	publicServerURL: publicServerURL,
	filesAdapter: s3Adapter,
	auth: {
		facebook: {
			appIds: ["xxx"]
		}
	},
	facebookAppIds:["xxx"]
})
...

If I'm understanding the docs correctly, then setting the AWS_ACCESS_KEY_ID and AWS_SECRET_KEY should be enough for the S3 configuration. I've set both of these values locally (using a .env file with dotenv) and in my config vars on Heroku.

When I log s3Adapter locally (notice that credentials have values):

{
  "_region": "xxx",
  "_bucket": "xxx",
  "_bucketPrefix": "",
  "_directAccess": xxx,
  "_baseUrl": null,
  "_baseUrlDirect": false,
  "_signatureVersion": "v4",
  "_globalCacheControl": null,
  "_encryption": "AES256",
  "_s3Client": {
    "config": {
      "credentials": {
        "expired": false,
        "expireTime": null,
        "accessKeyId": "xxx",
        "profile": "default",
        "disableAssumeRole": true
      },
...

and when I log s3Adapter after deploying to Heroku (notice null for config):

{
  "_region": "xxx",
  "_bucket": "xxx",
  "_bucketPrefix": "",
  "_directAccess": xxx,
  "_baseUrl": null,
  "_baseUrlDirect": false,
  "_signatureVersion": "v4",
  "_globalCacheControl": null,
  "_encryption": "AES256",
  "_s3Client": {
    "config": {
      "credentials": null,
  ...

I've tried not using dotenv when deployed to Heroku, as I thought that might mess with the config, but that didn't yield any results. I was unable to track down this specific problem for some time, since the only error log I was getting was a 130 "couldn't save file" error. I finally got a different error message after some tinkering:

{"message":"Missing credentials in config","stack":"CredentialsError: Missing credentials in config\n at ClientRequest.<anonymous> (/app/node_modules/aws-sdk/lib/http/node.js:83:34)\n at Object.onceWrapper (events.js:313:30)\n at emitNone (events.js:106:13)\n at ClientRequest.emit (events.js:208:7)\n at Socket.emitTimeout (_http_client.js:708:34)\n at Object.onceWrapper (events.js:313:30)\n at emitNone (events.js:106:13)\n at Socket.emit (events.js:208:7)\n at Socket._onTimeout (net.js:407:8)\n at ontimeout (timers.js:475:11)","code":"CredentialsError","name":"CredentialsError","time":"2018-01-21T08:12:13.062Z","retryable":true,"originalError":{"message":"Could not load credentials from any providers","code":"CredentialsError","time":"2018-01-21T08:12:13.062Z","retryable":true,"originalError":{"message":"Connection timed out after 1000ms","code":"TimeoutError","time":"2018-01-21T08:12:13.062Z","retryable":true}},"level":"error"}

This error led me to log the s3 adapter object to find null credentials. For all I know, this could be intended behavior. Regardless, I haven't been able to get this adaptor working when deployed to Heroku.

Thanks!

Unable to delete the actual file from S3 via Parse.Cloud.httpRequest DELETE

I have successfully configured the S3Adapter to upload files to my bucket on S3. I have successfully used Parse Dashboard to upload a file and associate it with a File column called image on my table called item.

A beforeDelete Cloud function is defined as below to delete the uploaded file before a record can be deleted from database.

Parse.Cloud.beforeDelete("item", async (request) => {
	console.log("*** beforeDelete on item");
	
  // Checks if "image" has a value
  if (request.object.has("image")) {

    var file = request.object.get("image");
    var fileName = file.name();
    console.log(file.name());   // This is correct
	console.log(file.url());    // This is correct, too.
	
    const response = await Parse.Cloud.httpRequest({
      method: 'DELETE',
      url: file.url(),
      headers: {
        "X-Parse-Application-Id": "APPLICATION_ID",
        "X-Parse-Master-Key" : "a-super-secret-master-key"
      },
      success: function(httpResponse) {
        console.log('*** Deleted the file associated with the item job successfully.');
        //return httpResponse;
      },
      error: function(httpResponse) {
        console.error('*** Delete failed with response code ' + httpResponse.status + ':' + httpResponse.text);
        //return httpResponse;
      }
    });
	
	console.log("---" + response)
  } else {
    console.log('*** item object to be deleted does not have an associated image (File). No image to be deleted.');
    //return 0;
  }
});

But when I tried to delete an item from my item table, the associated file was not deleted from S3. The ParseServer threw out the following error which I assume is from AWS S3.
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>63FC6B3711D87F65</RequestId><HostId>/A3Uoq26QT0dHWnWBttPZJg53fn6+zIVI4rncxeNurIu9fVIbxOJn0GxgvrGVpt+pfZXkopYdQY=</HostId></Error>

What is wrong with my code?

What is the best practise to delete an uploaded file from S3 via ParseServer?
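One approach (a sketch, using the Parse JS SDK from Cloud Code instead of a raw HTTP DELETE against the file URL) is to destroy the Parse.File with the master key, which routes the deletion through the configured files adapter:

// Sketch: delete the stored object through the files adapter rather than
// issuing a DELETE request against the file's S3 URL (which S3 rejects here).
Parse.Cloud.beforeDelete('item', async (request) => {
  if (request.object.has('image')) {
    const file = request.object.get('image');
    await file.destroy({ useMasterKey: true });
  }
});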

S3 FileAdapter fails when bucket is located in Frankfurt

When setting up the s3 bucket with region Frankfurt (eu-central-1) the access from parse server / S3Adapter does not work. I'm using the latest version of the parse-server-s3-adapter 1.0.1. I'm testing it on my local parse installation (v 2.2.4). Without using the s3-adapter, the file is successfully saved.
This issue has already been reported on parse-community/parse-server#1176 by another user.

This is how my configuration looks like:

var S3Adapter = require('parse-server-s3-adapter');

var api = new ParseServer({
  cloud: process.env.CLOUD_CODE_MAIN || __dirname + '/cloud/main.js',
  // ...
  filesAdapter: new S3Adapter(
      "---",
      "---",
      "bucket",
      {
          region: 'eu-central-1'
          //bucketPrefix: '',
          //directAccess: true
      }
  )
});

Error uploading file to digital ocean spaces

I was trying to upload a file to Digital Ocean Spaces and every time it displayed the following error. After several attempts and logs, I ended up discovering that Digital Ocean Spaces does not accept the x-amz-tagging header, so I added an option to work around this problem.

error: Error creating a file: {"code":"InvalidArgument","region":null,"time":"2021-01-24T03:40:07.704Z","requestId":"tx0000000000000c423031d-00600cec17-47cdf09-nyc3b","statusCode":400,"retryable":false,"retryDelay":41.84567606709386,"stack":"InvalidArgument: null\n at Request.extractError (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/services/s3.js:837:35)\n at Request.callListeners (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:106:20)\n at Request.emit (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:688:14)\n at Request.transition (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.<anonymous> (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.<anonymous> (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:690:12)\n at Request.callListeners (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:116:18)\n at Request.emit (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:78:10)\n at Request.emit (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:688:14)\n at Request.transition (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:22:10)\n at AcceptorStateMachine.runTo (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/state_machine.js:14:12)\n at /home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/state_machine.js:26:10\n at Request.<anonymous> (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:38:9)\n at Request.<anonymous> (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/request.js:690:12)\n at Request.callListeners (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:116:18)\n at callNextListener (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/sequential_executor.js:96:12)\n at IncomingMessage.onEnd (/home/onemenu/one-parse/node_modules/parse-server/node_modules/aws-sdk/lib/event_listeners.js:313:13)\n at IncomingMessage.emit (events.js:203:15)\n at IncomingMessage.EventEmitter.emit (domain.js:466:23)"}

baseUrl (cloudfront URL) isn't used as PFFile's url

Hi,

I've recently started using Cloudfront service to make things faster.
Here's my config

filesAdapter: new S3Adapter(
  "someid",
  "somekey",
  "elasticbeanstalk-ap-northeast-1-xxxxx",
  {
    region: "ap-northeast-1",
    bucketPrefix: "images/",
    directAccess: true,
    baseUrl: 'http://myxxxxx.cloudfront.net',
    baseUrlDirect: true
  }
),

However, when I save PFFile and later read its URL, it's formatted as the S3 URL, e.g.
http://myparseserver-env.ap-northeast-1.elasticbeanstalk.com/parse/files/xxxx.jpeg

This makes me wonder if CloudFront baseUrl was actually used......
I was expecting something like
http://myxxxxx.cloudfront.net/xxxx.jpeg

Thank you very much for your tips!

Pull Request - Make baseUrl parameter function

#106

This is my pr. I would like to answer Davi's question here:

I’d also use typeof this._baseUrl === 'function' instead of this._baseUrl instanceof Function.

I used Function because in here you used Function instead of 'function'.

This is my first PR ever. I'm excited.

Adapter doesn't work with ap-east-1 (HK)

There is an issue with the adapter: if I'm using it with the ap-east-1 (HK) region, I can upload an image without problems; however, I get a wrong URL to access it.

The URL does not specify the region. It works perfectly with other regions.

Customize ACL

Issue

The adapter parameter directAccess currently influences

  • the ACL when writing a file to the S3 bucket:

      if (this._directAccess) {
        params.ACL = 'public-read';
      }

  • the file path when composing the URL of a requested Parse.File:

      if (this._directAccess) {
        if (this._baseUrl && this._baseUrlDirect) {
          return `${this._baseUrl}/${fileName}`;
        }
        if (this._baseUrl) {
          return `${this._baseUrl}/${this._bucketPrefix + fileName}`;
        }
        return `https://${this._bucket}.s3.amazonaws.com/${this._bucketPrefix + fileName}`;
      }
      return (`${config.mount}/files/${config.applicationId}/${fileName}`);

This parameter therefore controls two aspects that have nothing to do with each other.

Specifically, the ACL of a file put into the S3 bucket depends on the ACL management in that specific architecture. For example, regardless of the ACL of a file, the bucket itself has an ACL, the AWS IAM users and roles have ACLs.

On the other hand, when using a CDN, the file URL needs to have the base URL of the CDN. But without setting directAccess: true, the base URL cannot be set and the URL always starts with the parse server mount path:

return (`${config.mount}/files/${config.applicationId}/${fileName}`);

Solutions

There should be separate adapter parameters for

  • the file ACL to set when writing a file
  • the base URL to return when composing the file URL

The easiest way would be to simply add an optional acl parameter that allows to define the ACL to be set when writing the file. Let it default to public-read and it won't even be a breaking change.
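The fileAcl parameter documented in the README's parameter table now addresses the first point; a minimal sketch that decouples the object ACL from directAccess (values are placeholders):

const S3Adapter = require('@parse/s3-files-adapter');

// Serve files through a CDN base URL while writing objects with an explicit
// ACL; use fileAcl: 'none' to omit the ACL parameter entirely.
const s3Adapter = new S3Adapter({
  bucket: 'my_bucket',
  directAccess: true,
  baseUrl: 'https://cdn.example.com',
  fileAcl: 'private'
});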

adapter not saving files in s3 bucket but in the mongo db.

Here's my adapter settings

const S3Adapter = require('@parse/s3-files-adapter');
const s3Adapter = new S3Adapter({
  bucket: "my_bucket",
  directAccess: true,
  baseUrl: " https://buckeet.s3.us-west-004.backblazeb2.com", 
  baseUrlDirect: false,
  signatureVersion: 'v4',
  globalCacheControl: 'public, max-age=86400',
  region: 'us-west-004',
  s3overrides: {
    endpoint: "s3.us-west-004.backblazeb2.com", 
    accessKeyId: "xxxxxxxxxxxxxxxxxxxxxxx",
    secretAccessKey: "xxxxxxxxxxxxxxxxxxxxxxx"
  },
});

and then

config = {
appName: ...
.. 
  filesAdapter:s3Adapter
}

but it doesn't save in my s3 bucket.

Returning wrong file name

Assuming the following configuration:

new ParseServer({
    preserveFileName: true,
    filesAdapter: {
        module: '@parse/s3-files-adapter',
        options: {
            directAccess: false,
            validateFilename(filename: string) { return null }
        }
    }
})

And that it has a file saved with the following name: folder/bar-bar.jpg.

When I retrieve the file URL, the getFileLocation function returns: https://<MOUNT_PATH>/files/<APP_ID>/folder/bar-bar.jpg. However, this URL causes the parse-server to return an error: HTTP 403 - {"error":"unauthorized"}

The correct URL that should be returned is: https://<MOUNT_PATH>/files/<APP_ID>/folder%2Fbar-bar.jpg

I would like to know how you would like to solve this problem, since the response may behave differently when directAccess is true or false.

Note: I encountered this problem while doing some tests with my PR #117.

[SUGGESTION] Organize uploaded files in different folders

Hey all,

This is not an issue but a suggestion. If people seem to like it, I will implement it and PR.

I have the plugin installed and working perfectly. And the baseUrl set so my files get served through AWS CloudFront instead of through S3 directly (speed is crucial in my case).

However, all the files are getting uploaded to the same (root) directory. For example:

my-s3-bucket
 - file1.png
 - file2.jpeg
 - file3.pdf

I think it would be much better if everything was saved in its own folder, based on which Parse Class saved it. For example, assuming a class Car and a class House:

my-s3-bucket
 - Car <-- folder
   - car1.png
   - car2.jpeg
   - specs.pdf
- house <-- folder
   - house1.png
   - house2.png
   - outline.pdf

It could be broken even further based on which column saved them. For example, assuming the class Car has a column TopView and SideView:

my-s3-bucket
 - Car <-- folder
   - TopView <-- subfolder
       - car1.png
       - car2.jpeg
       - specs.pdf
   - SideView <-- subfolder
       - car1.png
       - car2.jpeg
       - specs.pdf

I personally think it will make files in the bucket much easier to navigate, especially when we have hundreds or thousands of them.

Let me know what you think!
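Something close to this can already be approximated with the adapter's generateKey option; a sketch (grouping here is by file extension, since the adapter itself does not know which Parse class or column a file belongs to):

const S3Adapter = require('@parse/s3-files-adapter');

// Route uploads into "folders" (key prefixes) based on the file extension.
// Grouping by class/column would require encoding that into the file name
// (e.g. in a beforeSaveFile trigger) before the adapter sees it.
const s3Adapter = new S3Adapter({
  bucket: 'my-s3-bucket',
  generateKey: (filename) => {
    const ext = filename.split('.').pop().toLowerCase();
    const folder = ['png', 'jpg', 'jpeg', 'gif'].includes(ext) ? 'images' : 'documents';
    return `${folder}/${filename}`;
  }
});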

commit bb933cc breaks adapter for me

git bisect shows that this commit breaks the adapter for me: bb933cc

@dpoetzsch not sure what the commit is supposed to do or why my config would break it.

Here's my config for the adapter...

s3AdapterOptions: {
    bucket: 'mybucket',
    bucketPrefix: 'path/',
    directAccess: true,
    baseUrl: 'https://my.cloudfront.dist',
    globalCacheControl: 'public, max-age=31536000',
  }),
{ XMLParserError: Unexpected close tag
Line: 3
Column: 7
Char: >
    at error (/Users/arthur/code/parse-server-s3-adapter/node_modules/sax/lib/sax.js:666:10)
    at strictFail (/Users/arthur/code/parse-server-s3-adapter/node_modules/sax/lib/sax.js:692:7)
    at closeTag (/Users/arthur/code/parse-server-s3-adapter/node_modules/sax/lib/sax.js:885:9)
    at Object.write (/Users/arthur/code/parse-server-s3-adapter/node_modules/sax/lib/sax.js:1444:13)
    at Parser.exports.Parser.Parser.parseString (/Users/arthur/code/parse-server-s3-adapter/node_modules/xml2js/lib/xml2js.js:502:31)
    at Parser.bind [as parseString] (/Users/arthur/code/parse-server-s3-adapter/node_modules/xml2js/lib/xml2js.js:7:59)
    at NodeXmlParser.parse (/Users/arthur/code/parse-server-s3-adapter/node_modules/aws-sdk/lib/xml/node_parser.js:30:10)
    at Request.extractError (/Users/arthur/code/parse-server-s3-adapter/node_modules/aws-sdk/lib/services/s3.js:524:39)
    at Request.callListeners (/Users/arthur/code/parse-server-s3-adapter/node_modules/aws-sdk/lib/sequential_executor.js:105:20)
    at Request.emit (/Users/arthur/code/parse-server-s3-adapter/node_modules/aws-sdk/lib/sequential_executor.js:77:10)
  message: 'Unexpected close tag\nLine: 3\nColumn: 7\nChar: >',
  code: 'XMLParserError',
  retryable: true,
  time: 2016-12-06T19:06:31.241Z,
  originalError: 
   { message: 'Unexpected close tag\nLine: 3\nColumn: 7\nChar: >',
     code: 'XMLParserError',
     retryable: true,
     time: 2016-12-06T19:06:31.239Z },
  statusCode: 403 }

S3 filename generated by ``generateKey`` option is ignored by Parse Server

If you use the generateKey to customize a filename before uploading a file to S3, the newly generated filename is then ignored and not properly propagated back to the Parse Server to be stored in the fields referencing that file.

The only way to make the generateKey option work is to turn on the Parse Server preserveFileName option, which leaves all the logic of generating proper file names on the client.

See parse-community/parse-server#6518

The proposal would be to modify validateFilename in file adapters to not return an error but rather a modified filename. That can in turn reuse the generateKey option.

Unable to upload files - Access denied

My setup:

const api = new ParseServer({
    ...
    filesAdapter: {
        module: "@parse/s3-files-adapter",
        options: {
            bucket: process.env.S3_BUCKET,
            region: process.env.S3_REGION,
            generateKey: null
        }
    }
})

Using heroku env:

S3_BUCKET=<redacted>
S3_REGION=<redacted>
S3_ACCESS_KEY=<redacted>
S3_SECRET_KEY=<redacted>

My policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::<bucket-name>/*"
        }
    ]
}
  1. I'm seeing Passing AWS credentials to this adapter is now DEPRECATED at server start, though I thought it was OK to use the heroku config vars?

  2. I keep getting access denied when trying to upload. I followed all the steps here to configure my bucket and access policy.

Parse error: Access Denied {"code":130,"stack":"Error: Access Denied\n    at createHandler (<path-to-parse>/parse-server/lib/Routers/FilesRouter.js:202:12)\n    at processTicksAndRejections (internal/process/task_queues.js:97:5)"}

I've been combing through all the docs on this repo, through all the closed issues, and nothing seems to work. What am I missing here?

Use AWS SDK & CLI standard configuration

Currently the adapter requires that a key and secret be provided either by an option or a env var.

The AWS SDK & CLI can be configured using a standard protocol for obtaining credentials (http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#config-settings-and-precedence).

Using the default protocol makes it easy to securely handle AWS credentials and eases configuration overhead. Most importantly, using the standard provider would allow the s3 adapter to follow the AWS recommendation (http://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#use-roles-with-ec2) to use ec2 roles to provide credentials.

The adapter both requires explicitly setting credentials and uses non-standard env var names.

I'm proposing to make the AWS security credentials optional instead of required and changing the read me to indicate that the adapter will attempt to pickup credentials from the standard locations if not explicitly provided.

In the event that a connection to s3 is attempted without credentials being found, the aws-sdk will provide a sane error like:

message: 'Could not load credentials from any providers',
     code: 'CredentialsError',
     time: 2016-07-25T15:29:07.566Z,

I'm cooking up the change for this.
