animir / node-rate-limiter-flexible

Atomic counters and rate limiting tools. Limit resource access at any scale.

License: ISC License

JavaScript 100.00%
security rate limit ratelimiter bruteforce throttle koa express hapi mysql nestjs rate-limiting dynamodb postgresql redis prisma

node-rate-limiter-flexible's Introduction


node-rate-limiter-flexible

rate-limiter-flexible counts and limits the number of actions by key and protects from DDoS and brute force attacks at any scale.

It works with Redis, Prisma, DynamoDB, process Memory, Cluster or PM2, Memcached, MongoDB, MySQL, PostgreSQL.

Memory limiter also works in browser.

Atomic increments. All operations, in memory or in a distributed environment, use atomic increments to prevent race conditions.

Fast. An average request takes 0.7 ms in Cluster mode and 2.5 ms in a distributed application. See benchmarks.

Flexible. Combine limiters, block a key for some duration, delay actions, manage failover with insurance options, configure smart key blocking in memory, and more.

Ready for growth. It provides a unified API for all limiters, so it is ready whenever your application grows. Prepare your limiters in minutes.

Friendly. No matter which node package you prefer: redis or ioredis, sequelize/typeorm or knex, memcached, native driver or mongoose. It works with all of them.

In-memory blocks. Avoid extra requests to the store with inMemoryBlockOnConsumed.

Allow traffic bursts with BurstyRateLimiter.

Deno compatible. See this example

It uses a fixed window, as it is much faster than a rolling window. See comparative benchmarks with other libraries here
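To make the fixed-window idea concrete, here is a minimal sketch in plain JavaScript. This is NOT the library's implementation (which adds atomicity, TTLs and distributed stores); the `FixedWindow` class and its parameter names are invented here for illustration:

```javascript
// Minimal fixed-window counter: all hits within the same window share one
// count, and the count resets when a new window starts.
class FixedWindow {
  constructor(points, durationMs) {
    this.points = points;         // max hits allowed per window
    this.durationMs = durationMs; // window length in milliseconds
    this.hits = new Map();        // key -> { count, windowStart }
  }

  consume(key, now = Date.now()) {
    let entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.durationMs) {
      // A new window begins: reset the count for this key.
      entry = { count: 0, windowStart: now };
      this.hits.set(key, entry);
    }
    entry.count += 1;
    return entry.count <= this.points; // true = allowed, false = limited
  }
}
```

The trade-off versus a rolling window: a fixed window needs only one counter per key (a single atomic increment per request), at the cost of allowing up to 2x `points` around a window boundary.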

Installation

npm i --save rate-limiter-flexible

yarn add rate-limiter-flexible

Import

// CommonJS
const { RateLimiterMemory } = require("rate-limiter-flexible");

// or

// ECMAScript modules
import { RateLimiterMemory } from "rate-limiter-flexible";
// or
import RateLimiterMemory from "rate-limiter-flexible/lib/RateLimiterMemory.js";

Basic Example

Points can be consumed by IP address, user ID, authorisation token, API route or any other string.

const opts = {
  points: 6, // 6 points
  duration: 1, // Per second
};

const rateLimiter = new RateLimiterMemory(opts);

rateLimiter.consume(remoteAddress, 2) // consume 2 points
    .then((rateLimiterRes) => {
      // 2 points consumed
    })
    .catch((rateLimiterRes) => {
      // Not enough points to consume
    });

RateLimiterRes object

Both Promise resolve and reject return an object of the RateLimiterRes class, unless there is an error. Object attributes:

RateLimiterRes = {
    msBeforeNext: 250, // Number of milliseconds before next action can be done
    remainingPoints: 0, // Number of remaining points in current duration 
    consumedPoints: 5, // Number of consumed points in current duration 
    isFirstInDuration: false, // action is first in current duration 
}

You may want to set the following HTTP headers on the response:

const headers = {
  "Retry-After": rateLimiterRes.msBeforeNext / 1000,
  "X-RateLimit-Limit": opts.points,
  "X-RateLimit-Remaining": rateLimiterRes.remainingPoints,
  "X-RateLimit-Reset": new Date(Date.now() + rateLimiterRes.msBeforeNext)
}
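For example, a small helper can translate a RateLimiterRes-shaped object into those headers on any Express-style response (`setRateLimitHeaders` is a name invented here, not a library export; only `res.set(name, value)` is assumed of the response object):

```javascript
// Hypothetical helper: maps a RateLimiterRes-shaped object onto conventional
// rate-limit response headers. Works with anything exposing res.set(k, v).
function setRateLimitHeaders(res, rateLimiterRes, limit) {
  // Retry-After must be an integer number of seconds, so round up.
  res.set('Retry-After', String(Math.ceil(rateLimiterRes.msBeforeNext / 1000)));
  res.set('X-RateLimit-Limit', String(limit));
  res.set('X-RateLimit-Remaining', String(rateLimiterRes.remainingPoints));
  res.set('X-RateLimit-Reset',
    new Date(Date.now() + rateLimiterRes.msBeforeNext).toUTCString());
}
```

This keeps header formatting in one place whether the limiter resolved (remaining points left) or rejected (send the headers alongside a 429).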


Full documentation is on Wiki

Middlewares, plugins and other packages

Some copy/paste examples on Wiki:

Migration from other packages

  • express-brute Bonus: race conditions fixed, prod deps removed
  • limiter Bonus: multi-server support, respects queue order, native promises

Docs and Examples

Changelog

See releases for detailed changelog.

Basic Options

  • points

    Default: 4

    Maximum number of points that can be consumed over duration

  • duration

    Default: 1

    Number of seconds before consumed points are reset.

    Points never reset if duration is set to 0.

  • storeClient

    Required for store limiters

    Must be a redis, ioredis, memcached, mongodb, pg, mysql2, or mysql client, or any other related pool or connection.

Other options on Wiki:

Smooth out traffic peaks:

Specific:

API

Read detailed description on Wiki.

Benchmark

Average latency while testing a pure Node.js endpoint in a cluster of 4 workers, with everything set up on one server.

1000 concurrent clients with a maximum of 2000 requests per second over 30 seconds.

1. Memory     0.34 ms
2. Cluster    0.69 ms
3. Redis      2.45 ms
4. Memcached  3.89 ms
5. Mongo      4.75 ms

500 concurrent clients with a maximum of 1000 requests per second over 30 seconds

6. PostgreSQL 7.48 ms (with connection pool max 100)
7. MySQL     14.59 ms (with connection pool 100)

Note: you can speed up limiters with the inMemoryBlockOnConsumed option.

Contribution

Appreciated, feel free!

Make sure you've run npm run eslint before creating a PR; all errors must be fixed.

You can try to run npm run eslint-fix to fix some issues.

Any new limiter with a store must extend RateLimiterStoreAbstract and implement 4 methods:

  • _getRateLimiterRes parses raw data from store to RateLimiterRes object.

  • _upsert may be atomic or non-atomic upsert (increment). It inserts or updates value by key and returns raw data. If it doesn't make atomic upsert (increment), the class should be suffixed with NonAtomic, e.g. RateLimiterRedisNonAtomic.

    It must support forceExpire mode to overwrite key expiration time.

  • _get returns raw data by key or null if there is no key.

  • _delete deletes all key-related data and returns true if deleted, or false if the key is not found.

All other methods depend on the store. See RateLimiterRedis or RateLimiterPostgres for examples.

Note: all changes should be covered by tests.
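As a rough, hypothetical sketch of that four-method contract (a plain object with a Map standing in for the real store; the argument shapes are simplified and the hardcoded limit of 5 is an assumption here, so check RateLimiterStoreAbstract for the actual signatures):

```javascript
// Skeleton of a custom store limiter, showing only the four required methods.
// A Map stands in for the real store, so this is NOT distributed-safe -- it
// only illustrates the contract described above.
const store = new Map(); // key -> { points, expiresAt }

const customStore = {
  // Parse raw store data into a RateLimiterRes-shaped object.
  _getRateLimiterRes(key, changedPoints, raw) {
    return {
      consumedPoints: raw.points,
      remainingPoints: Math.max(0, 5 - raw.points), // assumes a limit of 5
      msBeforeNext: Math.max(0, raw.expiresAt - Date.now()),
      isFirstInDuration: raw.points === changedPoints,
    };
  },
  // Insert or increment; forceExpire overwrites the key's expiration time.
  _upsert(key, points, msDuration, forceExpire) {
    const existing = store.get(key);
    if (!existing || forceExpire || Date.now() >= existing.expiresAt) {
      const raw = { points, expiresAt: Date.now() + msDuration };
      store.set(key, raw);
      return raw;
    }
    existing.points += points; // non-atomic here: a real store must do better
    return existing;
  },
  // Return raw data by key, or null if there is no key.
  _get(key) {
    return store.get(key) || null;
  },
  // Delete all key-related data; true if something was removed.
  _delete(key) {
    return store.delete(key);
  },
};
```

Note the `_upsert` above increments non-atomically; per the rules above, a real limiter built this way would have to be suffixed with NonAtomic.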

node-rate-limiter-flexible's People

Contributors

adrianvlupu, animir, animirr, backflip, cha0s, daniel-97, dmozgovoi, drye, evan361425, matomesc, michaeldzjap, minikraken-team, mkxml, mriedem, mroderick, o-ali, omers-frontegg, ondronr, paulsc54, pavittarx, raelcun, rijkvanzanten, roggervalf, seromenho, shlavik, svsool, tero, vdiez, vinibeloni, waldyrious


node-rate-limiter-flexible's Issues

What is the right way to get the current number of points by key?

Hello,

What is the right way to get the current number of points by key?
In my opinion, it would be great to have a special getter for that.
I want to send the remaining points via a special HTTP header.
At the moment I'm using RateLimiterMemory.

Good job and thank you.

Configurable IP Address

Hello,

So it seems in the library code that we key only off req.ip. There are some cases where that value can be spoofed if you are running Express behind a proxy via the X-Forwarded-For header if trust-proxy is set to true for the app. I was wondering if it is possible for a consumer of this library to pass an ip value in, and fall back to req.ip if nothing is passed in.

LOC: https://github.com/animir/node-rate-limiter-flexible/blob/master/lib/ExpressBruteFlexible.js#L148

RateLimiterRes object attributes look different from API documentation

Hi,

So when I print out the object returned from rateLimiterRedis.consume I get:

{
"_remainingPoints": 0,
"_msBeforeNext": 290093,
"_consumedPoints": 11,
"_isFirstInDuration": false
}

i.e. the attributes have a leading underscore. Is this right? Because when I checked the API documentation,
the attributes do not have an underscore.

From the API documentation:
RateLimiterRes object
Both Promise resolve and reject returns object of RateLimiterRes class if there is no any error. Object attributes:

RateLimiterRes = {
msBeforeNext: 250, // Number of milliseconds before next action can be done
remainingPoints: 0, // Number of remaining points in current duration
consumedPoints: 5, // Number of consumed points in current duration
isFirstInDuration: false, // action is first in current duration
}

error: member access into incomplete type 'RSA' (aka 'rsa_st')

When installing this module I have a node gyp issue

$ npm install --save rate-limiter-flexible

> [email protected] install /node_modules/ursa
> node-gyp rebuild

  CXX(target) Release/obj.target/ursaNative/src/ursaNative.o
../src/ursaNative.cc:389:13: error: member access into incomplete type 'RSA' (aka 'rsa_st')
    obj->rsa->n = BN_bin2bn(data_n, n_length, NULL);
            ^
/Users/loretoparisi/.node-gyp/10.1.0/include/node/openssl/ossl_typ.h:110:16: note: forward declaration of 'rsa_st'
typedef struct rsa_st RSA;
               ^
... 15 more identical "member access into incomplete type 'RSA'" errors for other RSA struct fields (e, d, n, p, q, dmp1, dmq1, iqmp) ...
16 errors generated.
make: *** [Release/obj.target/ursaNative/src/ursaNative.o] Error 1

What is this ursaNative dependency?

Thank you.

Memory limiter: dump and restore functionality

I was wondering if we could add dump/restore functionality to this module.

Due to the nature of the Node.js process, if it goes down we lose everything stored in the in-memory db.

What are your thoughts?

Init rate-limiter with mongoose connection coming from external file

Hi!

I'm trying to use this package to limit requests number but I'm having some troubles with Mongoose connections.

The point is that I init the DB connection through a file called db.js which is then imported in my app.js.

The fact is that the DB connection is available only after the app has been initialized but the limiter-middleware needs the connection during the app initialization.

As you can see in the following files, I've stored the DB connection in a variable called _db which is readable with the getDb() method.

But if I call the getDb() method inside the rate-limiter.js file, I get an error because the DB connection is not initialized yet.

I don't want to init a second connection inside ip-limiter.js file.

Do you have any suggestions on how I can achieve my goal? :)

Any help would be very appreciated

--

"mongoose": "^5.5.11",
"rate-limiter-flexible": "^1.0.2"

--

app.js

// Import core packages
const path = require('path');

// Import third-party packages
const express = require('express');

// Import custom packages
const CONFIG = require('./config/config');
const initDb = require('./db').initDb;
const getDb = require('./db').getDb;

// Import middlewares and routes
// ... import some middlewares and routes
const routeIPLimiterMiddleware = require('./middlewares/limiters/ip-limiter');


// Init & config express
const app = express();

// BEGIN - Middlewares and routes
// ... some middlewares and routes
app.use(routeIPLimiterMiddleware); // Limit request number per second per IP
// ... some other middlewares and routes
// END - Middlewares and routes

// Connect to db
initDb(err => {
  if (err) {
    console.warn('initDb error', err);
  }
  app.listen(CONFIG.PORT, err => {
    if (err) {
      throw err;
    }
    console.log(`API Up and running on port ${CONFIG.PORT}`);
  });
});

db.js

const assert = require('assert');
const mongoose = require('mongoose');
const CONFIG = require('./config/config');

let _db;

// Get current db connection
module.exports.getDb = () => {
  assert.ok(_db, 'Db has not been initialized. Please call init first.');
  return _db;
}

// Init database connection
module.exports.initDb = (callback) => {
  if (_db) {
    return callback(null, _db);
  }

  mongoose.connect(CONFIG.DATABASE.URI, CONFIG.DATABASE.OPTIONS)
    .then(db => {
      _db = db;
      return callback(null, _db);
    })
    .catch(err => {
      return callback(err);
    });
}

ip-limiter.js

const getDb = require('../../db').getDb;
const { RateLimiterMongo  } = require('rate-limiter-flexible');
const CONFIG = require('../../config/config');

const mongoConn = getDb();

const ipRateLimiterOptions = {
  storeClient: mongoConn,
  points: 1, // Maximum number of points can be consumed over duration
  duration: 2, // Number of seconds before consumed points are reset
  dbName: CONFIG.DATABASE.NAME,
  keyPrefix: 'rl-100'
}

const rateLimiter = new RateLimiterMongo(ipRateLimiterOptions);

/**
 * Limit requests number per second per IP
 */
module.exports = (req, res, next) => {
  rateLimiter.consume(req.ip)
    .then(() => {
      next();
    })
    .catch(() => {
      res.status(429).json({
        status: false,
        message: 'Too many requests'
      });
    });
}

Automatically revert back to RateLimiterRedis after redis connection recovers

Is there a way to automatically revert back to RateLimiterRedis after redis connection recovers from going down?

The code below will continue to use RateLimiterMemory even after the redis connection recovers.

const redis = require('redis');
const { RateLimiterRedis, RateLimiterMemory } = require('rate-limiter-flexible');

const redisClient = redis.createClient({
  enable_offline_queue: false,
  retry_strategy: function (options) {
    if (options.attempt > 3) { // Try to reconnect 3 times
      // This error is caught by limiter and then insurance limiter is used in this case
      return new Error('Retry time exhausted');
    }

    return 100; // Not longer than 100 * 3 = 300 ms
  }
});

const rateLimiterMemory = new RateLimiterMemory({
  points: 1,
  duration: 1
});

const rateLimiter = new RateLimiterRedis({
  redis: redisClient,
  points: 5,
  duration: 1,
  insuranceLimiter: rateLimiterMemory
});

Replace use of whitelist and blacklist

Terminologies such as whitelist and blacklist have historical context, making their meaning unclear. A substitution for allowlist and denylist would bring more direct meaning. The same goes for isWhite and isBlack, which can be replaced by isAllowed and isDenied.

I am aware that this would represent a breaking change, but I still consider it valid. Meanwhile, it is possible to correct these terminologies internally in the library.

Add ability to define points by key dynamically

It would be great to be able to allow further customization of points allowed instead of a sweeping value across all keys. Ex: Allow IP 1 to have 60 points over 60 seconds, but allow IP 2 to only have 30 points over 60 seconds, and allow IP 3 to have a dynamic value (like 2x IP1, or some other call back to have it externally determined), with some sort of default points if not specified. It would also be beneficial to store that in something other than memcache (redis, for example) which would allow users of the library to update/specify the points for a key after it has been deployed so it can be updated dynamically as the situations arise just by updating the distributed cache.

I imagine this would slow down each request since it would probably be 2 calls to redis, but as an opt-in capability I would be happy to accept that.
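The per-key allowance described above can be sketched without the library at all (a hedged illustration; `makePerKeyLimiter` and its parameters are invented here, and a real version would read the per-key limits from Redis rather than a plain object):

```javascript
// Hypothetical wrapper for per-key limits: look up the allowance for a key
// (here from a plain object standing in for a distributed cache) and check it
// against a shared counter map. Illustration only -- not a library API.
function makePerKeyLimiter(pointsByKey, defaultPoints) {
  const consumed = new Map(); // key -> points used so far
  return function consume(key) {
    const limit = pointsByKey[key] !== undefined ? pointsByKey[key] : defaultPoints;
    const used = (consumed.get(key) || 0) + 1;
    consumed.set(key, used);
    return used <= limit; // true = allowed under this key's own limit
  };
}
```

As the issue notes, fetching the limit from an external store would add a second round trip per request, which is why this would make sense as an opt-in capability.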

Feature: BlackAndWhiteListWrapper

@OsoianMarcel suggested to implement black/white lists options here #4

We decided during discussion that black/white key checks should be implemented as a separate wrapper for the sake of clarity.

Here is an example:

const rl = new BlackAndWhiteListWrapper({
  limiter: anyRateLimiter,
  blackList: [],
  whiteList: [],
  isBlack: () => {},
  isWhite: () => {},
});

Burst requests using leaky bucket size

Is there any way to implement a leaky bucket algorithm with bucket size in order to allow burst requests?

For example a bucket of size 10 with 2 req/s leak rate allows up to 10 burst requests which fill the bucket and then only 2 req/s while the bucket remains full. If there are no requests, then the bucket empties at the 2 req/s leak rate, allowing another 10 burst requests after 5 seconds.

This is the implementation of the bottleneck library: https://github.com/SGrondin/bottleneck#increase-interval
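The leak-rate behaviour described above can be sketched independently of either library (a plain illustration; the `LeakyBucket` class and its parameter names are invented here, not part of rate-limiter-flexible):

```javascript
// Minimal leaky-bucket sketch matching the description above: a bucket of
// `capacity` fills by 1 per request and drains at `leakPerSec`.
class LeakyBucket {
  constructor(capacity, leakPerSec) {
    this.capacity = capacity;   // burst size
    this.leakPerSec = leakPerSec; // sustained rate
    this.level = 0;             // current fill level
    this.lastLeak = 0;          // ms timestamp of the last drain
  }

  tryConsume(nowMs) {
    // Drain the bucket for the time elapsed since the last check.
    const leaked = ((nowMs - this.lastLeak) / 1000) * this.leakPerSec;
    this.level = Math.max(0, this.level - leaked);
    this.lastLeak = nowMs;
    if (this.level + 1 > this.capacity) return false; // bucket full: reject
    this.level += 1;
    return true;
  }
}
```

With capacity 10 and a 2 req/s leak rate, a burst of 10 is absorbed immediately, and then sustained traffic is limited to the leak rate, exactly as the issue describes.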

msBeforeNext value that comes from .get() is misleading

I don't know if it's intentional, but the msBeforeNext value (at least in the MySQL version) only looks at the duration value, not blockDuration. This makes the "Login endpoint protection" example buggy: the value is first checked with .get(), and if there are enough consumed points the response is always 429. So the examples block requests according to the 'duration' variable, NOT the 'blockDuration' variable. I'm not perfectly familiar with the API yet, but either the check should be done with consume() instead of get(), or get() should be fixed (again, not sure if it's intentional). I think get() could return a flag indicating whether the key is blocked, or msBeforeNext should also consider blockDuration.

duration vs blockDuration

I'm not understanding the relationship between duration and blockDuration. Using the example "Login endpoint protection" in the wiki if I set my RateLimiterRedis configuration as:

const limiterConsecutiveFailsByUsernameAndIP = new RateLimiterRedis({
  storeClient: redisClient,
  keyPrefix: 'login_fail_consecutive_username_and_ip',
  points: 5,
  duration: 60 * 60 * 24 * 90, // Store number for 90 days since first fail
  blockDuration: 30
});

after 5 failed login attempts I'm locked out for 90 days: the calculated retrySec is ~7776000 and any subsequent login attempts are blocked (until I delete my redis keys). I'm purposely setting the blockDuration to 30 (i.e. 30 seconds) for testing. Once it's working as expected I'll change it to 1 hour (60 * 60) or something reasonable.

How do these 2 settings work together? For testing and understanding how this works I've set duration to 60 and blockDuration to 30, but only duration seems to matter and I reset 60 seconds after the first failed attempt. Once max points have been consumed in duration that appears to be it. How does blockDuration matter at all?
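A hedged sketch of how these two options are commonly understood to interact (plain JavaScript invented here, not the library's code, so verify the exact semantics against the Wiki): points are counted over `duration`, but once the limit is exceeded the key is blocked for `blockDuration` instead, and the counter resets when the block lifts.

```javascript
// Illustrative-only model of the duration/blockDuration relationship:
// `points` hits are allowed per `durationMs`; exceeding the limit blocks the
// key for `blockDurationMs`, after which its counter starts fresh.
function makeLimiter({ points, durationMs, blockDurationMs }) {
  const state = new Map(); // key -> { count, windowEnd, blockedUntil }
  return function consume(key, now) {
    let s = state.get(key);
    if (!s || now >= s.windowEnd) {
      s = { count: 0, windowEnd: now + durationMs, blockedUntil: 0 };
      state.set(key, s);
    }
    if (now < s.blockedUntil) return false; // still inside the block
    s.count += 1;
    if (s.count > points) {
      s.blockedUntil = now + blockDurationMs; // exceeded: short block applies
      s.windowEnd = s.blockedUntil;           // counter resets when block lifts
      return false;
    }
    return true;
  };
}
```

In this model, duration controls how long failures accumulate toward the limit, while blockDuration controls how long a key stays rejected once the limit is hit.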

Cache data being stored in multiple DB in redis

I've observed that the rate limiter caches consumed/remaining points across multiple Redis databases. There are 16 DBs available in Redis, and data is cached into the currently selected DB when it is upserted to Redis. I would suggest adding an option to specify which DB index should be used in Redis to maintain the cache.

Implement RateLimiterFirestore.

Any thoughts on implementing the storage class for Firestore / Firebase?

This would be great for the firebase cloud functions :) since firestore is part of the package.

Add JSON method signatures to RateLimiterRes type

Would it be possible to add the toString and toJSON method signatures to the type declaration for RateLimiterRes? At the moment TypeScript complains about these two properties not being available on an instance of RateLimiterRes.

Project looks very interesting, but why use limiting at the app instead of reverse proxy like nginx?

Project looks great, thanks for sharing. What advantages and disadvantages would there be to rate limiting at the app level rather than before the app, within a reverse proxy? My view is that the app is still tied up processing the rate limiting, and I would prefer to keep this functionality in front of the actual app servers. Or would this package be used within a Node.js reverse proxy in front of the application?

Invalid SQL Syntax

Hey there,

I recently started using your package and it works pretty well. However, after the server has been up for about 5–10 mins, it crashes due to invalid SQL syntax. It should be noted I'm using Sequelize.

Executing (default): DELETE FROM dashboard.rateLimiter WHERE expire < ?

My rateLimiter middleware function is shown below;

import { RateLimiterMySQL } from 'rate-limiter-flexible';

import { sequelize } from '../models';

const rateLimiter = new RateLimiterMySQL(
  {
    storeClient: sequelize,
    tableName: 'rateLimiter',
    points: 10,
    duration: 2,
  },
  err => {
    if (err) throw new Error(`Unable to connect to start Rate Limiter \n ${err}`);
  }
);

export default (req, res, next) => {
  rateLimiter
    .consume(req.connection.remoteAddress, 2)
    .then(() => {
      next();
    })
    .catch(() => {
      res.status(429).send('You are sending too many requests.');
    });
};

Hopefully you can provide a solution. :)

Can I use one Redis for multiple rate-limiters?

I have one instance of Redis, and multiple APIs on different endpoints. Can I use the same Redis instance for different backends?

This is just a starting point, don't pretend that it's a production thing

Express middleware: filter out static routes

I'm serving both APIs and static files with Express middleware. I'm using RLWrapperBlackAndWhite to filter out localhost / 127.0.0.1 calls:

const rateLimiterMemory = new RateLimiterMemory({
  points: 11,
  duration: 1,
  execEvenly: false
});
const limiterWrapped = new RLWrapperBlackAndWhite({
  limiter: rateLimiterMemory,
  whiteList: ['127.0.0.1'],
  runActionAnyway: false,
});
const rateLimiterMiddleware = (req, res, next) => {
  limiterWrapped.consume(req.connection.remoteAddress)
    .then(() => {
      next();
    })
    .catch(_ => {
      var errorMessage = APIHelper.createErrorMessage(429, '', '', 'Too Many Requests');
      res.status(429).send(errorMessage);
    });
};
self.app.use(rateLimiterMiddleware);

I'm serving static files as well, and I often get the error

{
	message: {
		header: {
			status_code: 429
		},
		body: {
			error: {
				code: 429,
				status: 429,
				hint: "",
				description: "",
				message: "Too Many Requests"
			}
		}
	}
}

back from the limiter on these static routes

self.app.use(express.static(path.join(__dirname, 'public')));
self._options.staticRoutes.forEach(route => {
  self.app.use(BASEPATH + route, express.static(path.join(__dirname, 'public')));
});

How can I filter out the static routes from the rate limiter middleware?
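One common approach (a sketch, not library code: `skipStatic` and its parameters are invented here) is to wrap the limiter middleware and bypass it for paths that serve static assets:

```javascript
// Hypothetical wrapper: call next() directly for static-asset paths so the
// limiter middleware never consumes points for them.
function skipStatic(limiterMiddleware, staticPrefixes) {
  return function (req, res, next) {
    if (staticPrefixes.some((p) => req.path.startsWith(p))) return next();
    return limiterMiddleware(req, res, next);
  };
}
```

Mounting `express.static` before the limiter middleware achieves the same effect, since static responses then never reach the limiter.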

Maximum call stack size exceeded on unavailable memcached

When memcached is unavailable, I get "Maximum call stack size exceeded" Error on line https://github.com/animir/node-rate-limiter-flexible/blob/master/lib/RateLimiterMemcache.js#L31
called from https://github.com/animir/node-rate-limiter-flexible/blob/master/lib/RateLimiterMemcache.js#L59 .

attemptNumber is never increased, which results in a never-ending loop. I believe attemptNumber should be increased on this line https://github.com/animir/node-rate-limiter-flexible/blob/master/lib/RateLimiterMemcache.js#L57

Reproduction code, memcached must be unavailable

'use strict';

const {RateLimiterMemcache} = require('rate-limiter-flexible');
const Memcached = require('memcached');

const memcacheRateLimiterOpts = {
    points: 100, // 100 same events
    duration: 3600, // per 1 hour
    storeClient: new Memcached('127.0.0.1:12345', {timeout: 500, failures: 2, retries: 2})
};

const rateLimiter = new RateLimiterMemcache(memcacheRateLimiterOpts);

async function foobar() {
    await rateLimiter.consume('aaa');
}

foobar();

Possibility to change the key after you have initialised the rate limiter

To reuse the same code for different endpoints, it would be useful to be able to change the key after you have initialised the rate limiter.

Current code:

const {RateLimiterPostgres} = require('rate-limiter-flexible');

const opts = {
  points: 6,
  duration: 1,
  storeClient: sequelize,
  tableName: 'RateLimit',
  keyPrefix: 'UsernameLookup'
};

const ready = (err) => {
  if (err) {
    console.error(err);
  } else {
    console.log('Created/Found table needed for rate limiting');
  }
};

const rateLimiter = new RateLimiterPostgres(opts, ready);

exports.rateLimiting = (req, res, next) => {
  rateLimiter.consume(req.ip, 1)
    .then((rateLimiterRes) => {
      next();
    })
    .catch(() => {
      res.status(429).send('Too Many Requests');
    });
};

It would be useful to change the key from within the rateLimiting function, so the same export can be used for different endpoints if required.

Example:

const {RateLimiterPostgres} = require('rate-limiter-flexible');

const opts = {
  points: 6,
  duration: 1,
  storeClient: sequelize,
  tableName: 'RateLimit',
};

const ready = (err) => {
  if (err) {
    console.error(err);
  } else {
    console.log('Created/Found table needed for rate limiting');
  }
};

const rateLimiter = new RateLimiterPostgres(opts, ready);

exports.rateLimiting = (req, res, next) => {
  if (req.path.indexOf('username') > -1) {
    rateLimiter.key = 'UsernameLookup';
  } else if (req.path.indexOf('passwordReset') > -1) {
    rateLimiter.key = 'PasswordReset';
  }

  rateLimiter.consume(req.ip, 1)
    .then((rateLimiterRes) => {
      next();
    })
    .catch(() => {
      res.status(429).send('Too Many Requests');
    });
};

Does usage data persist on store limiters, when delete rate limiter instance and create new one with the same `keyPrefix`?

If I create a new instance of the rate limiter, will it remember past usage info (specifically, if I pass keyPrefix), or will it begin fresh and think there has been no usage so far? E.g. say I have an instance that consumes 50 points, then it gets deleted. If I create a new instance before the duration expires, will it remember that 50 points were recently consumed?

Also, does each instance create a separate connection to redis?

Basically, my use-case requires different rate-limiting configurations for each client, so if I were to use this service, I would need a different rate limiter instance for each client. This is okay if I can create/delete the instances and the data persists in redis. But if the instances cannot be deleted, I worry that each instance will have its own redis connection (which can cause issues).

Wrapping a RateLimiterRedis with many RateLimiterQueues

We have a situation where we want to rate limit actions on individual record IDs. A rate limiter queue on top of a Redis limiter handles this job for us, but we need it to consume per record ID; however, the current RateLimiterQueue calls consume with a hardcoded limiter name.

Any chance to make _key parametric in the queue constructor, or to add a removeTokens overload with a _key parameter, so that we can dynamically use many granular queues over a single limiter instance?

Cannot read property 'collection' of undefined with RateLimiterMongo

I'm implementing rate-limit functionality with RateLimiterMongo, and the _initCollection function throws TypeError: Cannot read property 'collection' of undefined. Changing this.client.db to this.client on lines 80 and 81 fixes the issue.

This is with MongoDB version 3.4.10.

Increase blockDuration on next failed attempt

Hello, great library btw. I just want to know whether it is possible to increase the blockDuration dynamically on successive failures?

Given this config

const loginLimiter = new RateLimiterRedis({
  redis: redisClient,
  keyPrefix: 'login:',
  points: 5, // 5 attempts
  duration: 60 * 60 * 24, // within a day
  blockDuration: 60, // 1 min
});

I want to set the next blockDuration to 5 minutes if all points are consumed again after the previous block has expired.

Just to illustrate:

5 attempts > fail > wait for 1 min > 1 min has passed > 5 attempts > fail > wait for 5mins

Is it possible for this library?

Thanks for the help.
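One way this could be built on top of the library (hedged sketch, not built-in behavior) is to keep a second counter of how many times a key has been blocked, and call `limiter.block(key, secs)` with an escalating duration on each new block. The schedule and helper names below are illustrative assumptions:

```javascript
// Hedged sketch: escalating block durations. The schedule is an assumption.
const BLOCK_SCHEDULE_SEC = [60, 300, 900]; // 1 min, then 5 min, then 15 min

// Pick the next block duration from how many times the key was blocked before,
// saturating at the last entry of the schedule.
function nextBlockDuration(timesBlockedBefore) {
  const i = Math.min(timesBlockedBefore, BLOCK_SCHEDULE_SEC.length - 1);
  return BLOCK_SCHEDULE_SEC[i];
}

// Usage (assumed): a second long-duration limiter (blockCountLimiter) counts
// blocks per key; on each rejected consume, block for longer:
// try {
//   await loginLimiter.consume(key);
// } catch (rejRes) {
//   const blocks = await blockCountLimiter.consume(key); // blocks so far
//   await loginLimiter.block(key, nextBlockDuration(blocks.consumedPoints - 1));
//   throw rejRes;
// }
```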

Implement SQLite3 limiter class

I've read the docs, and it seems you can pass through knex instances, but I didn't see if it supports SQLite3 - could you clarify please?

Repository changelog

It's difficult to use this library when there are frequent updates without a changelog telling us about new features, breaking changes, etc.

Could you please consider this for future releases?

RateLimiterUnion not exported

Hey,

I tried to use the RateLimiterUnion class in my app but it is not exported. I am wondering if/when I can expect that to be exported?

Share model with mongoose

Is there a way to share the model with mongoose, so I can create a page where I can list all current blocks (and easily remove them if needed)? Or is there another way to achieve this?

MySQL databases with a dash in the name cause a ER_PARSE_ERROR failure

I use suffixes like -dev, -test etc for my databases. When I try to initialise this library with a dbName of website-dev etc I get this error:

{ Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for 
the right syntax to use near '-dev' at line 1
    at Packet.asError (/Users/matt/dev/myapp/node_modules/mysql2/lib/packets/packet.js:714:13)
    at Query.Command.execute (/Users/matt/dev/myapp/node_modules/mysql2/lib/commands/command.js:28:22)
    at Connection.handlePacket (/Users/matt/dev/myapp/node_modules/mysql2/lib/connection.js:513:28)
    at PacketParser.onPacket (/Users/matt/dev/myapp/node_modules/mysql2/lib/connection.js:81:16)
    at PacketParser.executeStart (/Users/matt/dev/myapp/node_modules/mysql2/lib/packet_parser.js:76:14)
    at Socket.<anonymous> (/Users/matt/dev/myapp/node_modules/mysql2/lib/connection.js:89:29)
    at Socket.emit (events.js:198:13)
    at Socket.EventEmitter.emit (domain.js:448:20)
    at addChunk (_stream_readable.js:288:12)
    at readableAddChunk (_stream_readable.js:269:11)
    at Socket.Readable.push (_stream_readable.js:224:10)
    at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17)
  code: 'ER_PARSE_ERROR',
  errno: 1064,
  sqlState: '42000'
}

I tried wrapping the database name in backticks, which seemed to work (e.g. it created the table in the website-dev database), but the middleware I created (from here) only returned failures (e.g. the 429 error) and did not insert anything into the database.

Limit using Points and duration on demand

Hi @animir Congratulations for your library! Very complete!

I was using Express Rate Limit as the rate limiter for my Express API. However, I have recently changed my app to use Nodejs' Cluster module, so I had to look for another library that supported cluster features. And I found Rate Limiter Flexible.

However, I don't know how to implement the same functionality I had: some endpoints had a different limitation configuration, but limitation/blocking was global. For example: /login has a limit of 10 requests per minute, and /users/:userId has a limit of 1000 requests per 5 minutes. If a user consumed the 10 requests in /login, he could not request /users/:userId because he had already received a 429 error status from another endpoint (/login).

It was a simple middleware like:

// Router middleware
router.post('/login', RateLimiter.limiter(10, 1 * 60), myController) // 10 req in 1 min
router.get('/users/:userId', RateLimiter.limiter(1000, 5 * 60), myController) // 1000 req in 5 min

// Rate limiter
import rateLimit from 'express-rate-limit'
export function limiter (requests, intervalSeconds) {
  return rateLimit({
    windowMs: intervalSeconds || config.defaultIntervalSeconds,
    max: requests || config.defaultMaxRequests,
    onLimitReached, // Log the error
    keyGenerator, // Generate a key from the IP
    skip: () => !config.isEnabled
  })
}

However, I don't know how to implement it with Rate Limiter Flexible, because if I create two instances (one for each endpoint) and a user consumes all points on one endpoint, he can still consume points on the second endpoint:

export function limiter (requests, intervalSeconds) {
  return function (req, res, next) {
    // Skip rate-limiter if not enabled
    if (config.isEnabled !== true) return next()

    const allowedRequests = requests || config.maxRequests
    const duration = intervalSeconds || config.intervalSeconds

    const rateLimiter = new RateLimiterCluster({
      keyPrefix: 'rate-limiter',
      points: allowedRequests,
      duration,
      timeoutMs: 3000
    })

    return rateLimiter.consume(keyGenerator(req))
      .then(()=> {
        return next()
      })
      .catch(()=> {
        return res.status(429).json({})
      })
  }
}

Is there a way to achieve the same functionality in Rate Limiter Flexible? Or do you think of a way to handle this with this library?

Thank you so much.
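One way to get close to that behavior (hedged sketch, under the assumption that the limiter supports the unified `block()` API): create each limiter once per (points, duration) pair rather than on every request, and on any rejection block the key in all cached limiters so a 429 on one route blocks the others too. `makeLimiterCache` and its callback are illustrative, not library API:

```javascript
// Hedged sketch: one limiter per (points, duration) pair, created once and
// cached, plus access to all cached limiters for "block everywhere" behavior.
function makeLimiterCache(createLimiter) {
  const cache = new Map();
  return (points, duration) => {
    const cacheKey = `${points}:${duration}`;
    if (!cache.has(cacheKey)) {
      cache.set(cacheKey, createLimiter(points, duration));
    }
    return { limiter: cache.get(cacheKey), allLimiters: () => [...cache.values()] };
  };
}

// Usage (assumed):
// const getLimiter = makeLimiterCache((points, duration) => new RateLimiterCluster({
//   keyPrefix: `rl:${points}:${duration}`, points, duration, timeoutMs: 3000 }));
// In the middleware, after a rejected consume, block the key in every limiter:
//   await Promise.all(allLimiters().map((l) => l.block(key, 60).catch(() => {})));
```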

Future v2 notes

Remove deprecated:

  1. Remove isWhite and isBlack getters/setters and options support from RLWrapperBlackAndWhite. isWhiteListed and isBlackListed should be used instead.
  2. Remove IRateLimiterResOptions interface from lib/index.d.ts. IRateLimiterRes should be used.

express middleware filter out localhost api calls

I'm running a Cluster/Worker configuration and I do internal api calls to localhost. Having configured the rate limiter as a express middleware, I get these internal api calls limited too:

const rateLimiterMiddleware = (req, res, next) => {
  rateLimiter.consume(req.connection.remoteAddress)
    .then(() => {
      next();
    })
    .catch(() => {
      const errorMessage = APIHelper.createErrorMessage(429, '', '', 'Too Many Requests');
      res.status(429).send(errorMessage);
    });
};
self.app.use(rateLimiterMiddleware);

Would it be possible to filter out the localhost / 127.0.0.1 IP address so that the middleware skips these internal API calls?
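One way this could be done without any library support (hedged sketch): short-circuit the middleware for loopback addresses before calling `consume`, so internal calls never reach the limiter. The loopback list and helper names are illustrative assumptions:

```javascript
// Hedged sketch: skip rate limiting for loopback addresses.
// '::ffff:127.0.0.1' covers IPv4-mapped IPv6 addresses Node may report.
const LOOPBACK = new Set(['127.0.0.1', '::1', '::ffff:127.0.0.1']);

function isLocalAddress(ip) {
  return LOOPBACK.has(ip);
}

// `rateLimiter` is assumed to be any limiter exposing consume().
function makeRateLimiterMiddleware(rateLimiter) {
  return (req, res, next) => {
    if (isLocalAddress(req.connection.remoteAddress)) {
      return next(); // internal API calls bypass the limiter entirely
    }
    rateLimiter.consume(req.connection.remoteAddress)
      .then(() => next())
      .catch(() => res.status(429).send('Too Many Requests'));
  };
}
```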

Login endpoint protection example bug

In the "Login endpoint protection" example in the wiki there's a bug in how the consume promises are awaited. It should be `await Promise.all(promises)` and not `await promises`.

CHANGE:

      try {
        const promises = [limiterSlowBruteByIP.consume(ipAddr)];
        if (user.exists) {
          // Count failed attempts by Username + IP only for registered users
          promises.push(limiterConsecutiveFailsByUsernameAndIP.consume(usernameIPkey));
        }

        await promises;

        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {

TO:

      try {
        const promises = [limiterSlowBruteByIP.consume(ipAddr)];
        if (user.exists) {
          // Count failed attempts by Username + IP only for registered users
          promises.push(limiterConsecutiveFailsByUsernameAndIP.consume(usernameIPkey));
        }

        await Promise.all(promises);

        res.status(400).end('email or password is wrong');
      } catch (rlRejected) {

`enableOfflineQueue` in redis client

Question

  • Why should the redis client be configured with `enableOfflineQueue: false`? When I tried without this option, the rate limiter also worked well.

RateLimiterCluster with PM2 cluster mode and Redis as store client

Hi there. First, thank you for the excellent library!

I'm having an issue when attempting to implement the RateLimiterCluster with PM2 in cluster mode.

I followed the example you wrote here:
https://github.com/animir/node-rate-limiter-flexible/wiki/PM2-cluster

However your example doesn't include a 'storeClient' prop passed to the RateLimiterCluster constructor (even though the TS definitions have IRateLimiterStoreOptions as the options interface accepted by RateLimiterCluster). So I add the storeClient prop anyway, which points to my redisClient (node-redis), and fire everything up.

It seems as if my redis store isn't being touched at all, even though the rate limiting is working. This suggests to me that even though I specified my redisClient as the storeClient, RateLimiterCluster is actually operating in-memory.

Can you tell me how to hook RateLimiterCluster to my redis store? Or perhaps it actually should be in-memory only? And if that's the case, can you change the interface required by the RateLimiterCluster constructor to exclude the storeClient prop?

Thanks very much!

RateLimiterRedis not working with ioredis

Not sure if it's due to recent changes in ioredis, but RateLimiterRedis is not working with ioredis 4.x.
The issue is in how the ioredis result format is handled in RateLimiterRedis.js:

  _get(rlKey) {
    return new Promise((resolve, reject) => {
      this.client.multi()
        .get(rlKey)
        .pttl(rlKey)
        .exec((err, res) => {
          if (err) {
            reject(err);
          } else {
            const [points] = res;
            if (points === null) {
              res = null;
            } else {
              res.unshift('GET');
            }
            resolve(res);
          }
        });
    });
  }

A typical res value returned by ioredis would look like that: [ [ null, '5' ], [ null, 275680 ] ]
Thus in the code above, points will always be null.
In addition, the res.unshift('GET') is also incorrect, since the code in the _getRateLimiterRes method expects the first element to be an array in order to be recognized as the ioredis result format:

 _getRateLimiterRes(rlKey, changedPoints, result) {
    let [resSet, consumed, resTtlMs] = result;
    // Support ioredis results format
    if (Array.isArray(resSet)) {
      [, resSet] = resSet;
      [, consumed] = consumed;
      [, resTtlMs] = resTtlMs;
    }
...

The fix is easy, but I am not sure how to submit a PR without breaking non-ioredis redis clients.
In the meantime I fixed my code using this wrapping class:

class RateLimiterIoRedis extends RateLimiterRedis {
  async _get(rlKey) {
    const res = await this.client
      .multi()
      .get(rlKey)
      .pttl(rlKey)
      .exec();

    if (res[0][1] === null) {
      return null;
    }

    return [[null, 'GET'], ...res];
  }
}

const rateLimiter = new RateLimiterIoRedis({ storeClient: ioredisClient, points: 10, duration: 1 });
