
Java rate limiting library based on token-bucket algorithm.

Home Page: https://bucket4j.com

License: Apache License 2.0


Introduction



Get dependency

Bucket4j is distributed through Maven Central:

Java 17 dependency
<!-- For java 17+ -->
<dependency>
  <groupId>com.bucket4j</groupId>
  <artifactId>bucket4j_jdk17-core</artifactId>
  <version>8.13.1</version>
</dependency>
Java 11 dependency
<!-- For java 11 -->
<dependency>
  <groupId>com.bucket4j</groupId>
  <artifactId>bucket4j_jdk11-core</artifactId>
  <version>8.13.1</version>
</dependency>

Quick start

import io.github.bucket4j.Bucket;
import java.time.Duration;

...
// bucket with capacity of 20 tokens, refilled greedily at 10 tokens per minute (one token every 6 seconds)
private static Bucket bucket = Bucket.builder()
      .addLimit(limit -> limit.capacity(20).refillGreedy(10, Duration.ofMinutes(1)))
      .build();

private void doSomethingProtected() {
   if (bucket.tryConsume(1)) {
      doSomething();    
   } else {
      throw new SomeRateLimitingException();
   }
}

More examples can be found in the documentation.
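For readers new to the algorithm, the core idea behind the library can be sketched in a few lines of plain Java. This is an illustrative toy, not Bucket4j's actual implementation; a real bucket must also deal with overflow and concurrency:

```java
// Toy greedy token bucket: NOT Bucket4j's code, just the core idea of the algorithm.
final class ToyTokenBucket {
    private final long capacity;          // max tokens the bucket can hold
    private final long refillPeriodNanos; // time needed to regenerate one token
    private long availableTokens;
    private long lastRefillNanos;

    ToyTokenBucket(long capacity, long refillPeriodNanos, long nowNanos) {
        this.capacity = capacity;
        this.refillPeriodNanos = refillPeriodNanos;
        this.availableTokens = capacity; // bucket starts full
        this.lastRefillNanos = nowNanos;
    }

    synchronized boolean tryConsume(long tokens, long nowNanos) {
        refill(nowNanos);
        if (tokens > availableTokens) {
            return false; // rate limit exceeded
        }
        availableTokens -= tokens;
        return true;
    }

    private void refill(long nowNanos) {
        // integer arithmetic only: whole tokens regenerated since the last refill
        long newTokens = (nowNanos - lastRefillNanos) / refillPeriodNanos;
        if (newTokens > 0) {
            availableTokens = Math.min(capacity, availableTokens + newTokens);
            lastRefillNanos += newTokens * refillPeriodNanos;
        }
    }
}
```

With capacity 20 and a 6-second refill period, this toy behaves like the quick-start bucket above: 20 requests pass immediately, then one more passes every 6 seconds.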

Bucket4j basic features

  • Uncompromising precision - Bucket4j does not operate on floats or doubles; all calculations are performed with integer arithmetic, which protects end users from rounding errors.
  • Effective implementation in terms of concurrency:
    • Bucket4j scales well in multi-threaded scenarios because it uses a lock-free implementation by default.
    • At the same time, the library provides alternative concurrency strategies that can be chosen when the default lock-free strategy is not desired.
  • Effective API in terms of garbage-collector footprint: the Bucket4j API uses primitive types wherever possible in order to avoid boxing and other allocation overhead.
  • Pluggable listener API that allows you to implement monitoring and logging.
  • Rich diagnostic API that allows you to investigate internal state.
  • Rich configuration management - the configuration of a bucket can be changed on the fly.
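The "lock-free by default" point refers to the classic pattern of an immutable state snapshot swapped via compare-and-set. A minimal sketch of that pattern (illustrative only, not the library's real state class; a real bucket state would also carry refill timestamps):

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch of lock-free state update via CAS over an immutable snapshot.
final class LockFreeCounterSketch {
    // Immutable snapshot; a real bucket would also hold the last refill time.
    private record State(long tokens) { }

    private final AtomicReference<State> stateRef;

    LockFreeCounterSketch(long initialTokens) {
        this.stateRef = new AtomicReference<>(new State(initialTokens));
    }

    boolean tryConsume(long tokens) {
        while (true) {
            State current = stateRef.get();
            if (current.tokens() < tokens) {
                return false; // not enough tokens; no mutation needed
            }
            State next = new State(current.tokens() - tokens);
            // If another thread won the race, loop and retry from a fresh snapshot.
            if (stateRef.compareAndSet(current, next)) {
                return true;
            }
        }
    }

    long availableTokens() {
        return stateRef.get().tokens();
    }
}
```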

Bucket4j distributed features

In addition to the basic features described above, Bucket4j provides the ability to implement rate limiting in a cluster of JVMs:

  • Bucket4j supports, out of the box, any grid solution that is compatible with the JCache API (JSR 107) specification.
  • Bucket4j provides a framework that allows you to quickly build an integration with your own persistence technology, such as an RDBMS or a key-value store.
  • For clustered usage scenarios Bucket4j supports an asynchronous API, which matters greatly in the distributed world because it avoids blocking your application threads on every network request.
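The benefit of the asynchronous style can be sketched with plain CompletableFuture; remoteTryConsume below is a hypothetical stand-in for a network round trip to a distributed back-end, not a Bucket4j method:

```java
import java.util.concurrent.CompletableFuture;

// Sketch: non-blocking consumption against a (simulated) remote back-end.
final class AsyncSketch {
    // Hypothetical stand-in for a network call to a distributed bucket.
    static CompletableFuture<Boolean> remoteTryConsume(long tokens) {
        return CompletableFuture.supplyAsync(() -> tokens <= 1);
    }

    static CompletableFuture<String> handleRequest() {
        // The calling thread is never blocked while waiting for the back-end.
        return remoteTryConsume(1)
                .thenApply(consumed -> consumed ? "200 OK" : "429 Too Many Requests");
    }
}
```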

Bucket4j is not a framework, it is a library: with Bucket4j you need to write code to achieve your goals. For generic use cases, take a look at the powerful Spring Boot Starter for Bucket4j, which allows you to set access limits on your API effortlessly. Its key advantage lies in configuration via properties or YAML files, eliminating the need for manual code authoring.

Supported JCache-compatible (or similar) back-ends

In addition to local in-memory buckets, Bucket4j supports clustered usage scenarios on top of the following back-ends:

Back-end             | Async supported | Flexible per-entry expiration | Optimized serialization | Thin-client support | Documentation link
JCache API (JSR 107) | No              | No                            | No                      | No                  | bucket4j-jcache
Hazelcast            | Yes             | Yes                           | Yes                     | No                  | bucket4j-hazelcast
Apache Ignite        | Yes             | No                            | n/a                     | Yes                 | bucket4j-ignite
Infinispan           | Yes             | Yes                           | Yes                     | No                  | bucket4j-infinispan
Oracle Coherence     | Yes             | Yes                           | Yes                     | No                  | bucket4j-coherence

Redis back-ends

Back-end       | Async supported | Redis cluster supported | Documentation link
Redis/Redisson | Yes             | Yes                     | bucket4j-redis/Redisson
Redis/Jedis    | No              | Yes                     | bucket4j-redis/Jedis
Redis/Lettuce  | Yes             | Yes                     | bucket4j-redis/Lettuce

JDBC back-ends

Back-end             | Documentation link
MySQL                | bucket4j-mysql
PostgreSQL           | bucket4j-postgresql
Oracle               | bucket4j-oracle
Microsoft SQL Server | bucket4j-mssql
MariaDB              | bucket4j-mariadb

Local caches support

Sometimes you deal with bucket-per-key scenarios where distributed synchronization is unnecessary, for example when request stickiness is provided by a load balancer, or where stickiness can be achieved by the application itself (for example, a Kafka consumer). For such scenarios Bucket4j provides support for the following local caching libraries:

Back-end | Documentation link
Caffeine | bucket4j-caffeine

Third-party integrations

Back-end         | Project page
Datomic Database | clj-bucket4j-datomic

Have a question?

Feel free to ask via the project's communication channels.

License

Copyright 2015-2024 Vladimir Bukhtoyarov. Licensed under the Apache Software License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0.

bucket4j's People

Contributors

abelsromero, akhomchenko, alek-sys, alex-plekhanov, bbeck, cesartl, chengdaqi2023, chipkillmar, dependabot[bot], fairjm, gitter-badger, intricate, ioanngolovko, luankevinferreira, maxbartkov, mondeveloper, namhptran, r331, sathiyaseelan, schnapster, simpleusr, skarpushin, sschepens, sullis, tamaro-skaljic, tetsukamen, tgregory-block, ttulka, vikinghawk, vladimir-bukhtoyarov


bucket4j's Issues

How to expire tokens in a bucket?

A simple question. I have the following code which limits 10 REST API calls in a minute

Bucket bucket = (Bucket) session.getAttribute(bucketKey);
if (bucket == null) {
	Bandwidth limit = Bandwidth.simple(10, Duration.ofMinutes(1));
	bucket = Bucket4j.builder().addLimit(limit).build();
	session.setAttribute(bucketKey, bucket);
}
// tryConsume returns false immediately if no tokens available with the bucket
if (bucket.tryConsume(1)) {
	// the limit is not exceeded
}else{
	// the limit has exceeded
}

I made 20 REST API calls in 1 minute and the code went into the else branch, which is the correct behavior. But then I waited for 5 minutes, made a new REST API call, and the tokens in the bucket never expire. The code still goes into the else branch after 5 minutes. How do I make the tokens in the bucket expire?

Distribute through maven central and change groupId and base package

The current groupId of bucket4j is com.github, but Maven Central requires that the groupId of an artifact be a domain controlled by the author, so the groupId must be changed; otherwise it is not possible to publish artifacts to Maven Central.

This issue originally raised in jhipster#5388

Proposed for release 2.0

BandwidthAdjuster

In version 1 I was able to provide a BandwidthAdjuster to the Bucket, which allowed me to dynamically set the maximum number of tokens a bucket can hold. How would I go about doing this in version 2?

Buckets.withNanoTimePrecision().withLimitedBandwidth(customBandwidthAdjuster, TimeUnit.SECONDS, 1, 0).build();

Remove duplicated methods

  • tryConsumeSingleToken() which equals to tryConsume(1)
  • consumeSingleToken which equals to consume(1)
  • consumeSingleToken(maxWaitTimeNanos) which equals to consume(1, maxWaitTimeNanos)

Feature Request: Dynamic configuration of capacity.

Hi, since this feature seems to be deprecated, we need some method or way to change the current capacity configuration.

For example: we have a custom capacity configuration parameter for per-user rate limits, and it is a normal scenario for a user to change this capacity. With the current implementation the only way to accomplish this is to instantiate a new bucket builder, losing all currently consumed tokens with no option to carry them over.

I came up with the idea of a special method like .updateCapacity(long capacity), with the logic that if the new capacity is greater, tokens are added; otherwise remaining tokens are removed, if there are any.

Of course you will know better than me which implementation would fit better into this feature.

Thanks!

Introduce NanoCloud for distributed testing.

We need to run the JCache tests on a cluster with many nodes. Currently the tests execute on a single cluster node in the same JVM; as a result we risk missing issues like #32.

NanoCloud would be the ideal choice for bootstrapping a cluster in tests, because it is faster than Docker and has no problems on Windows and macOS.

Decouple refill from capacity

https://help.shopify.com/api/getting-started/api-call-limit

The bucket size is 40 calls (which cannot be exceeded at any given time), with a "leak rate" of 2 calls per second that continually empties the bucket. If your app averages 2 calls per second, it will never trip a 429 error ("bucket overflow").

Other APIs have similar burstable capacity. Other Java "leaky bucket" implementations support burstable capacity, but the fact that bucket4j supports distributed tokens through JCache is very interesting.
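A toy model makes the requested decoupling concrete: capacity bounds the burst while the refill rate sets the sustained throughput. The constants below use Shopify's numbers; this is illustrative plain Java, not Bucket4j code:

```java
// Sketch: burst capacity decoupled from refill rate, leaky-bucket style.
final class BurstableLimiter {
    private final long capacity;        // burst size, e.g. 40 calls
    private final long refillPerSecond; // steady rate, e.g. 2 calls per second
    private long tokens;
    private long lastRefillSecond;

    BurstableLimiter(long capacity, long refillPerSecond, long nowSeconds) {
        this.capacity = capacity;
        this.refillPerSecond = refillPerSecond;
        this.tokens = capacity; // full burst available initially
        this.lastRefillSecond = nowSeconds;
    }

    boolean tryCall(long nowSeconds) {
        long refilled = (nowSeconds - lastRefillSecond) * refillPerSecond;
        tokens = Math.min(capacity, tokens + refilled); // capacity caps the burst
        lastRefillSecond = nowSeconds;
        if (tokens == 0) {
            return false; // would be a 429 "bucket overflow"
        }
        tokens--;
        return true;
    }
}
```

An app that averages at most 2 calls per second never trips the limit, yet can burst up to 40 calls at once.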

consume method silently ignores amounts bigger than the total capacity

Consider the following snippet:

Bucket bucket = Bucket4j.builder()
        .withLimitedBandwidth(1, Duration.ofSeconds(1))
        .build();

for (int i = 0; i < 5; i++) {
    bucket.consume(5);
}

You would expect the loop to take 5 * 5 = 25 seconds, but it returns instantly: no exceptions, no wait.

While not supporting this explicitly is an option, an appropriate exception should be thrown; silently returning seems like a bug to me.
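The fix suggested here amounts to a fail-fast precondition. A minimal sketch of the proposed behaviour (class and method names are hypothetical):

```java
// Sketch of the proposed precondition: reject amounts that can never succeed,
// instead of silently returning or blocking forever.
final class ConsumePrecondition {
    static void checkConsumeAmount(long requested, long smallestCapacity) {
        if (requested > smallestCapacity) {
            throw new IllegalArgumentException(
                "Requested " + requested + " tokens, but the bucket can never hold more than "
                + smallestCapacity);
        }
    }
}
```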

Support different styles of synchronization for local bucket

Currently only one style of synchronization is supported - a lock-free algorithm based on CAS and immutability.

Two additional types need to be supported:

  • SYNCHRONIZED: for the case when the client does not care much about contention, because its primary goal is avoiding memory allocation.
  • NONE: for the case when the client does not need synchronization at all, because synchronization is provided by a third-party library (like Akka or RxJava), or because the client wants to manage synchronization itself.
public enum ConcurrencyStrategy {

     LOCK_FREE,
     SYNCHRONIZED,
     NONE

}

By default the Builder API should use the LOCK_FREE strategy:

public class LocalBucketBuilder extends AbstractBucketBuilder<LocalBucketBuilder> {

      public Bucket build() {
            return build(ConcurrencyStrategy.LOCK_FREE);
      }

      public Bucket build(ConcurrencyStrategy concurrencyStrategy) {
           ...
      }

}

Prevent the usage of Infinispan through the JCache extension.

Because Infinispan does not isolate EntryProcessors from each other, multiple entry processors can perform computation on the same entry simultaneously, so Infinispan should be used with Bucket4j only through the dedicated bucket4j-infinispan module.

  • Add documentation warning notes to the JCache wiki page.
  • Throw UnsupportedOperationException from JCacheProxy for the Infinispan cache provider.
  • Create and share a smoke test which can be used to detect cache providers with similar misbehaviour.

Add support for fixed interval refill

In contrast to greedy refill, which adds tokens as soon as possible, interval refill should regenerate tokens periodically.

Proposed API:

public static Refill fixedInterval(long tokens, Duration period) {
    // ...
}

public static Refill fixedInterval(Instant timeOfFirstRefill, long tokens, Duration period) {
     // ...
}
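The difference from greedy refill can be sketched in a few lines: greedy refill adds tokens continuously pro-rata, while interval refill only adds them once a whole period has elapsed (illustrative sketch, not the library's implementation):

```java
// Sketch: interval refill regenerates tokens only at whole-period boundaries.
final class IntervalRefillSketch {
    static long tokensAdded(long elapsed, long period, long tokensPerPeriod) {
        long completePeriods = elapsed / period; // integer division: partial periods add nothing
        return completePeriods * tokensPerPeriod;
    }
}
```

With 10 tokens per 60-second period, 59 seconds of waiting adds nothing, while exactly two minutes adds 20 tokens at once.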

Possible unsafe BucketState cloning?

Hi,
I noticed the usages of the following BucketState methods in the LockFreeTokenBucket implementation.
I wonder whether cloning / copying an array in Java is atomic? If not, could the non-atomicity make LockFreeTokenBucket unsafe?

public BucketState copy() {
  return new BucketState(stateData.clone());
}
public void copyStateFrom(BucketState sourceState) {
  System.arraycopy(sourceState.stateData, 0, stateData, 0, stateData.length);
}
 

After long inactivity period, available token counter becomes negative

I noticed that, at least with some bandwidth configurations, when a long time passes between subsequent calls to Bucket.tryConsume(...), the available token counter becomes negative and Bucket.tryConsume(...) returns false, while there should be lots of tokens available.

I guess this could be caused by an arithmetic overflow in executing line 136
long divided = refillTokens * durationSinceLastRefillNanos + roundingError;
in private method io.github.bucket4j.BucketState.refill(int, Bandwidth, long, long).

To reproduce the problem, you can use the following test:

import java.time.Duration;

import org.junit.Test;

import io.github.bucket4j.Bandwidth;
import io.github.bucket4j.Bucket;
import io.github.bucket4j.Bucket4j;
import io.github.bucket4j.TimeMeter;

import static org.junit.Assert.assertTrue;

public class LongWaitTest {
    
    private final class TimeMeterWithArtificialDelay
            implements TimeMeter {
        private static final long serialVersionUID = 286427698074660314L;
        
        final long twelveHourNanos = 12 * 60 * 60 * 1_000_000_000L;
        private long delay = 0L;
        
        @Override
        public long currentTimeNanos() {
            return TimeMeter.SYSTEM_MILLISECONDS.currentTimeNanos() + delay;
        }
        
        void let12HoursPass() {
            delay = twelveHourNanos;
        }
    }
    
    @Test
    public void whenALongTimePassesThenAvailableTokensCounterBecomesNegative() throws InterruptedException {
        final Bandwidth limit1 = Bandwidth.simple(700000, Duration.ofHours(1));
        final Bandwidth limit2 = Bandwidth.simple(14500, Duration.ofMinutes(1));
        final Bandwidth limit3 = Bandwidth.simple(300, Duration.ofSeconds(1));
        final TimeMeterWithArtificialDelay customTimeMeter = new TimeMeterWithArtificialDelay();
        final Bucket bucket = Bucket4j.builder()
                .addLimit(limit1)
                .addLimit(limit2)
                .addLimit(limit3)
                .withCustomTimePrecision(customTimeMeter)
                .build();
        
        assertTrue(bucket.getAvailableTokens() > 0);
        assertTrue(bucket.tryConsume(1));
        
        customTimeMeter.let12HoursPass();
        
        assertTrue("Free tokens expected after long wait", bucket.getAvailableTokens() > 0);
        assertTrue("Free tokens expected after long wait", bucket.tryConsume(1));
    }
    
}
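The suspected overflow is easy to confirm with the numbers from the test above: 700 000 tokens per hour multiplied by a 12-hour gap in nanoseconds exceeds Long.MAX_VALUE, so the product wraps negative:

```java
// The two factors from the test above, multiplied as in the quoted line 136.
final class OverflowDemo {
    static long overflowingProduct() {
        long refillTokens = 700_000L;                                        // limit1: 700_000 tokens per hour
        long durationSinceLastRefillNanos = 12L * 60 * 60 * 1_000_000_000L;  // 12h gap = 4.32e13 ns
        // 7e5 * 4.32e13 ≈ 3.0e19, but Long.MAX_VALUE ≈ 9.22e18, so this wraps negative
        return refillTokens * durationSinceLastRefillNanos;
    }
}
```

The wrapped product is a large negative number, which matches the negative token counter observed in the test.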

Support CompletableFuture from Java 8

PREFACE:
The Bucket consumption API is separated into two types of methods:

  • Non-blocking methods, whose names start with "tryConsume".
  • Blocking methods, whose names start with "consume".

TODO:

  • We need to support a new kind of blocking operation that returns CompletableFuture; this will make it possible to avoid blocking the current thread and use callback-style programming instead.
  • Rename the "consumeAsMuchAsPossible" method to "tryConsumeAsMuchAsPossible", because it is actually non-blocking and several users have already been confused by this.

Add "consumeUninterruptibly" methods

All current implementations of blocking consumption throw InterruptedException if the thread is interrupted while waiting. We need to add similar uninterruptible methods for blocking consumption:

void consumeUninterruptibly(long numTokens);

boolean consumeUninterruptibly(long numTokens, long maxWaitTimeNanos);
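The standard way to implement such methods is the Guava-style pattern: loop until the blocking operation succeeds, swallow InterruptedException in the loop, and restore the interrupt flag before returning (a sketch of the pattern, not the library's code):

```java
// Sketch of the usual "uninterruptibly" pattern around a blocking call.
final class UninterruptiblySketch {
    interface BlockingOp { void run() throws InterruptedException; }

    static void runUninterruptibly(BlockingOp op) {
        boolean interrupted = false;
        try {
            while (true) {
                try {
                    op.run();
                    return;
                } catch (InterruptedException e) {
                    interrupted = true; // remember the interruption, retry the wait
                }
            }
        } finally {
            if (interrupted) {
                Thread.currentThread().interrupt(); // restore the flag for callers
            }
        }
    }
}
```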

Proposed for release 2.0

Enhance documentation

The documentation needs to be split into several pages:

  • A brief overview of the token-bucket algorithm and the enhancements introduced by Bucket4j.
  • Key building blocks and high-level architecture of Bucket4j.
  • Simple usage examples.
  • Integration with JCache.
  • Advanced usage examples.

To avoid duplication between javadocs and the GitHub documentation, the amount of information in the javadocs should be reduced; where reasonable, javadocs should simply link to the GitHub readme pages instead of copy-pasting.

Is dynamic refill strategy possible?

I was wondering if the current library supports a dynamic refill strategy, i.e. where the rate at which the bucket refills is not fixed. I did not see any interface that exposes the refill strategy to the end user. In my usage, the refill rate varies according to other factors.

Dependency on Oracle jar makes it very hard to use bucket4j

Currently bucket4j ships with a system-scope dependency on Oracle Coherence.
This jar is not publicly available. The build emits these warnings:

[WARNING] Some problems were encountered while building the effective model for com.github:bucket4j:jar:1.1.0-SNAPSHOT
[WARNING] 'dependencies.dependency.systemPath' for com.oracle:coherence:jar should not point at files within the project directory, ${project.basedir}/lib/coherence-3.7.1.5.jar will be unresolvable by dependent projects @ line 268, column 25
[WARNING] It is highly recommended to fix these problems because they threaten the stability of your build.
[WARNING] For this reason, future Maven versions might no longer support building such malformed projects.

Indeed, attempting to use the library causes problems:

[ERROR] Failed to execute goal com.ning.maven.plugins:maven-dependency-versions-check-plugin:2.0.2:check (default) on project chat-service: While running mojo: Some problems were encountered while processing the POMs:
[ERROR] [ERROR] 'dependencies.dependency.systemPath' for com.oracle:coherence:jar must specify an absolute path but is ${project.basedir}/lib/coherence-3.7.1.5.jar @ line 268, column 25

I am able to use bucket4j by patching out Coherence support: master...opentable:oracle-jars
but obviously maintaining my own release is not appealing.

A better solution might be to take your Coherence jar and use the install:file Maven target to include it in your repository. Then you could use an optional or provided dependency so that downstream projects can use bucket4j without being polluted by Oracle crap :)

Add method to locate an existing bucket to ProxyManager

Proposed API:

public interface ProxyManager<K extends Serializable> {
    
  ...

  /**
   * Locates the proxy for a bucket which is actually stored outside the current JVM.
   *
   * @param key the unique identifier used to point to the bucket in external storage.
   *
   * @return an Optional surrounding the bucket proxy, or an empty Optional if no bucket with the specified key is stored.
   */
  Optional<Bucket> getProxy(K key);

}

Library modularization

  • Design modular architecture.
  • Move JCache support to dedicated module "bucket4j-jcache"

Enhancement request

Would it be possible to enhance the Bucket interface (or at least the GridBucket impl) to add a method that tries (without waiting) to consume up to N tokens at once and returns:

  1. the number of tokens actually consumed, AND
  2. the expiration date associated with these tokens.

Thanks. I can explain my use case if needed.
Greg

API usage with JCache without caching the proxies on the client side

I'm currently working on jhipster/generator-jhipster#5388

So basically I'm doing a request filter like here in the documentation, but using JCache with Hazelcast like here in the documentation.

Mixing those 2 examples, my current implementation is the following (please note this is a Zuul filter, not a Servlet filter, but this doesn't make any difference):

String bucketId = getId(RequestContext.getCurrentContext().getRequest());
        
Bucket bucket = Bucket4j.jCacheBuilder(RecoveryStrategy.RECONSTRUCT)
    .addLimit(Bandwidth.simple(jHipsterProperties.getGateway().getRateLimiting().getLimit(),
        Duration.ofSeconds(3_600)))
    .build(cache, bucketId);

if (bucket.tryConsumeSingleToken()) {
    // the limit is not exceeded
    log.debug("API rate limit OK for {}", bucketId);
} else {
    // limit is exceeded
    log.info("API rate limit exceeded for {}", bucketId);
    apiLimitExceeded();
}
return null;

This code works, and follows the documentation, but it worries me that we "create" a new Bucket for each HTTP request. This looks rather heavy, and I'd better re-use the Bucket that was created.

However, we just store a BucketState in the cache, and I can't find a way to get a Bucket out of the cache.

Am I doing something wrong? I currently think the API should be modified, or maybe better documented, because I can't find a good solution here.

Add support for relational databases

All JDBC-compatible databases with the SQL SELECT FOR UPDATE feature can easily be supported by the Bucket4j algorithm. The steps of the algorithm that solve the concurrency issues are the following:

  • Begin transaction.
  • SELECT FOR UPDATE the state of bucket.
  • UPDATE the state of bucket.
  • Commit transaction.

Obviously, the latency and throughput of a JCache-based solution will be dramatically better, because the JCache algorithm has a single step (the result is achieved in one network hop). But relational databases are a well-known solution while in-memory grids are still a fresh concept, so JDBC will be a good starting point for people who need a distributed bucket but have no time to learn, or lack the resources to deploy, a JCache-based solution.
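Sketched with hypothetical table and column names, the four steps above look roughly like this:

```sql
BEGIN;
-- lock the row so concurrent consumers serialize on this bucket
SELECT state FROM buckets WHERE id = ? FOR UPDATE;
-- (application code: deserialize state, refill tokens, try to consume)
UPDATE buckets SET state = ? WHERE id = ?;
COMMIT;
```

The row lock taken by SELECT FOR UPDATE is what replaces the single-hop EntryProcessor of the JCache path.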

Auto-handle split-brain for JCache

For the distributed case (JCache), it is possible to end up in a situation where, after initialization, the bucket cannot be fetched from the cache, for example because:

  • Split-brain happened and the client is connected to the part of the cluster that does not hold the bucket state.
  • The bucket state was stored on a single grid node without replication and this node crashed.
  • The bucket was removed by mistake (wrong configuration, human error, vendor bug).

This kind of problem needs to be auto-detected so that the recovery strategy specified by the client at bucket construction time can be applied. The client must explicitly choose between two possible reactions:

  • RECREATE_BUCKET - initialize the bucket yet again, if availability has higher priority than consistency.
  • THROW_BUCKET_NOT_EXISTS_EXCEPTION - do nothing and throw an exception, if consistency has higher priority than availability.

Any way to ask the bucket how long until next token?

I'm considering using bucket4j to rate limit calling a rate limited API. In one case I need to be able to report, if the rate limit is reached, how long until a new token will be available.

Is there a way to do this?

Cheers

Cameron

Getting bandwidth from bucket.

Would it be possible to extend the Bucket interface to include a method returning information about the current status of a given bucket?

Introduce asynchronous API

Reason:
It is nice to have an additional API implemented in this style, because when working with external back-ends we introduce additional latency, and blocking the current thread can sometimes be undesirable for the end user.

Potential problems:

  • Several back-ends, like JDBC-compatible databases, do not support an async API, so the asynchronous API at the Bucket4j level should be an optional feature that depends on the concrete back-end.
  • Because JCache does not provide an async API, support for particular in-memory grids (which was removed in the scope of #10) needs to be reintroduced. Users who do not need the async API can stay on the JCache abstraction; users who do need it should migrate to a concrete implementation. The following grids need to be supported directly: Ignite, Hazelcast, Infinispan.

How to update a specific bucket by key

I want to use Bucket4j with the Hazelcast JCache back-end.
When the limit policy changes, how can I change a bucket's limits at runtime without restarting my service?

Fix serialization problem.

JCacheCommand does not implement the java.io.Serializable interface, so bucket4j is broken for all JCache implementations that rely on Java's built-in serialization mechanism (at least Hazelcast).

Affected versions 1.3.0, 2.0.0.

Implement "addTokens" method

This method should increase the count of tokens in the bucket:

void addTokens(long tokens);

A "compensation transaction" is one possible use case: when a piece of code has consumed tokens from the bucket, tried to do something, and failed, "addTokens" is helpful for returning the tokens to the bucket.

Add monitoring feature

The following points should be exposed:

  • Token consumption.
  • Token rejection.
  • Amount of time spent waiting for bucket refill.
  • Count of InterruptedExceptions thrown during waiting.

Design considerations:

  • Monitoring must be pluggable; most likely statistics should be implemented as a listener with methods onConsumed, onRejected, onWait, onInterrupt. It should be the responsibility of the client to decide how this data is accumulated, aggregated and exposed.
  • A reference implementation on top of Dropwizard-Metrics should be provided.

The requirements from #7 also need to be supported.
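The pluggable-listener design described above can be sketched with the method names proposed in this issue; the counting aggregation shown is just one choice a client could make (interface and class names are hypothetical):

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of the proposed listener contract and one trivial aggregation strategy.
interface BucketListenerSketch {
    void onConsumed(long tokens);
    void onRejected(long tokens);
}

final class CountingListener implements BucketListenerSketch {
    final AtomicLong consumed = new AtomicLong();
    final AtomicLong rejected = new AtomicLong();

    @Override public void onConsumed(long tokens) { consumed.addAndGet(tokens); }
    @Override public void onRejected(long tokens) { rejected.addAndGet(tokens); }
}
```

A Dropwizard-Metrics reference implementation would simply forward these callbacks to meters instead of plain counters.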

Do compatibility research related to JCache implementations and publish results and verification tools

There are many non-certified implementations of the JCache specification (like Ehcache) on the market. They want to increase their popularity by declaring support for the JCache API, but often only the API is supported while the semantics of JCache are totally ignored. Using Bucket4j with this kind of library should be totally prevented, because Bucket4j is compatible only with implementations that obey the JCache specification rules (especially those related to EntryProcessor execution) without exception. Oracle Coherence, Apache Ignite and Hazelcast are good examples of well-formed JCache implementations.

What needs to be done:

  • Investigate the compatibility of Bucket4j with several non-certified libraries from the second league, and try to figure out incompatibility trends and common issues. Prevent usage of Bucket4j with incompatible JCache providers found in the scope of this research.
  • Publish a tool which anyone can use to determine the compatibility of Bucket4j with any JCache implementation.

Get actual count of tokens in a Bucket

This is more a question than an issue; hopefully someone can help me!
Is it possible to determine the number of tokens in a bucket at runtime?
I've read the docs but I don't see any way to do so.

After bucket.tryConsume(1) I need the number of tokens left in the bucket!

Support VoltDB

With this in-memory database, we can achieve both persistence similar to relational databases (VoltDB uses snapshots and write-ahead logs) and low latency mostly similar to JCache implementations (VoltDB supports stored procedures written in Java).
