
Probabilistic Datatypes Module for Redis

Home Page: https://redis.io/docs/stack/bloom/

License: Other

Makefile 3.75% C 50.00% Python 38.03% Dockerfile 0.30% Shell 7.91%
bloom-filter redis redis-module redisbloom

redisbloom's Introduction


RedisBloom: Probabilistic Data Structures for Redis

Forum Discord


Overview

RedisBloom adds a set of probabilistic data structures to Redis, including Bloom filter, Cuckoo filter, Count-min sketch, Top-K, and t-digest. Using these, you can query streaming data without storing all the elements of the stream. Each probabilistic data structure answers one or more of the following questions:

  • Bloom filter and Cuckoo filter:
    • Did value v already appear in the data stream?
  • Count-min sketch:
    • How many times did value v appear in the data stream?
  • Top-k:
    • What are the k most frequent values in the data stream?
  • t-digest:
    • Which fraction of the values in the data stream are smaller than a given value?
    • How many values in the data stream are smaller than a given value?
    • Which value is smaller than p percent of the values in the data stream? (What is the p-percentile value?)
    • What is the mean value between the p1-percentile value and the p2-percentile value?
    • What is the value of the nᵗʰ smallest/largest value in the data stream? (What is the value with [reverse] rank n?)

Answering each of these questions accurately can require a huge amount of memory, but you can lower the memory requirements drastically at the cost of reduced accuracy. Each of these data structures allows you to set a controllable trade-off between accuracy and memory consumption. In addition to having a smaller memory footprint, probabilistic data structures are generally much faster than accurate algorithms.
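As a concrete illustration of this trade-off, here is a toy, pure-Python Bloom filter built from the standard sizing formulas m = -n·ln(p)/(ln 2)² bits and k = (m/n)·ln 2 hash functions. It is a sketch for intuition only, not RedisBloom's implementation; the ToyBloom class is invented for this example.

```python
import hashlib
import math

class ToyBloom:
    """Minimal Bloom filter sketch (illustration only, not RedisBloom's code)."""

    def __init__(self, capacity, error_rate):
        # Standard sizing: m = -n*ln(p)/(ln 2)^2 bits, k = (m/n)*ln 2 hashes
        self.m = max(8, int(-capacity * math.log(error_rate) / math.log(2) ** 2))
        self.k = max(1, round(self.m / capacity * math.log(2)))
        self.bits = bytearray((self.m + 7) // 8)

    def _positions(self, item):
        # Derive k bit positions by salting a hash of the item
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def exists(self, item):
        # True means "probably present"; False means "definitely absent"
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = ToyBloom(capacity=1000, error_rate=0.01)
bf.add("foo")
print(bf.exists("foo"))  # True: Bloom filters have no false negatives
print(len(bf.bits))      # ~1.2 KB of memory for 1000 items at 1% error
```

Note how the memory budget is fixed by capacity and error rate alone: halving the error rate only adds a few bits per item, which is exactly the controllable trade-off described above.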

RedisBloom is part of Redis Stack.

How do I Redis?

Learn for free at Redis University

Build faster with the Redis Launchpad

Try Redis Cloud

Dive into developer tutorials

Join the Redis community

Work at Redis

Setup

You can either run RedisBloom in a Docker container or set it up on your own machine.

Docker

To quickly try out RedisBloom, launch an instance using Docker:

docker run -p 6379:6379 -it --rm redis/redis-stack-server:latest

Build it yourself

You can also build RedisBloom on your own machine. Major Linux distributions as well as macOS are supported.

The first step is to install Redis. The following, for example, builds Redis on a clean Ubuntu Docker image (docker pull ubuntu):

mkdir ~/Redis
cd ~/Redis
apt-get update -y && apt-get upgrade -y
apt-get install -y wget make pkg-config build-essential
wget https://download.redis.io/redis-stable.tar.gz
tar -xzvf redis-stable.tar.gz
cd redis-stable
make distclean
make
make install

Next, clone the RedisBloom repository from git and build it:

apt-get install -y git
cd ~/Redis
git clone --recursive https://github.com/RedisBloom/RedisBloom.git
cd RedisBloom
./sbin/setup
bash -l
make

Then run exit to leave the bash shell.

Note: to get a specific version of RedisBloom, e.g. 2.4.5, add -b v2.4.5 to the git clone command above.

Next, run make run -n and copy the full path of the RedisBloom executable (e.g., /root/Redis/RedisBloom/bin/linux-x64-release/redisbloom.so).

Next, add the RedisBloom module to redis.conf so that Redis loads it on startup:

apt-get install -y vim
cd ~/Redis/redis-stable
vim redis.conf

Add: loadmodule /root/Redis/RedisBloom/bin/linux-x64-release/redisbloom.so under the MODULES section (use the full path copied above).

Save and exit vim (ESC :wq ENTER)

For more information about modules, see the official Redis documentation.

Run

Run redis-server in the background and then redis-cli:

cd ~/Redis/redis-stable
redis-server redis.conf &
redis-cli

Give it a try

After you set up RedisBloom, you can interact with it using redis-cli.

Create a new bloom filter by adding a new item:

127.0.0.1:6379> BF.ADD newFilter foo
(integer) 1

Find out whether an item exists in the filter:

127.0.0.1:6379> BF.EXISTS newFilter foo
(integer) 1

In this case, 1 means that foo is most likely in the set represented by newFilter. But recall that false positives are possible with Bloom filters.

127.0.0.1:6379> BF.EXISTS newFilter bar
(integer) 0

A value of 0 means that bar is definitely not in the set: Bloom filters never return false negatives.

Client libraries

Project | Language | License | Author | Package
jedis | Java | MIT | Redis | Maven
redis-py | Python | MIT | Redis | pypi
node-redis | Node.JS | MIT | Redis | npm
nredisstack | .NET | MIT | Redis | nuget
redisbloom-go | Go | BSD | Redis | GitHub
rueidis | Go | Apache License 2.0 | Rueian | GitHub
rebloom | JavaScript | MIT | Albert Team | GitHub
phpredis-bloom | PHP | MIT | Rafa Campoy | GitHub
phpRebloom | PHP | MIT | Alessandro Balasco | GitHub
vertx-redis-client | Java | Apache License 2.0 | Eclipse Vert.x | GitHub
rustis | Rust | MIT | Dahomey Technologies | GitHub

Documentation

Documentation and full command reference at redisbloom.io.

Mailing List / Forum

Got questions? Feel free to ask at the RedisBloom mailing list.

License

RedisBloom is licensed under the Redis Source Available License 2.0 (RSALv2) or the Server Side Public License v1 (SSPLv1).

redisbloom's People

Contributors

alonre24, amiramm, ashtul, casidiablo, chayim, dvirsky, filipecosta90, fusl, gavincastleton, gkorland, iddm, itamarhaber, k-jo, kukey, leibale, liorkogan, liuchong, mnunberg, natoscott, nermiller, ofirmos, petershinners, rafie, sav-norem, sazzad16, shacharpash, swilly22, tezc, tomerhekmati, trevor211


redisbloom's Issues

Any advice about how to transfer a large bloom filter between machines?

This isn't truly an issue with ReBloom itself, but I thought that you might have some advice about how to approach it. I feel like this will probably be a pretty common thing that people would like to do with bloom filters, so it might be worth thinking about some way to help support it.

As mentioned in my other issue, I have a pretty large bloom filter (~320M items). My hope was to be able to build this bloom filter locally, then dump it to a file, transfer to the production server, and restore it there. This would avoid having to put all the source data on the production server and do the work of loading it into the bloom filter there. It also means I could restore it immediately if I ever need to reset Redis, instead of needing to re-add all the data from scratch each time.

However, the issue I've run into is that Redis only seems to be able to dump/restore a maximum of 512MB, due to the limits of the string data type. Attempting to use the RESTORE command to load a dumped bloom filter larger than this just fails (silent failure with redis-cli, "Broken pipe" errors through client).

Do you have any advice for a method I could use to transfer a large bloom filter and restore it?
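One approach that sidesteps the 512 MB string limit of DUMP/RESTORE is RedisBloom's chunked serialization commands, BF.SCANDUMP and BF.LOADCHUNK, which stream the filter in pieces. A sketch using redis-py, assuming a redis-py version that exposes these commands via client.bf() (the transfer_bloom helper is invented for illustration):

```python
def transfer_bloom(src, dst, key):
    """Copy a Bloom filter from one Redis instance to another in chunks,
    using BF.SCANDUMP / BF.LOADCHUNK instead of DUMP / RESTORE.

    src, dst: redis.Redis clients connected to the source and target.
    Sketch only: assumes both servers run RedisBloom and the key does
    not already exist on the target."""
    it = 0
    while True:
        # SCANDUMP returns (next_iterator, chunk); iterator 0 means done
        it, data = src.bf().scandump(key, it)
        if it == 0:
            break
        dst.bf().loadchunk(key, it, data)
```

The chunks could equally be written to a file and replayed later, which matches the build-locally, restore-in-production workflow described above.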

Create Docker Image

Please provide a docker image and a Make target to build and push it, follow (or copy) what we're doing in RediSearch. Also, please send me your docker hub user and I'll provide push access to the repo.

Thanks!

License change

Why was the license changed from Apache 2.0?

This is simply an implementation of common data structures on top of a BSD 3-clause daemon, and its main utility is as a template for people to build their own modules. That utility is eliminated with the bizarro proprietary license.

From the star count it appears to have almost no users, so it is not like there was a large captive corporate audience to lock in here.

Adding 10 million entries

Hi experts,

I am adding 10M unique device IDs to the filter, which results in a size of 171MB. Is that the size I should expect? How can I further optimize the space? I want to store 100M keys across 500 filters; what would be the correct architecture for that?

Here's the debug info

127.0.0.1:6379> BF.DEBUG newFilter
 1) "size:9046720"
 2) "bytes:128 bits:1024 hashes:7 capacity:106 size:106 ratio:0.01"
 3) "bytes:512 bits:4096 hashes:9 capacity:328 size:328 ratio:0.0025"
 4) "bytes:2048 bits:16384 hashes:12 capacity:975 size:975 ratio:0.0003125"
 5) "bytes:8192 bits:65536 hashes:16 capacity:2903 size:2903 ratio:1.95313e-05"
 6) "bytes:32768 bits:262144 hashes:21 capacity:8801 size:8801 ratio:6.10352e-07"
 7) "bytes:131072 bits:1048576 hashes:27 capacity:27278 size:27278 ratio:9.53674e-09"
 8) "bytes:524288 bits:4194304 hashes:34 capacity:86413 size:86413 ratio:7.45058e-11"
 9) "bytes:2097152 bits:16777216 hashes:42 capacity:279250 size:279250 ratio:2.91038e-13"
10) "bytes:8388608 bits:67108864 hashes:51 capacity:918498 size:918498 ratio:5.68434e-16"
11) "bytes:33554432 bits:268435456 hashes:61 capacity:3068163 size:3068163 ratio:5.55112e-19"
12) "bytes:134217728 bits:1073741824 hashes:72 capacity:10388345 size:4654005 ratio:2.71051e-22"
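For comparison, the theoretical size of a single, correctly reserved Bloom filter follows the standard formula m = -n·ln(p)/(ln 2)² bits. A quick sketch (the bloom_bytes helper is invented for the example):

```python
import math

def bloom_bytes(n, p):
    """Optimal Bloom filter size in bytes for n items at error rate p:
    m = -n*ln(p)/(ln 2)^2 bits."""
    return -n * math.log(p) / math.log(2) ** 2 / 8

print(round(bloom_bytes(10_000_000, 0.01) / 1e6, 1))  # ~12 MB
```

So 10M items at a 0.01 error rate needs roughly 12 MB in a single filter; the ~171 MB observed here is the cost of scaling out through the chain of sub-filters with progressively tighter ratios shown in the debug output. Reserving the expected capacity up front (e.g. BF.RESERVE newFilter 0.01 10000000) keeps usage near the optimum.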

No key / no error?

Currently if you do something like this:

> DEL somekey
(integer) 1
> BF.MEXISTS somekey one two three
(error) ERR not found

So, logically, if the Bloom filter has not yet been created, these items cannot be in it. In the app code I usually just search for 'not found' in the error message, but it's an extra conditional. Would it be possible to have an alternate BF.EXISTS/BF.MEXISTS command that returns 0 even if the key is nonexistent? It would make writing code against it more consistent. Example:

> DEL somekey
(integer) 1
> BF.XMEXISTS somekey one two three
1) (integer) 0
2) (integer) 0
3) (integer) 0
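Until such a command exists, a client-side wrapper can emulate the proposed behavior. A sketch using redis-py (mexists_or_zeros is an invented helper; the EXISTS pre-check costs an extra round trip and the two commands are not atomic):

```python
def mexists_or_zeros(client, key, *items):
    """BF.MEXISTS that returns all zeros when the key does not exist,
    instead of raising an error.

    Sketch only: client is assumed to be a redis.Redis instance with
    RedisBloom support; the EXISTS check and the MEXISTS call are two
    separate, non-atomic operations."""
    if not client.exists(key):
        return [0] * len(items)
    return client.bf().mexists(key, *items)
```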

Counting?

Just wondering if a counting Bloom filter (n-bit counter per bucket) or even a Cuckoo filter is planned?

An option to specify the number of bits per bucket (default is 1), and thus enable counts if/when needed (knowing, of course, that it will increase storage requirements), might not be too far out of scope.

If it's not planned, a Lua workaround with n additional BFs is an option as well...

BTW: Great work!

EDIT: Oops... I missed the existing Cuckoo filter module. And for some use cases the count-min sketch module might fit as well.

slave replication

When a Redis node in a cluster fails over to its slave, the Bloom keys are lost. Are they supposed to be replicated to the slave?

Large bloom filters seem to return 100% false positives

I'm trying to use ReBloom to create a bloom filter for checking user passwords against the 300M+ breached password hashes that Troy Hunt released recently.

However, it seems like using it for a bloom filter with this many items does not work, and just results in a filter that says it has a size of 1, returns 1 for all BF.EXISTS checks, and fails to add anything after the first item. I try to create a bloom filter with an expected size of 350,000,000 items (a bit of room to grow with further updates) and 0.01 error rate:

127.0.0.1:6379> BF.RESERVE breached 0.01 350000000
OK

127.0.0.1:6379> BF.ADD breached asdf
(integer) 1

127.0.0.1:6379> BF.ADD breached somethingelse
(integer) 0

127.0.0.1:6379> BF.MADD breached a s d f
1) (integer) 0
2) (integer) 0
3) (integer) 0
4) (integer) 0

127.0.0.1:6379> BF.debug breached
1) "size:1"
2) "bytes:536870912 bits:4294967296 hashes:7 capacity:448089842 size:1 ratio:0.01"

127.0.0.1:6379> BF.EXISTS breached anything
(integer) 1

127.0.0.1:6379> BF.EXISTS breached ididntaddthis
(integer) 1

Valgrind error upon saving

Running "valgrind --track-origins=yes redis-server --loadmodule ./rebloom.so" and then "redis-cli save" results in the following error:

==32716== Conditional jump or move depends on uninitialised value(s)
==32716==    at 0x14AA3C: lzf_compress (lzf_c.c:153)
==32716==    by 0x162585: rdbSaveLzfStringObject.part.2 (rdb.c:349)
==32716==    by 0x162954: rdbSaveLzfStringObject (rio.h:95)
==32716==    by 0x162954: rdbSaveRawString (rdb.c:421)
==32716==    by 0x1B47F3: RM_SaveStringBuffer (module.c:3221)
==32716==    by 0x163B52: rdbSaveObject (rdb.c:978)
==32716==    by 0x1645B3: rdbSaveKeyValuePair (rdb.c:1041)
==32716==    by 0x164C71: rdbSaveRio (rdb.c:1145)
==32716==    by 0x1651EA: rdbSave (rdb.c:1244)
==32716==    by 0x16809E: saveCommand (rdb.c:2396)
==32716==    by 0x142DC4: call (server.c:2439)
==32716==    by 0x1434CE: processCommand (server.c:2733)
==32716==    by 0x153FB0: processInputBuffer (networking.c:1470)
==32716==  Uninitialised value was created by a stack allocation
==32716==    at 0x14A96A: lzf_compress (lzf_c.c:105)

Make CUCKOO_BKTSIZE configurable

A larger bucket size increases the fill rate of filters but slows them down and increases the error rate.
Currently, the default is 2.

bf.exists cannot differentiate between no bloomfilter(key) and no item in BF

The BF.EXISTS command returns 0 both when the Bloom filter key does not exist and when the value is not in the filter. So, to be sure an element is truly absent while distinguishing the two cases, you first need to verify that the key exists and then check that the value is not in the filter, which requires two Redis operations.

Is there other ways to deal with this situation?

Leave a comment if I did not use it properly.

latest build not working

When running under Docker:

1:M 17 May 07:06:03.064 # Module /var/lib/redis/modules/rebloom.so failed to load: /var/lib/redis/modules/rebloom.so: cannot open shared object file: No such file or directory
1:M 17 May 07:06:03.064 # Can't load module from /var/lib/redis/modules/rebloom.so: server aborting

Documentation - Suggested item hash functions.

Hello! I have been using rebloom successfully for some months now.

One of the questions I have had, from back when I first heard about this wonderful project, is what the best or suggested way is to hash an item before inserting it into the filter.

I believe it would be really helpful to have some hints in the docs or even better some pseudo code examples.

Also, what do you think about using SHA-256?

Thank you for this project!

EDIT:
PS: It would be nice to have an equation for calculating the total RAM needed for a given number of items and false-positive probability. For example, I have calculated that for a capacity of 1,000,000,000 items and an error probability of 0.000001, the size of the produced Bloom filter is somewhere around 4GB.
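For reference, the standard sizing equation for an optimal Bloom filter is m = -n·ln(p)/(ln 2)² bits, with k = (m/n)·ln 2 hash functions, and it reproduces the estimate above (the bloom_size_bytes helper is invented for the example):

```python
import math

def bloom_size_bytes(n, p):
    """Optimal Bloom filter size in bytes: m = -n*ln(p)/(ln 2)^2 bits."""
    bits = -n * math.log(p) / math.log(2) ** 2
    return bits / 8

# 1e9 items at error probability 1e-6
gib = bloom_size_bytes(1_000_000_000, 1e-6) / 2**30
print(f"{gib:.2f} GiB")  # ~3.35 GiB, consistent with the ~4 GB estimate
```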

The bitmap scales by 4x on each expansion; can I specify this parameter when loading RedisBloom or when using BF.RESERVE?

  1. "size:975"
  2. "bytes:4 bits:32 hashes:10 hashwidth:64 capacity:2 size:2 ratio:0.001"
  3. "bytes:16 bits:128 hashes:12 hashwidth:64 capacity:7 size:7 ratio:0.00025"
  4. "bytes:64 bits:512 hashes:15 hashwidth:64 capacity:23 size:23 ratio:3.125e-05"
  5. "bytes:256 bits:2048 hashes:19 hashwidth:64 capacity:74 size:74 ratio:1.95313e-06"
  6. "bytes:1024 bits:8192 hashes:24 hashwidth:64 capacity:236 size:236 ratio:6.10352e-08"
  7) "bytes:4096 bits:32768 hashes:30 hashwidth:64 capacity:757 size:633 ratio:9.53674e-10"

When the filter scales, 4 times the previous amount of memory is acquired. With a small bitmap this is fine, but if I am using RedisBloom with 10 billion keys, I can reserve a capacity of 10 billion when creating the filter; what happens when the (10 billion + 1)th key is inserted? I don't think anyone can bear 4 times more memory.

Can I specify this scaling parameter? Or are there any other suggestions?
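For what it's worth, later RedisBloom releases added an EXPANSION option to BF.RESERVE (and a NONSCALING flag to forbid scaling entirely), which addresses exactly this. A sketch of the syntax (key name and values are illustrative):

```
127.0.0.1:6379> BF.RESERVE myFilter 0.001 10000000000 EXPANSION 2
OK
```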

Is it possible to set an expiration time on an item?

[CF.ADD|BF.ADD] {key} {item} [expiration EX seconds]

I don't know if this request makes sense, but are there any plans to add this feature in the future? Or maybe this is outside the scope of Bloom algorithms.

It would be the best solution for managing web server pet tokens.

Nice job, guys.

Cheers!

How to scale?

Is it possible to utilize Redis cluster mode to scale rebloom? Otherwise a single Redis node will handle all requests and become a bottleneck.

Benchmarks against github.com/armon/bloomd

We currently use bloomd for large (billions of entries in aggregate) bloom filters, sharded down into one per core at an optimal size for performance. We're also using Redis for various tasks, and it would be interesting to unify these tasks if possible to simplify management.

Can you guys publish some basic benchmarks against bloomd for Bloom filters of 100M-400M entries?

high false positive

Hello, I use rebloom 1.1.0 and I get a very high false-positive ratio, around 25%.
I add values using BF.MADD. I only observe this behavior with big keys (~478M items); smaller Bloom filters seem to be within the expected false-positive range.

127.0.0.1:6379> BF.DEBUG ANM:SIG:ATTR:headers_order_hash
1) "size:478361140"
2) "bytes:2147483648 bits:17179869184 hashes:20 capacity:597453122 size:478361140 ratio:1e-06"
127.0.0.1:6379> info
# Server
redis_version:4.0.2
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:9c9662fcd2cfbeff
redis_mode:cluster
os:Linux 4.14.11-coreos x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:6.3.0
process_id:1
run_id:9d4deb4bac5a3c071f61e1a31abbf09c392771e7
tcp_port:6379
uptime_in_seconds:18562
uptime_in_days:0
hz:10
lru_clock:7902339
executable:/data/redis-server
config_file:/etc/redis/redis.conf

# Clients
connected_clients:1
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:2152622328
used_memory_human:2.00G
used_memory_rss:548593664
used_memory_rss_human:523.18M
used_memory_peak:5821452136
used_memory_peak_human:5.42G
used_memory_peak_perc:36.98%
used_memory_overhead:4619816
used_memory_startup:3504680
used_memory_dataset:2148002512
used_memory_dataset_perc:99.95%
total_system_memory:7840636928
total_system_memory_human:7.30G
used_memory_lua:49152
used_memory_lua_human:48.00K
maxmemory:7055867904
maxmemory_human:6.57G
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.25
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:8631427
rdb_bgsave_in_progress:0
rdb_last_save_time:1517833235
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:98304
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:1430
total_commands_processed:34165466
instantaneous_ops_per_sec:1
total_net_input_bytes:124813682317
total_net_output_bytes:7978379220
instantaneous_input_kbps:0.05
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:1
sync_partial_ok:0
sync_partial_err:1
expired_keys:0
evicted_keys:0
keyspace_hits:8886668
keyspace_misses:38134
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:334
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:1
slave0:ip=10.240.4.115,port=6379,state=online,offset=1519847305,lag=0
master_replid:4f70f6b66a614c29d9796332ea301dbe4d7df1b3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1519847305
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1518798730
repl_backlog_histlen:1048576

# CPU
used_cpu_sys:326.01
used_cpu_user:4052.23
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Cluster
cluster_enabled:1

# Keyspace
db0:keys=1,expires=0,avg_ttl=0

Overflow problem

I got strange behaviour when the number of items in a filter reached the reserved capacity: a new filter is chained, but I can't add any items to it. The BF.ADD command always returns 0 for any key, and the debug info shows that this filter has size=1.

BF.DEBUG my_bloom
1) "size:112022461"
2) "bytes:268435456 bits:2147483648 hashes:14 capacity:112022460 size:112022460 ratio:0.0001"
3) "bytes:1073741824 bits:8589934592 hashes:16 capacity:389468927 size:1 ratio:2.5e-05"

It can also be reproduced with commit 54effb245a5f78c2111bfe91b892dffc7718f980

Allow TopK increase by value

At the moment, TOPK.ADD increments by 1.
For some use cases, the ability to increase by an arbitrary integer would be useful.
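For reference, later RedisBloom releases added a TOPK.INCRBY command that does exactly this. A sketch of the syntax (key name and values are illustrative):

```
127.0.0.1:6379> TOPK.RESERVE mytopk 3
OK
127.0.0.1:6379> TOPK.INCRBY mytopk foo 10
1) (nil)
```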

No counting bloom filter

The old redabloom module provided counting Bloom filters, but rebloom does not. Is this something that is likely to be added, or are counting filters not on anyone's roadmap for rebloom?

Out Of Memory allocating error

Hello. I use rebloom with Redis 4.0.1; as for the rebloom version, I cloned this repo on July 27.
I have around 130M string keys that I tried to add to a Bloom filter with the default size. I used BF.MADD with 200 arguments each time. Every time I got the error Out Of Memory allocating 18446744073441116160 bytes!, and every time the same key appeared in the client info: '1471273593317254873_1504545000'. However, I successfully added this single key with BF.ADD after the error. All keys are strings with the same format.

I gathered some debug info for this bloom filter:

127.0.0.1:6379> BF.DEBUG my_bloom
 1) "size:10449084"
 2) "bytes:120 bits:958 hashes:7 capacity:100 size:100 ratio:0.01"
 3) "bytes:312 bits:2494 hashes:9 capacity:200 size:200 ratio:0.0025"
 4) "bytes:840 bits:6719 hashes:12 capacity:400 size:400 ratio:0.0003125"
 5) "bytes:2257 bits:18055 hashes:16 capacity:800 size:800 ratio:1.95313e-05"
 6) "bytes:5957 bits:47652 hashes:21 capacity:1600 size:1600 ratio:6.10352e-07"
 7) "bytes:15376 bits:123004 hashes:27 capacity:3200 size:3200 ratio:9.53674e-09"
 8) "bytes:38831 bits:310642 hashes:34 capacity:6400 size:6400 ratio:7.45058e-11"
 9) "bytes:96127 bits:769016 hashes:42 capacity:12800 size:12800 ratio:2.91038e-13"
10) "bytes:233804 bits:1870429 hashes:51 capacity:25600 size:25600 ratio:5.68434e-16"
11) "bytes:559940 bits:4479518 hashes:61 capacity:51200 size:51200 ratio:5.55112e-19"
12) "bytes:1323011 bits:10584088 hashes:72 capacity:102400 size:102400 ratio:2.71051e-22"
13) "bytes:3089218 bits:24713743 hashes:84 capacity:204800 size:204800 ratio:6.61744e-26"
14) "bytes:7138694 bits:57109549 hashes:97 capacity:409600 size:409600 ratio:8.07794e-30"
15) "bytes:16345635 bits:130765080 hashes:111 capacity:819200 size:819200 ratio:4.93038e-34"
16) "bytes:37123230 bits:296985834 hashes:126 capacity:1638400 size:1638400 ratio:1.50463e-38"
17) "bytes:83701305 bits:669610439 hashes:142 capacity:3276800 size:3276800 ratio:2.29589e-43"
18) "bytes:187494158 bits:1499953264 hashes:159 capacity:6553600 size:3895584 ratio:1.75162e-48"

Here is the full crash log:

9708:C 31 Jul 15:24:31.387 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
9708:C 31 Jul 15:24:31.387 # Redis version=4.0.1, bits=64, commit=00000000, modified=0, pid=9708, just started
9708:C 31 Jul 15:24:31.387 # Configuration loaded
9709:M 31 Jul 15:24:31.390 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 4.0.1 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 9709
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

9709:M 31 Jul 15:24:31.391 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
9709:M 31 Jul 15:24:31.392 # Server initialized
9709:M 31 Jul 15:24:31.392 * Module 'bf' loaded from /home/deploy/rebloom/rebloom.so
9709:M 31 Jul 15:24:31.392 * DB loaded from disk: 0.000 seconds
9709:M 31 Jul 15:24:31.392 * Ready to accept connections
9709:M 31 Jul 16:20:37.645 # Out Of Memory allocating 18446744073441116160 bytes!


=== REDIS BUG REPORT START: Cut & paste starting from here ===
9709:M 31 Jul 16:20:37.660 # ------------------------------------------------
9709:M 31 Jul 16:20:37.660 # !!! Software Failure. Press left mouse button to continue
9709:M 31 Jul 16:20:37.660 # Guru Meditation: Redis aborting for OUT OF MEMORY #server.c:3539
9709:M 31 Jul 16:20:37.660 # (forcing SIGSEGV in order to print the stack trace)
9709:M 31 Jul 16:20:37.660 # ------------------------------------------------
9709:M 31 Jul 16:20:37.660 # Redis 4.0.1 crashed by signal: 11
9709:M 31 Jul 16:20:37.660 # Crashed running the instuction at: 0x4674f7
9709:M 31 Jul 16:20:37.660 # Accessing address: 0xffffffffffffffff
9709:M 31 Jul 16:20:37.660 # Failed assertion: <no assertion failed> (<no file>:0)

------ STACK TRACE ------
EIP:
/usr/local/bin/redis-server 127.0.0.1:6379(_serverPanic+0x137)[0x4674f7]

Backtrace:
/usr/local/bin/redis-server 127.0.0.1:6379(logStackTrace+0x29)[0x468f99]
/usr/local/bin/redis-server 127.0.0.1:6379(sigsegvHandler+0xac)[0x46969c]
/lib/x86_64-linux-gnu/libpthread.so.0(+0x10330)[0x7080e7902330]
/usr/local/bin/redis-server 127.0.0.1:6379(_serverPanic+0x137)[0x4674f7]
/usr/local/bin/redis-server 127.0.0.1:6379(redisOutOfMemoryHandler+0x2e)[0x42a03e]
/usr/local/bin/redis-server 127.0.0.1:6379(zcalloc+0x49)[0x432e99]
/home/deploy/rebloom/rebloom.so(bloom_init+0xa0)[0x7080e50491c0]

------ INFO OUTPUT ------
# Server
redis_version:4.0.1
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:5b92c25c761821fd
redis_mode:standalone
os:Linux 3.14.32-xxxx-grs-ipv6-64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.4
process_id:9709
run_id:4e2ca34d9095e3fdff3861c249710743c70fbbb1
tcp_port:6379
uptime_in_seconds:3366
uptime_in_days:0
hz:10
lru_clock:8338613
executable:/usr/local/bin/redis-server
config_file:/etc/redis/6379.conf

# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0

# Memory
used_memory:361666968
used_memory_human:344.91M
used_memory_rss:343490560
used_memory_rss_human:327.58M
used_memory_peak:361666968
used_memory_peak_human:344.91M
used_memory_peak_perc:100.00%
used_memory_overhead:832328
used_memory_startup:765688
used_memory_dataset:360834640
used_memory_dataset_perc:99.98%
total_system_memory:33684324352
total_system_memory_human:31.37G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:10000000000
maxmemory_human:9.31G
maxmemory_policy:noeviction
mem_fragmentation_ratio:0.95
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1501507471
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats
total_connections_received:3
total_commands_processed:13533345
instantaneous_ops_per_sec:12456
total_net_input_bytes:921368095
total_net_output_bytes:108290842
instantaneous_input_kbps:751.90
instantaneous_output_kbps:97.31
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:3
keyspace_misses:2
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication
role:master
connected_slaves:0
master_replid:f3d3d6bcf21bb76214441853e172ce5da7fedad7
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU
used_cpu_sys:191.61
used_cpu_user:150.84
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

# Commandstats
cmdstat_command:calls=2,usec=1281,usec_per_call=640.50

# Cluster
cluster_enabled:0

# Keyspace
db0:keys=3,expires=0,avg_ttl=0

------ CLIENT LIST OUTPUT ------
id=2 addr=127.0.0.1:39036 fd=7 name= age=3366 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=4 oll=0 omem=0 events=r cmd=BF.MADD
id=4 addr=127.0.0.1:49139 fd=8 name= age=247 idle=10 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=0 obl=0 oll=0 omem=0 events=r cmd=BF.DEBUG

------ CURRENT CLIENT INFO ------
id=2 addr=127.0.0.1:39036 fd=7 name= age=3366 idle=0 flags=N db=0 sub=0 psub=0 multi=-1 qbuf=0 qbuf-free=32768 obl=4 oll=0 omem=0 events=r cmd=BF.MADD
argv[0]: 'BF.MADD'
argv[1]: 'my_bloom'
argv[2]: '1471273593317254873_1504545000'
9709:M 31 Jul 16:20:37.681 # key 'my_bloom' found in DB containing the following object:
9709:M 31 Jul 16:20:37.681 # Object type: 5
9709:M 31 Jul 16:20:37.681 # Object encoding: 0
9709:M 31 Jul 16:20:37.681 # Object refcount: 1

------ REGISTERS ------
9709:M 31 Jul 16:20:37.681 #
RAX:0000000000000000 RBX:00000000004f3ed5
RCX:00000000fbad000c RDX:0000000000000000
RDI:00007080e78eb760 RSI:0000000000000000
RBP:0000000000000dd3 RSP:00007e2573d99c30
R8 :00000000031a49c0 R9 :00007080e78eb7b8
R10:00007080e78eb7b8 R11:0000000000000206
R12:00007080e6c21fb8 R13:a6500f74861faa5d
R14:00007080e524b630 R15:0000000000000000
RIP:00000000004674f7 EFL:0000000000010202
CSGSFS:0000000000000033
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3f) -> 00000000004d4c9e
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3e) -> 00000000000a1000
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3d) -> 00007080e7200188
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3c) -> 00007e2573d99d30
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3b) -> 00007e2573d99d30
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c3a) -> 00007080e7200180
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c39) -> 0000000000000000
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c38) -> 0000000000000000
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c37) -> 59524f4d454d2046
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c36) -> 4f2054554f20726f
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c35) -> 6620676e6974726f
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c34) -> 6261207369646552
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c33) -> 00007e2573d99d60
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c32) -> 00007e2573d99e30
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c31) -> 0000003000000018
9709:M 31 Jul 16:20:37.681 # (00007e2573d99c30) -> 00000000004d3c40

------ FAST MEMORY TEST ------
9709:M 31 Jul 16:20:37.690 # Bio thread for job type #0 terminated
9709:M 31 Jul 16:20:37.690 # Bio thread for job type #1 terminated
9709:M 31 Jul 16:20:37.690 # Bio thread for job type #2 terminated
*** Preparing to test memory region 745000 (98304 bytes)
*** Preparing to test memory region 3196000 (139264 bytes)
*** Preparing to test memory region 7080cf400000 (364904448 bytes)
*** Preparing to test memory region 7080e524d000 (8388608 bytes)
*** Preparing to test memory region 7080e5b34000 (8388608 bytes)
*** Preparing to test memory region 7080e63e0000 (8388608 bytes)
*** Preparing to test memory region 7080e6c00000 (2097152 bytes)
*** Preparing to test memory region 7080e7200000 (2097152 bytes)
*** Preparing to test memory region 7080e78ed000 (20480 bytes)
*** Preparing to test memory region 7080e7b0c000 (16384 bytes)
*** Preparing to test memory region 7080e822d000 (16384 bytes)
*** Preparing to test memory region 7080e8238000 (4096 bytes)
*** Preparing to test memory region 7080e8239000 (8192 bytes)
*** Preparing to test memory region 7080e823e000 (4096 bytes)
.O.O.O.O.O.O.O.O.O.O.O.O.O.O
Fast memory test PASSED, however your memory can still be broken. Please run a memory test for several hours if possible.

------ DUMPING CODE AROUND EIP ------
Symbol: _serverPanic (base: 0x4673c0)
Module: /usr/local/bin/redis-server 127.0.0.1:6379 (base 0x400000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=0x4673c0 -D -b binary -m i386:x86-64 /tmp/dump.bin
------
9709:M 31 Jul 16:20:38.982 # dump of function (hexdump of 439 bytes):
5589f5534889fb4881ece801000084c048898c24480100004c898424500100004c898c245801000074400f298424600100000f298c24700100000f299424800100000f299c24900100000f29a424a00100000f29ac24b00100000f29b424c00100000f29bc24d001000064488b042528000000488984242801000031c0488d8424000200004c8d4c2408488d7c24204989d0b900010000ba010000004889442410488d842430010000be00010000c744240818000000c744240c300000004889442418e858abfbff8b056e002e0085c07505e829f8ffff31c0be88125000bf03000000e8582afcff31c0bec0125000bf03000000e8472afcff488d5424204189e84889d931c0bed0215000bf03000000e82b2afcff31c0be00135000bf03000000e81a2afcff31c0be88125000bf03000000e8092afcffc60425ffffffff78488b842428010000644833042528000000750a4881c4e80100005b5dc3e87fa4fbff6666666666662e0f1f840000000000415741564155415455534881ec8801000048c7070000000048c7470800000000c74710000000008b05cbfa2d0064488b1c252800000048899c247801000031db48897c241885c0

After that, I tried specifying the Bloom filter capacity explicitly and set it to 100M. I added around 70M keys, but in the end Redis crashed with the same error; however, BF.DEBUG shows a size of only ~3M after the crash:

127.0.0.1:6379> BF.DEBUG my_bloom
1) "size:3070177"
2) "bytes:239626460 bits:1917011675 hashes:14 capacity:100000000 size:3070177 ratio:0.0001"

BF.RESERVE with an error_rate >= 2 will create a very slow bloom filter

This definitely isn't something that should come up "normally", but I happened across it today when I accidentally put the arguments to BF.RESERVE in the wrong order. It could also come up if someone misinterprets the documentation about valid error_rate values (for example, trying to get a 5% false-positive rate by passing 5 instead of 0.05).

127.0.0.1:6379> BF.RESERVE broken_bloom 2 1000
OK

127.0.0.1:6379> BF.DEBUG broken_bloom
1) "size:0"
2) "bytes:256 bits:2048 hashes:4294967295 capacity:4294965877 size:0 ratio:2"

127.0.0.1:6379> BF.ADD broken_bloom test
(integer) 1
(16.83s)

So it gets set up using ~4 billion hashes, and takes an extremely long time to add any items. I'm sure anyone who does this will realize something is wrong very quickly, but it would probably be best to restrict error_rate to the range 0.0 < error_rate < 1.0 anyway. Other strange behaviors are possible outside that range, like passing a negative error_rate; and rates from 1.0 up to (but not including) 2.0 produce 0 hashes, i.e. a 100% false-positive rate.
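The reported hashes:4294967295 (UINT32_MAX) is consistent with the textbook hash-count formula k = -log2(error_rate) going negative for error_rate >= 2 and wrapping when stored in an unsigned 32-bit field. A minimal sketch of that arithmetic (`estimated_hashes` and `as_uint32` are illustrative helpers, not the module's actual code):

```python
import math

def estimated_hashes(error_rate: float) -> int:
    """Number of hash functions for a target false-positive rate:
    k = -log2(error_rate) (textbook Bloom filter formula)."""
    return math.ceil(-math.log2(error_rate))

def as_uint32(n: int) -> int:
    """Simulate storing a (possibly negative) count in an unsigned 32-bit field."""
    return n & 0xFFFFFFFF

print(estimated_hashes(0.01))          # 7 — a sane 1% rate
print(estimated_hashes(1.5))           # 0 — rates in [1.0, 2.0) give no hashes
print(as_uint32(estimated_hashes(2)))  # 4294967295 — k = -1 wraps to UINT32_MAX
```

This also matches the observation that rates between 1.0 and 2.0 yield 0 hashes and thus a 100% false-positive rate.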

And thanks again for the quick fix on the other issue earlier today, everything seems to be working great now.

Linking is broken on macOS (at least)

Due to an incorrect definition of LD in the Makefile

$ make
...
gcc /Users/itamar/work/redisbloom/src/rebloom.o /Users/itamar/work/redisbloom/contrib/MurmurHash2.o /Users/itamar/work/redisbloom/rmutil/util.o /Users/itamar/work/redisbloom/src/sb.o /Users/itamar/work/redisbloom/src/cf.o /Users/itamar/work/redisbloom/src/rm_topk.o /Users/itamar/work/redisbloom/src/topk.o /Users/itamar/work/redisbloom/src/rm_cms.o /Users/itamar/work/redisbloom/src/cms.o -o /Users/itamar/work/redisbloom/redisbloom.so -dylib -exported_symbol _RedisModule_OnLoad -macosx_version_min 10.6 -lm -lc
clang: error: unknown argument: '-macosx_version_min'
clang: error: no such file or directory: '_RedisModule_OnLoad'
clang: error: no such file or directory: '10.6'
make: *** [/Users/itamar/work/redisbloom/redisbloom.so] Error 1
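The failing command shows clang being handed raw ld64 options (`-dylib`, `-exported_symbol`, `-macosx_version_min`), which the compiler driver does not understand. A sketch of one possible fix, not the project's actual patch (variable name and flag spellings assumed): translate the options to driver-style flags, or forward them with `-Wl,`:

```make
# Sketch only: when LD is the compiler driver (gcc/clang), ld64 options
# must be spelled as driver flags or forwarded via -Wl,
ifeq ($(shell uname),Darwin)
SHOBJ_LDFLAGS = -dynamiclib -mmacosx-version-min=10.6 \
    -Wl,-exported_symbol,_RedisModule_OnLoad \
    -Wl,-undefined,dynamic_lookup
else
SHOBJ_LDFLAGS = -shared
endif
```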

redis-rebloom docker container will not start

docker run -it -p 6379:6379 redislabs/rebloom:latest gives the following error:

1:C 07 May 15:47:52.309 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 07 May 15:47:52.309 # Redis version=4.0.9, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 07 May 15:47:52.309 # Configuration loaded
[ASCII-art Redis startup banner omitted: Redis 4.0.9 (00000000/0) 64 bit, standalone mode, Port: 6379, PID: 1, http://redis.io]

1:M 07 May 15:47:52.311 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 07 May 15:47:52.311 # Server initialized
1:M 07 May 15:47:52.311 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 07 May 15:47:52.311 # Module /var/lib/redis/modules/rebloom.so failed to load: /var/lib/redis/modules/rebloom.so: cannot open shared object file: No such file or directory
1:M 07 May 15:47:52.311 # Can't load module from /var/lib/redis/modules/rebloom.so: server aborting

Docker version: 18.03.0-ce-mac60 (23751)

Build issue on OS X

When trying to build on MacOS Mojave 10.14.4 (18E226), I get the following errors:

clang: error: unknown argument: '-macosx_version_min'
clang: error: no such file or directory: '_RedisModule_OnLoad'
clang: error: no such file or directory: '10.6'
make: *** [/Users/kyledavisnew/redis/RedisBloom/rebloom.so] Error 1

Question about bit-size calculation

When I used bits = -(entries * ln(error)) / ln(2)^2 to calculate the size, I found a discrepancy.
With entries = 10000000 and error_rate = 0.0000001, about 335477043 bits are needed.
But when I use RedisBloom to execute 'BF.RESERVE bf2 0.0000001 10000000',
its debug message is 'bytes:67108864 bits:536870912 hashes:24 hashwidth:64 capacity:16003208 size:0 ratio:1e-07'. Why the difference?
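The numbers line up if the bit array is rounded up to a power of two and the capacity is then recomputed from the rounded size; this is an inference from the BF.DEBUG output, not a statement about the actual implementation. A quick check:

```python
import math

def theoretical_bits(entries: int, error_rate: float) -> float:
    """Textbook Bloom sizing: m = -n * ln(p) / (ln 2)^2."""
    return -entries * math.log(error_rate) / math.log(2) ** 2

def next_power_of_two(n: int) -> int:
    return 1 << (n - 1).bit_length()

n, p = 10_000_000, 1e-7
m = theoretical_bits(n, p)             # ~335,477,000 bits, matching the manual calculation
rounded = next_power_of_two(math.ceil(m))
print(rounded)                         # 536870912 (2**29), matching bits: in BF.DEBUG
# Recomputing capacity from the rounded size lands near the reported capacity:16003208
print(int(rounded * math.log(2) ** 2 / -math.log(p)))
```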

CF.MEXISTS doesn't work

The CLI exposes it, but it doesn't seem to be doing anything.

ps: awesome module 👍
Thanks!

OR Operation

Hi,
Is an OR operation (merge) of two Bloom filters possible?
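For two plain (non-scaling) Bloom filters built with identical size, hash functions, and seeds, a union is just a bitwise OR of their bit arrays: any element added to either filter is reported present in the union. Note that RedisBloom filters can scale into multiple sub-filters, which complicates a direct merge, and no merge command is exposed here. A sketch of the principle only (not a module feature):

```python
def bloom_or(a: bytes, b: bytes) -> bytes:
    """Union of two Bloom filters with identical parameters:
    a bitwise OR of their bit arrays."""
    if len(a) != len(b):
        raise ValueError("filters must have identical parameters")
    return bytes(x | y for x, y in zip(a, b))

# The union reports present anything present in either input; its
# false-positive rate is at least that of each input filter.
f1 = bytes([0b00010010, 0b10000000])
f2 = bytes([0b00000010, 0b00000001])
print(bloom_or(f1, f2))  # b'\x12\x81'
```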

cuckoo filter used too much memory

Hi, I'm using the cuckoo filter as below:
127.0.0.1:6379[1]> keys *
1) "psua01"
2) "dialog_c_token"
3) "pscs01"
4) "pslg01"
127.0.0.1:6379[1]> cf.debug dialog_c_token
"bktsize:2 buckets:536870912 items:6125119 deletes:0 filters:23"
127.0.0.1:6379[1]> get psua01
"1cf4e9bb"
127.0.0.1:6379[1]> get pscs01
"15b3a447"
127.0.0.1:6379[1]> get pslg01
"72df15a5"

This cuckoo filter consumed 23 GB of memory. Why did it use so much?
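The CF.DEBUG numbers are consistent with the reported memory if each bucket slot holds a one-byte fingerprint (an assumption about the implementation, not a documented fact): the filter has grown to 23 sub-filters, each with 2^29 buckets of 2 slots. A back-of-the-envelope check:

```python
# Back-of-the-envelope check against the CF.DEBUG output above,
# assuming a 1-byte fingerprint per slot (assumption, not documented).
buckets = 536_870_912    # buckets: from CF.DEBUG
bucket_size = 2          # bktsize: slots per bucket
fingerprint_bytes = 1    # assumed fingerprint width
filters = 23             # filters: number of sub-filters

per_filter = buckets * bucket_size * fingerprint_bytes
total = per_filter * filters
print(per_filter // 2**30, "GiB per sub-filter")  # 1 GiB per sub-filter
print(total // 2**30, "GiB total")                # 23 GiB, matching used_memory_human
```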

127.0.0.1:6379> info

Server

redis_version:4.0.0
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:d67dbd36e54575ba
redis_mode:standalone
os:Linux 3.10.0-229.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
atomicvar_api:atomic-builtin
gcc_version:4.8.5
process_id:23143
run_id:8035b890658071e85ee281af622a00c3145e3a0e
tcp_port:6379
uptime_in_seconds:218
uptime_in_days:0
hz:10
lru_clock:505611
executable:/usr/local/bin/redis-server
config_file:/etc/redis/redis.conf

Clients

connected_clients:313
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:220

Memory

used_memory:24709106272
used_memory_human:23.01G
used_memory_rss:24743956480
used_memory_rss_human:23.04G
used_memory_peak:24728498696
used_memory_peak_human:23.03G
used_memory_peak_perc:99.92%
used_memory_overhead:10548714
used_memory_startup:487192
used_memory_dataset:24698557558
used_memory_dataset_perc:99.96%
total_system_memory:33566187520
total_system_memory_human:31.26G
used_memory_lua:37888
used_memory_lua_human:37.00K
maxmemory:25769803776
maxmemory_human:24.00G
maxmemory_policy:noeviction
mem_fragmentation_ratio:1.00
mem_allocator:jemalloc-4.0.3
active_defrag_running:0
lazyfree_pending_objects:0

Persistence

loading:0
rdb_changes_since_last_save:500
rdb_bgsave_in_progress:0
rdb_last_save_time:1527232049
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:-1
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:0
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

Stats

total_connections_received:314
total_commands_processed:26187
instantaneous_ops_per_sec:126
total_net_input_bytes:1474248
total_net_output_bytes:423442
instantaneous_input_kbps:5.58
instantaneous_output_kbps:0.72
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:498
keyspace_misses:1
pubsub_channels:3
pubsub_patterns:0
latest_fork_usec:0
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

Replication

role:master
connected_slaves:0
master_replid:a8200ae6c7c0411ad95043e94bf1035270432824
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

CPU

used_cpu_sys:7.11
used_cpu_user:50.34
used_cpu_sys_children:0.00
used_cpu_user_children:0.00

Cluster

cluster_enabled:0

Keyspace

db1:keys=4,expires=0,avg_ttl=0

Time To Live

Hey,

Is it possible to create a filter (BF.RESERVE) with a TTL?

In my case, I want to create a filter for each of the active users, and make sure Redis cleans up least active user filters when needed. https://redis.io/topics/lru-cache

Thanks,
Philippe

BF.ADD + Reserve

It's awesome that BF.ADD doesn't trigger an error on an empty filter (it creates one with default sizes); however, there are scenarios where you want to create a filter with different parameters. I see two possibilities:

BF.RESERVENX key error_rate size

Reserve a new instance only if it doesn't already exist. If it does exist, just no-op.

OR

BF.ADD RESERVE key error_rate size item

If creating a new Bloom filter, reserve it with the specified parameters. Once reserved, add item.
