facebook / CacheLib

Pluggable in-process caching engine to build and scale high performance services


License: Apache License 2.0

C++ 94.94% Thrift 0.48% Shell 0.71% Gnuplot 0.03% CMake 1.64% JavaScript 0.56% CSS 0.07% Rust 1.58%
performance cpp cache ssd concurrency cache-engine

CacheLib's Introduction



Pluggable caching engine to build and scale high-performance cache services. See the documentation for more information.

What is CacheLib?

CacheLib is a C++ library providing an in-process, high-performance caching mechanism. It offers a thread-safe API for building high-throughput, low-overhead caching services, with the built-in ability to leverage DRAM and SSD caching transparently.
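As a hedged sketch of typical DRAM-cache usage (names follow the public CacheAllocator API; the size, pool split, and error handling are simplified and not meant as the definitive example):

```cpp
#include <cstring>
#include <memory>

#include "cachelib/allocator/CacheAllocator.h"

using Cache = facebook::cachelib::LruAllocator;

int main() {
  Cache::Config config;
  config.setCacheSize(1024 * 1024 * 1024) // 1 GB of DRAM
      .setCacheName("example")
      .validate();
  auto cache = std::make_unique<Cache>(config);
  auto poolId = cache->addPool("default",
                               cache->getCacheMemoryStats().cacheSize);

  // Allocate an item, fill it, and make it visible; all calls are thread safe.
  if (auto handle = cache->allocate(poolId, "key", 5)) {
    std::memcpy(handle->getMemory(), "value", 5);
    cache->insertOrReplace(handle);
  }
  auto found = cache->find("key"); // empty handle on a miss
  return found ? 0 : 1;
}
```

SSD (hybrid) caching is layered on through additional configuration without changing the lookup code.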

Performance benchmarking

CacheLib provides a standalone executable, CacheBench, that can be used to evaluate the performance of heuristics and caching hardware platforms against production workloads. Additionally, CacheBench enables stress-testing implementation and design changes to CacheLib to catch correctness and performance issues.

See CacheBench for usage details and examples.


CacheLib has a single version number, facebook::cachelib::kCachelibVersion, located in CacheVersion.h. This version number must be incremented when incompatible changes are introduced. A change is incompatible if it could cause a compilation failure by removing public API, or if it requires dropping the cache. Details about compatibility when the version number increases can be found in the changelog.

Building and installation

CacheLib provides a build script which prepares and installs all dependencies and prerequisites, then builds CacheLib. The build script has been tested to work on CentOS 8, Ubuntu 18.04, and Debian 10.

git clone
cd CacheLib
./contrib/ -d -j -v

# The resulting library and executables:
./opt/cachelib/bin/cachebench --help

Re-running ./contrib/ will update CacheLib and its dependencies to their latest versions and rebuild them.

See the build documentation for more details about the building and installation process.


Contributing

We'd love to have your help in making CacheLib better. If you're interested, please read our guide to contributing.


License

CacheLib is Apache licensed, as found in the LICENSE file.

Reporting and Fixing Security Issues

Please do not open GitHub issues or pull requests for security problems: doing so makes the problem immediately visible to everyone, including malicious actors. Security issues in CacheLib can be safely reported via Facebook's Whitehat Bug Bounty program:

Facebook's security team will triage your report and determine whether or not it is eligible for a bounty under our program.


CacheLib's Issues

Error when running the examples

I built CacheLib on Ubuntu 20.04 with ./contrib/ -d -j -v. Then I built the examples, e.g. simple_cache. Finally, I ran the example: ./build/simple_cache.

But I get this error message:
ERROR: flag 'logtostderr' was defined more than once (in files '/data/scott/programs/CacheLib/cachelib/external/glog/src/' and 'src/').

How can I fix it?

Cache Pool Resizing

Can we dynamically change the size of each cache pool? I am confused about whether only the slabs of different size classes are dynamically adjusted within each pool, or whether the size of an individual cache pool itself can be adjusted.
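For reference, CacheAllocator does expose pool-level resizing, separate from the slab rebalancing that moves memory between size classes within a pool. A hedged sketch (poolA/poolB are hypothetical pool ids; API names as I understand them from the headers, not an authoritative answer):

```cpp
// Move 1 GB of capacity from poolA to poolB at runtime. Freed slabs are
// handed to the destination pool asynchronously by the pool-resizer
// background worker, so the change is not instantaneous.
constexpr size_t kOneGB = 1ULL << 30;
cache->resizePools(poolA, poolB, kOneGB);

// Or grow/shrink a single pool against unreserved capacity:
cache->shrinkPool(poolA, kOneGB);
cache->growPool(poolB, kOneGB);
```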

Test failures on OSS platform

Discussed in #61

Originally posted by vicvicg October 11, 2021
When running CacheLib tests following the instructions (, we get different test pass rates depending on the environment, some test failures seem to be intermittent, and we haven’t seen 100% pass rate. Is there a recommended system set up and subset of tests that we can use as an acceptance criteria for code changes?

NvmCacheTests.ConcurrentFills failure :

I0930 19:26:45.051738  8699 BigHash.cpp:110] Reset BigHash
I0930 19:26:45.051754  8699 BlockCache.cpp:611] Reset block cache
/opt/workspace/cachelib/allocator/nvmcache/tests/NvmCacheTests.cpp:385: Failure
Expected: (hdl) != (nullptr), actual: nullptr vs (nullptr)
/opt/workspace/cachelib/allocator/nvmcache/tests/NvmCacheTests.cpp:385: Failure
Expected: (hdl) != (nullptr), actual: nullptr vs (nullptr)

Timer test failure: this seems like a poorly written test that does not account for timing in code that sleeps.

Running main() from /opt/workspace/cachelib/external/googletest/googletest/src/
[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from Util
[ RUN      ] Util.TimerTest
/opt/workspace/cachelib/common/tests/TimeTests.cpp:40: Failure
Expected equality of these values:
    Which is: 1487
    Which is: 1484
[  FAILED  ] Util.TimerTest (1487 ms)
[----------] 1 test from Util (1487 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (1487 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] Util.TimerTest


Build error with folly and fmt

I cloned and ran the build script on Ubuntu 18 and 20 but encountered the same error on both.


I also tried it in a docker container with the following Dockerfile to get the same error

FROM ubuntu:18.04
RUN apt-get update && apt install git sudo -y
RUN git clone
CMD ["./contrib/", "-j"]

Ubuntu 18.04 build error

I tried to build CacheLib from source following the instructions: ./contrib/ -j -T.
CMake is version 3.20.
It reports the following error. Help needed.

CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/tests/AllocatorResizeTypeTest.cpp.o: In function `facebook::cachelib::tests::AllocatorTest<facebook::cachelib::CacheAllocator<facebook::cachelib::TinyLFUCacheTrait> >::getRandomNewKey[abi:cxx11](facebook::cachelib::CacheAllocator<facebook::cachelib::TinyLFUCacheTrait>&, unsigned int)':
/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:26: undefined reference to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)'
/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:28: undefined reference to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)'
CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/tests/AllocatorResizeTypeTest.cpp.o: In function `facebook::cachelib::tests::AllocatorTest<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> >::getRandomNewKey[abi:cxx11](facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>&, unsigned int)':
/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:26: undefined reference to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)'
/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:28: undefined reference to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)'
CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/tests/AllocatorResizeTypeTest.cpp.o: In function `facebook::cachelib::tests::AllocatorTest<facebook::cachelib::CacheAllocator<facebook::cachelib::Lru2QCacheTrait> >::getRandomNewKey[abi:cxx11](facebook::cachelib::CacheAllocator<facebook::cachelib::Lru2QCacheTrait>&, unsigned int)':
/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:26: undefined reference to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)'
CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/tests/AllocatorResizeTypeTest.cpp.o:/home/CacheLib/cachelib/../cachelib/allocator/tests/TestBase-inl.h:28: more undefined references to `facebook::cachelib::test_util::getRandomAsciiStr[abi:cxx11](unsigned int)' follow
collect2: error: ld returned 1 exit status
allocator/CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/build.make:153: recipe for target 'allocator/allocator-test-AllocatorResizeTypeTest' failed
make[2]: *** [allocator/allocator-test-AllocatorResizeTypeTest] Error 1
CMakeFiles/Makefile2:2630: recipe for target 'allocator/CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/all' failed
make[1]: *** [allocator/CMakeFiles/allocator-test-AllocatorResizeTypeTest.dir/all] Error 2

Build error

When I run ./contrib/ -j -T, an error is thrown:

[ 94%] Linking CXX shared library
/usr/bin/ld: /usr/local/lib/libfmt.a( relocation R_X86_64_PC32 against symbol `[email protected]@GLIBC_2.2.5' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: Bad value
collect2: error: ld returned 1 exit status
CMakeFiles/folly.dir/build.make:653: recipe for target '' failed
make[2]: *** [] Error 1
CMakeFiles/Makefile2:173: recipe for target 'CMakeFiles/folly.dir/all' failed
make[1]: *** [CMakeFiles/folly.dir/all] Error 2
Makefile:129: recipe for target 'all' failed
make: *** [all] Error 2 error: make failed error: failed to build dependency 'folly'

I tried -DCMAKE_POSITION_INDEPENDENT_CODE=TRUE and -fPIC, but neither worked.

Can allocator.find() specify the pool to find?

A dumb question: in the base allocate function, we need to specify the pool_id (cache->allocate(pool_id, key, size)). Does that mean different pools can hold the same key (for example, pool1 has key "foo" and pool2 can also have key "foo")?

If we allow the same key in different pools, then in the find function it appears we can only look up by key:
ItemHandle find(Key key, AccessMode mode = AccessMode::kRead);
I didn't find any find overload that looks up by both key and pool_id. Is that not supported?

HybridCache Output


I am using HybridCache to extend the size of the cache to fit the workload.

I am writing fixed-size pages of 4096 bytes to the cache. However, even when the NVM cache size is set to 50GB, the number of objects in the cache does not exceed 1,958,200, i.e. 1,958,200 * 4096 bytes, which is only around 7.5GB. I notice a similar pattern with DRAM usage, where the number of items of size 4096 bytes is lower than it should be based on the allocation. Is there any allocation setting I am missing to optimize for fixed 4096-byte pages, whose absence is leading to fragmentation?

This is the configuration that I am using. Please ignore the additional parameters that I have added.

"cache_config": {
"cacheSizeMB": 100,
"minAllocSize": 4096,
"navyBigHashSizePct": 0,
"nvmCachePath": ["/flash/cache"],
"nvmCacheSizeMB": 50000,
"navyReaderThreads": 32,
"navyWriterThreads": 32,
"navyBlockSize": 4096,
"navySizeClasses": [4096, 8192, 12288, 16384]
"test_config": {
"enableLookaside": "true",
"generator": "block-replay",
"numThreads": 1,
"traceFilePath": "/home/pranav/csv_traces/w81-w85.csv",
"traceBlockSize": 512,
"diskFilePath": "/disk/disk.file",
"pageSize": 4096,
"minLBA": 0

== Allocator Stats ==
Items in RAM : 17,711 (a 100MB allocation should fit more items!)
Items in NVM : 1,958,200 (a 50GB allocation only fits ~2 million 4KB pages)
Alloc Attempts: 122,793,861 Success: 100.00%
RAM Evictions : 115,587,253 (Why is each eviction not being admitted into the NVM cache? If it is, why are items in NVM limited to 1,958,200?)
Cache Gets : 113,169,162

Error Building on debian 11 (bullseye)

I am trying to build the cachelib project on a vanilla debian/bullseye system using the trivial patch below.
However, the cachelib build fails with the following error:
-- Configuring incomplete, errors occurred!
See also "/home/dreddy/test/CacheLib.git/build-cachelib/CMakeFiles/CMakeOutput.log".
See also "/home/dreddy/test/CacheLib.git/build-cachelib/CMakeFiles/CMakeError.log".

[email protected]:~/test/CacheLib.git$ git diff contrib/
diff --git a/contrib/ b/contrib/
index 8f6b531..e03abd9 100755
--- a/contrib/
+++ b/contrib/
@@ -53,6 +53,21 @@ build_debian_10()


+  if test -z "$skip_os_pkgs" ; then
+    ./contrib// \
+      || die "failed to install packages for Debian"
+  fi
+  for pkg in zstd sparsemap fmt folly fizz wangle fbthrift ;
+  do
+    # shellcheck disable=SC2086
+    ./contrib/ -i $pass_params "$pkg" \
+      || die "failed to build dependency '$pkg'"
+  done

if test -z "$skip_os_pkgs" ; then
@@ -129,6 +144,7 @@ DETECTED=

case "$DETECTED" in

+  debian11) build_debian_11 ;;
    debian10) build_debian_10 ;;
    ubuntu18.04) build_ubuntu_18 ;;
    centos8) build_centos_8 ;;
    [email protected]:~/test/CacheLib.git$

gold linker cannot link to folly

I tried to compile cachelib in an internal repo. I hit a problem when linking the CXX executable cachebench-util (the last step, lol). The error message is /usr/bin/ error: cannot find -lfolly. It seems the gold linker cannot find folly. I enabled CMake verbose output; here it is:

[ 99%] Linking CXX executable cachebench-util
cd /opt/cachelib/cachelib/build/cachebench && /usr/local/bin/cmake -E cmake_link_script CMakeFiles/cachebench-util.dir/link.txt --verbose=1
/usr/bin/c++     CMakeFiles/cachebench-util.dir/util/main.cpp.o  -o cachebench-util -Wl,-rpath,"\$ORIGIN/../lib/cachelib/" libcachelib_cachebench.a ../datatype/libcachelib_datatype.a ../allocator/libcachelib_allocator.a ../navy/libcachelib_navy.a ../shm/libcachelib_shm.a ../common/libcachelib_common.a /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ -fuse-ld=gold /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ -Wl,--as-needed /usr/local/lib/ /usr/local/lib/ /usr/lib/x86_64-linux-gnu/ -lfolly /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/local/lib/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ -pthread /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/local/lib/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/libiberty.a -ldl /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ /usr/lib/x86_64-linux-gnu/ -lpthread 
/usr/bin/ error: cannot find -lfolly
collect2: error: ld returned 1 exit status
cachebench/CMakeFiles/cachebench-util.dir/build.make:134: recipe for target 'cachebench/cachebench-util' failed

I can find /usr/local/lib/ And i have already tried to make some changes in cachebench/CMakelist.txt (the lines i comment)

In cachelib/cachebench/CMakeList.txt
add_executable (cachebench main.cpp)
add_executable (cachebench-util util/main.cpp)

FlashShield cache admission

I saw in Appendix C of the paper that a modified FlashShield cache admission policy was implemented and evaluated but I am unable to locate the admission policy in code. Could someone please point me to where I can find it?

Do BigHash.Bucket.Slot.Size and BlockCache.Region.Entry.EntryDesc.KeyHash need to be serialized to disk?

As the pictures above show, BigHash.Bucket.Slot.Size can be calculated from Slot.BucketEntry.KeySize + Slot.BucketEntry.ValueSize, and BlockCache.Region.Entry.EntryDesc.KeyHash can be calculated from BlockCache.Region.Entry.EntryValue.Key. So I think there is no need to serialize BigHash.Bucket.Slot.Size and BlockCache.Region.Entry.EntryDesc.KeyHash, which would save disk space. Especially since BigHash is for small items, where KeySize and ValueSize may each be just 1 byte, the 4-byte size overhead is not small.

invalid allocationClassSizeFactor

When I test cachelib in a test cluster, an error is thrown:
E0818 21:25:01.105298 14 cachelib_cache_handler.cpp:54] invalid factor 6.93298464824273e-310
It is thrown from the check in MemoryAllocator.cpp's generateAllocSizes:
if (factor <= 1.0) { throw std::invalid_argument(folly::sformat("invalid factor {}", factor)); }

The way i create cachelib instance is

    Cache::Config config;
    return std::make_unique<Cache>(config);

where cache_size = 76GB.

I didn't set allocationClassSizeFactor anywhere, and I think it defaults to 1.25? I am not sure what this config (allocationClassSizeFactor) is or why it is 0.

Build error "CacheStats.cpp:51:33: error: conversion from ‘SizeVerify<15968>’ to non-scalar type ‘SizeVerify<16160>’ requested"

Describe the bug
Tried to build cachelib in ubuntu:18.04, ubuntu:latest, and centos:8 docker containers, running on a Mac M1 with macOS 12. All failed with the error "CacheStats.cpp:51:33: error: conversion from ‘SizeVerify<15968>’ to non-scalar type ‘SizeVerify<16160>’ requested".

To Reproduce
Steps to reproduce the behavior:

  1. docker run -it centos:8 /bin/bash
  2. inside container clone cachelib repo
  3. ./contrib/ -j


Desktop (please complete the following information):

  • Macbook M1
  • Docker version 20.10.10
  • OS: [MacOS 12.0.1]
  • Docker image: ubuntu:18.04, ubuntu:latest, and centos:8

Samples get linked against both the system libraries and the ones built from source, resulting in conflicts

Samples from ./examples/[simple_cache|simple_compact_cache] get linked against both the system glog and the glog built from source, resulting in conflicts.

Steps to reproduce:

./contrib/ -d -v # Build CacheLib
cd examples/simple_cache/
./ # Build simple_cache example
ldd build/simple-cache-example | grep glog => /lib/x86_64-linux-gnu/ (0x00007f0d95bae000) => /home//CacheLib/opt/cachelib/lib/ (0x00007f0d941ef000)
ERROR: flag 'logtostderr' was defined more than once (in files '/home//CacheLib/cachelib/external/glog/src/' and 'src/').

OS: Ubuntu 20.04 running on WSL2.


Hey there!

I was wondering what the current story around replication is, so that one can survive failures (for higher availability and total throughput). I'm building a prototype of a highly concurrent service; a direct port from redis would require a ~30 node cluster (although, from experimentation, cachelib will have a smaller memory footprint).

CacheLib won't build on Centos 8.1 with kernel 5.6.13-0_fbk6_4203_g4cb46d044bc6

I have modified the NandWrites.cpp file, wdcWriteBytes function to support getting the bytes written for WDC drives and having issues building the CacheLib executable. The NandWrites.txt attached file contains the changes made to NandWrites.cpp file needed to support WDC drives.

The gmake is failing with the following errors:
CMakeFiles/cmTC_78ecd.dir/src.c.o: In function `main':
src.c:(.text+0x2f): undefined reference to `pthread_create'
src.c:(.text+0x3b): undefined reference to `pthread_detach'
src.c:(.text+0x47): undefined reference to `pthread_cancel'
src.c:(.text+0x58): undefined reference to `pthread_join'
collect2: error: ld returned 1 exit status

See the attached log and out files in the file for more details.

Build on centos8 failed

/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `X509_STORE_CTX_get_chain'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `sk_pop_free'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `SSL_CTX_get_ex_new_index'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `SSL_load_error_strings'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `EVP_cleanup'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_set_dynlock_lock_callback'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `sk_num'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `sk_value'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_THREADID_set_callback'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_add_lock'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_cleanup_all_ex_data'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `SSL_get_ex_new_index'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `HMAC_CTX_cleanup'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `HMAC_CTX_init'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `EVP_MD_CTX_cleanup'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `OpenSSL_add_all_ciphers'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `SSLv23_method'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `OpenSSL_add_all_digests'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_set_dynlock_destroy_callback'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `ERR_free_strings'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `EVP_MD_CTX_init'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `SSL_library_init'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `OPENSSL_add_all_algorithms_noconf'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_THREADID_set_numeric'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_set_dynlock_create_callback'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `ERR_load_crypto_strings'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_set_locking_callback'
/opt/rh/gcc-toolset-10/root/usr/bin/ld: ../../../ undefined reference to `CRYPTO_num_locks'
collect2: error: ld returned 1 exit status
make[2]: *** [folly/logging/example/CMakeFiles/logging_example.dir/build.make:126: folly/logging/example/logging_example] Error 1
make[2]: Leaving directory '/code/CacheLib/build-folly'
make[1]: *** [CMakeFiles/Makefile2:357: folly/logging/example/CMakeFiles/logging_example.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....

Some questions on cachelib allocator, pool and pool size

  1. When constructing the allocator with facebook::cachelib::LruAllocator cache(config), does this throw any errors/exceptions? How do we know the cache was created successfully? Do we only need to check cache == nullptr?
  2. Another question is on pool size accounting. For example, when I give the cache 45GB and split it into two pools of 30 GB and 15 GB, I have to do the following to assign the pool sizes:
    cache->addPool(name1, cache_->getCacheMemoryStats().cacheSize * 30 / (30 + 15))
    cache->addPool(name2, cache_->getCacheMemoryStats().cacheSize * 15 / (30 + 15))
    Is there an easier way to account for the overhead?
  3. In our system, we would prefer that the same key can exist in different pools. One way to do it might be adding a prefix (pool index) to the key name, but this string concatenation happens on every put/get operation, which may degrade performance. Is there a better way to support the same key in different pools?

Does cachelib support persist cache by call specific API?

Currently, cachelib can persist the cache by calling the shutdown API. But if the process is killed with 'kill -9', no signal can be caught, so the shutdown API cannot be called.

Does cachelib have a persistence API that the application could call periodically, so that the cache can be loaded even if the process is killed with 'kill -9'? A cache that is not fully up to date is acceptable.
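For context, the persistence flow the documentation describes is shared-memory based: cache contents live in shm, metadata is written by shutDown(), and a later process attaches. A hedged sketch of that flow (kCacheSize is a placeholder; I am not aware of a periodic-checkpoint API):

```cpp
// First run: create the cache with persistence enabled.
Cache::Config config;
config.setCacheSize(kCacheSize)
    .setCacheName("example")
    .enableCachePersistence("/path/to/metadata/dir")
    .validate();
auto cache = std::make_unique<Cache>(Cache::SharedMemNew, config);
// ... use the cache ...
cache->shutDown(); // saves the metadata needed for the next attach

// Later run: attach to the previous contents instead of starting cold.
auto warm = std::make_unique<Cache>(Cache::SharedMemAttach, config);
```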

Adaptive TinyLFU


Simplify deployment requirements by reducing the need to select the best eviction policy, which is workload dependent and can change over the lifetime of an application. Instead, use simple algorithmic techniques to dynamically reconfigure the cache for the current workload. This allows the cache to robustly make near optimal eviction choices in a wider range of workload patterns.


The TinyLFU implementation uses a static configuration between the tiny / main regions. Per the paper, this is defaulted to 1% / 99% to favor frequency-biased workloads, such as databases and search engines. However, some workloads are highly skewed towards recency such as blockchain mining and social networks. See for example Twitter's data sets where LRU is near optimal. In such cases the frequency filter can degrade the hit ratio, as shown below.


The implementation states that this static parameter does not need to be tuned. While some users might realize their workload bias and choose a different eviction policy, ideally the algorithm is intelligent enough to discover the optimal setting itself. In the Caffeine library, this is done using hill climbing (short article, paper).

Suggested Approach

Use simple hill climbing to guess at a new configuration, sample the hit rate, calculate a new step size based on whether the change was an improvement, adjust, and repeat. In Caffeine the initial step size is 6.25% and it decays at a rate of 0.98, so that the policy converges to (rather than oscillates around) the best configuration. This process restarts if the hit rate changes by 5% or more. The sample period should be large enough to avoid a noisy hit rate and can piggyback on the reset interval for decaying the access frequency counters. As shown below, this approach can handle highly skewed workloads that change with the environment.


Success Metrics

  • Achieve a high hit ratio in LRU-biased workloads
  • Achieve a high hit ratio in LFU / MRU biased workloads, like databases and search engines.
  • Dynamically reconfigure when the workload pattern changes, throughout the lifetime of the cache, to optimize for the new environment.

Additional Suggestions

  1. By default, the CountMinSketch uses uint32 counters. This can be reduced to 4-bit counters without degrading the hit rate, as the goal is to compare entries to determine if one is hotter than the other. That can be further reduced by incorporating the doorkeeper mechanism, a bloom filter, so that fewer counters are needed.
  2. The concurrency model requires locking to update the eviction policy. If embedded as a local cache this might observe contention, hence a tryLock is used. A ring buffer to sample the access history is a more efficient way to do this, as it reduces the CASes required (short article).
  3. The item's allocation size can be leveraged by the admission policy to improve the hit rate (paper).
  4. Incorporate jitter if the policy is subject to hash flooding attacks.
  5. Simulate workloads against the Java implementation to validate that the policy achieves a similar hit rate.

rust bindings fail to build

Describe the bug
Rust bindings fail to build.

To Reproduce
Steps to reproduce the behavior:

  1. Change into cachelib/rust within this repo (crate root)
  2. Run cargo build
  3. See error

Expected behavior
Expected the crate to build


CacheLib/cachelib/rust$ cargo build --release
    Updating index
  Downloaded cxx-build v1.0.57
  Downloaded scratch v1.0.0
  Downloaded cxxbridge-macro v1.0.57
  Downloaded anyhow v1.0.51
  Downloaded cc v1.0.72
  Downloaded thiserror-impl v1.0.30
  Downloaded thiserror v1.0.30
  Downloaded once_cell v1.8.0
  Downloaded bytes v1.1.0
  Downloaded serde v1.0.130
  Downloaded cxxbridge-flags v1.0.57
  Downloaded link-cplusplus v1.0.6
  Downloaded proc-macro2 v1.0.33
  Downloaded syn v1.0.82
  Downloaded libc v0.2.109
  Downloaded cxx v1.0.57
  Downloaded codespan-reporting v0.11.1
  Downloaded abomonation v0.7.3
  Downloaded 18 crates (1.5 MB) in 0.70s
   Compiling proc-macro2 v1.0.33
   Compiling unicode-xid v0.2.2
   Compiling cc v1.0.72
   Compiling syn v1.0.82
   Compiling scratch v1.0.0
   Compiling termcolor v1.1.2
   Compiling cxxbridge-flags v1.0.57
   Compiling unicode-width v0.1.9
   Compiling lazy_static v1.4.0
   Compiling serde v1.0.130
   Compiling libc v0.2.109
   Compiling anyhow v1.0.51
   Compiling abomonation v0.7.3
   Compiling once_cell v1.8.0
   Compiling codespan-reporting v0.11.1
   Compiling link-cplusplus v1.0.6
   Compiling cxx v1.0.57
   Compiling quote v1.0.10
   Compiling bytes v1.1.0
   Compiling cxx-build v1.0.57
   Compiling thiserror-impl v1.0.30
   Compiling cxxbridge-macro v1.0.57
   Compiling thiserror v1.0.30
   Compiling cachelib v0.1.0 (/home/brian/CacheLib/cachelib/rust)
The following warnings were emitted during compilation:

warning: /home/brian/CacheLib/cachelib/rust/target/release/build/cachelib-bbeb85d31435509b/out/cxxbridge/sources/cachelib/src/ fatal error: cachelib/rust/src/cachelib.h: No such file or directory
warning:  #include "cachelib/rust/src/cachelib.h"
warning:           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
warning: compilation terminated.

error: failed to run custom build command for `cachelib v0.1.0 (/home/brian/CacheLib/cachelib/rust)`

Desktop (please complete the following information):

  • OS: Debian 9 / 10 / 11
  • Rust: 1.56.1 (current stable)

Additional context
Rust crate is currently not tested in CI. Adding to CI would help make sure it builds and continues to build as changes are made.

Debian Error

Build error for debian in docker - reproducible across debian 10 / 11 / debian docker branches

CMake Error at cachebench/CMakeLists.txt:15 (add_library):
  Target "cachelib_cachebench" links to target "ZLIB::ZLIB" but the target
  was not found.  Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?

CMake Error at common/CMakeLists.txt:17 (add_library):
  Target "cachelib_common" links to target "ZLIB::ZLIB" but the target was
  not found.  Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?

CMake Error at shm/CMakeLists.txt:17 (add_library):
  Target "cachelib_shm" links to target "ZLIB::ZLIB" but the target was not
  found.  Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?

CMake Error at navy/CMakeLists.txt:17 (add_library):
  Target "cachelib_navy" links to target "ZLIB::ZLIB" but the target was not
  found.  Perhaps a find_package() call is missing for an IMPORTED target, or
  an ALIAS target is missing?

CMake Error at allocator/CMakeLists.txt:26 (add_library):
  Target "cachelib_allocator" links to target "ZLIB::ZLIB" but the target was
  not found.  Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?

CMake Error at datatype/CMakeLists.txt:17 (add_library):
  Target "cachelib_datatype" links to target "ZLIB::ZLIB" but the target was
  not found.  Perhaps a find_package() call is missing for an IMPORTED
  target, or an ALIAS target is missing?

To Reproduce
Steps to reproduce the behavior:
docker run --rm -it -v $(pwd):/tower
git clone {cachelib}
./contrib/ -d -j -v
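A plausible fix (an assumption on my part, not verified against CacheLib's build scripts): the ZLIB::ZLIB imported target is never created, either because the zlib development package is missing on the Debian image or because no find_package(ZLIB) call runs before the targets that link it. Something like the following near the top of the top-level CMakeLists.txt should create the target:

```cmake
# Hypothetical sketch: create the ZLIB::ZLIB imported target before any
# target_link_libraries(... ZLIB::ZLIB) call. Requires the zlib headers
# to be installed (e.g. "apt-get install zlib1g-dev" on Debian).
find_package(ZLIB REQUIRED)
```

If find_package still fails, that points to the missing zlib1g-dev package rather than the CMake files.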

Supporting LIRS replacement algorithm in CacheLib

We have implemented the LIRS replacement policy in the CacheLib source code and used cachebench to read the traces and generate the results.

The traces in the test are from CloudPhysics and were collected by a caching analytics service in production VMware environments. (As the trace files only contain block numbers, we set enableLookaside to true and fixed the value size at 1KB.) Some trace files can be downloaded here

Attached are LIRS's miss ratio curves in comparison with the replacement algorithms currently in CacheLib, including LRU, LRU2Q, and TinyLFU (all algorithms use their default settings).

The results are very encouraging: where CacheLib's existing algorithms don't perform as well as expected (the miss ratio curves don't decrease as the cache size increases), LIRS mostly fixes the performance issue. Some of the results are attached. Please take a look.


The LIRS source code for CacheLib can be found in the 'LIRS' branch. (The current implementation is a prototype, so it may not pass the unit tests in multithreaded runs.)

To reproduce the results, you may follow these steps:

git clone --recurse-submodules
cd CacheLib
git checkout LIRS
# external lib branch when testing
# fbthrift: 0b008a959
# fizz: b5cd3b59
# fmt: 094b66e8
# folly: c4d6fcdde
# gflags: e171aa2
# glog: 8f9ccfe
# googletest: 1b18723e
# sparsemap: 1186290
# wangle: 9f706148
# zstd: a488ba11
./contrib/ -j
cp cachelib/cachebench/config.json ./build-cachelib/cachebench
cp cachelib/cachebench/ ./build-cachelib/cachebench
cd build-cachelib/cachebench
# before running the script, please modify the `trace_path` parameter in ``

We hope to contribute to the CacheLib community by eventually merging LIRS into CacheLib. To this end, we will continue to improve the LIRS implementation based on feedback from you and the rest of the community.

Ubuntu 20.04 LTS build error


Build error when building cachelib using the provided script

./contrib/ -d -j -T -v

Compiler versions

  • cmake: v3.23.0-rc2
  • gcc/g++: v9.3.0

Error log

The log is quite large, and it is always the same error: expression cannot be used as a function. Here's an example:

In file included from /home/gsd/CacheLib/cachelib/../cachelib/allocator/CCacheAllocator.h:20,
                 from /home/gsd/CacheLib/cachelib/../cachelib/allocator/CCacheManager.h:19,
                 from /home/gsd/CacheLib/cachelib/../cachelib/allocator/CacheAllocator.h:38,
                 from /home/gsd/CacheLib/cachelib/allocator/CacheAllocator.cpp:17:
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h: In constructor ‘facebook::cachelib::CCacheMetadata::CCacheMetadata(const SerializationType&)’:
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h:47:34: error: expression cannot be used as a function
   47 |       : keySize_(*object.keySize()), valueSize_(*object.valueSize()) {}
      |                                  ^
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h:47:67: error: expression cannot be used as a function
   47 |       : keySize_(*object.keySize()), valueSize_(*object.valueSize()) {}
      |                                                                   ^
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h: In member function ‘facebook::cachelib::CCacheMetadata::SerializationType facebook::cachelib::CCacheMetadata::saveState()’:
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h:68:21: error: expression cannot be used as a function
   68 |     *object.keySize() = keySize_;
      |                     ^
/home/gsd/CacheLib/cachelib/../cachelib/compact_cache/allocators/CCacheAllocatorBase.h:69:23: error: expression cannot be used as a function
   69 |     *object.valueSize() = valueSize_;
      |                       ^
In file included from /home/gsd/CacheLib/cachelib/../cachelib/allocator/CacheAllocatorConfig.h:28,
                 from /home/gsd/CacheLib/cachelib/../cachelib/allocator/CacheAllocator.h:40,
                 from /home/gsd/CacheLib/cachelib/allocator/CacheAllocator.cpp:17:
/home/gsd/CacheLib/cachelib/../cachelib/allocator/MM2Q.h: In constructor ‘facebook::cachelib::MM2Q::Config::Config(facebook::cachelib::MM2Q::SerializationConfigType)’:
/home/gsd/CacheLib/cachelib/../cachelib/allocator/MM2Q.h:74:46: error: expression cannot be used as a function
   74 |         : Config(*configState.lruRefreshTime(),
      |                                              ^
/home/gsd/CacheLib/cachelib/../cachelib/allocator/MM2Q.h:76:45: error: expression cannot be used as a function
   76 |                  *configState.updateOnWrite(),
      |                                             ^
/home/gsd/CacheLib/cachelib/../cachelib/allocator/MM2Q.h:80:46: error: expression cannot be used as a function
   80 |                  *configState.hotSizePercent(),
      |                                              ^
/home/gsd/CacheLib/cachelib/../cachelib/allocator/MM2Q.h:81:47: error: expression cannot be used as a function
   81 |                  *configState.coldSizePercent()) {}

Thank you for your time.

Rebalancing errors leading to a crash

Hi. I'm getting weird errors while working with CacheLib. The cache creation and first operations complete with no issues, but after a few seconds I start seeing a series of reports such as: E0108 16:48:53.001791 315636 PoolRebalancer.cpp:50] Rebalancing interrupted due to exception: PoolId 80 is not a compact cache, eventually ending with a segfault at: facebook::cachelib::MemoryPoolManager::getPoolById(signed char) const+0x20

This smells like a memory issue, but it's consistent so I don't think it's some random memory overwrite. Can you spot the root cause?


  1. This exact issue happens on 3 machines - RHEL 7, RHEL 8, and Ubuntu 20. I know the RHELs aren't supported but I was able to build and the issue is consistent.
  2. I wrap CacheLib in JNI so I can benchmark it using YCSB (which is the tool we use for benchmarking other caches). CacheLib use is pretty straightforward, see code below.
  3. If I build the JNI wrapper using a cmake config similar to the CacheLib examples, the build succeeds but some symbols are missing at launch. I therefore link CacheLib's libraries directly in the cmake; I'm not sure why my case differs from the example or whether this is relevant. See cmake below.
  4. CacheLib's code is from a few days ago.

My CacheLib client code (snippets from the cachelib_api.cpp referred to by cmake):

Cache::NvmCacheConfig nvmConfig;
nvmConfig.navyConfig.setSimpleFile(cacheFile, fileCapacity, true);
nvmConfig.navyConfig.setReaderAndWriterThreads(readerThreads, writerThreads);
CacheConfig config;
Cache* cache = new Cache(config);
long cacheSize = cache->getCacheMemoryStats().cacheSize;
defaultPool = cache->addPool("default", cacheSize);

optional<string> get(Cache* cache, const string& key) {
    auto val = cache->find(key);
    if (!val) {
        return {};
    }
    return string(reinterpret_cast<const char*>(val->getMemory()), val->getSize());
}

bool put(Cache* cache, const string& key, const string& value) {
    auto handle = cache->allocate(defaultPool, key, value.size());
    if (!handle) {
        return false;
    }
    std::memcpy(handle->getMemory(),, value.size());
    cache->insertOrReplace(handle); // make the item visible to find()
    return true;
}

CMakeLists.txt looks as follows:

cmake_minimum_required(VERSION 3.17)
find_package(JNI REQUIRED)
set(SOURCE_FILES src/main/cpp/cachelib_api.cpp)
find_package(cachelib CONFIG REQUIRED)
add_library(cachelib_api SHARED ${SOURCE_FILES})
target_link_libraries(cachelib_api PUBLIC

Any plan to support cross-platform builds, e.g. Ubuntu 20.04, CentOS 7, and OSX?

Example directory not found in the cachelib repo.


As I'm learning CacheLib, I would like to play with some examples. The documentation mentions a directory named cachelib/examples, but I could not find it in this repo.

I wonder if it is possible to also open-source that directory. Are there any privacy concerns?

Thank you.

build error with Fedora33

Describe the bug
Build error after following the CacheLib build steps.

To Reproduce
Steps to reproduce the behavior: follow the documented build steps and see the error below.
/root/wayne/cachelib/CacheLib/cachelib/../cachelib/cachebench/cache/ItemRecords.h:80:7:   required from ‘bool facebook::cachelib::cachebench::ItemRecords<Allocator>::validate(const DestructorData&) [with Allocator = facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>; facebook::cachelib::cachebench::ItemRecords<Allocator>::DestructorData = facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>::DestructorData]’
/root/wayne/cachelib/CacheLib/cachelib/../cachelib/cachebench/cache/Cache-inl.h:105:33:   required from ‘facebook::cachelib::cachebench::Cache<Allocator>::Cache(const facebook::cachelib::cachebench::CacheConfig&, facebook::cachelib::cachebench::Cache<Allocator>::ChainedItemMovingSync, std::string) [with Allocator = facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>; facebook::cachelib::cachebench::Cache<Allocator>::ChainedItemMovingSync = std::function<std::unique_ptr<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>::SyncObj, std::default_delete<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>::SyncObj> >(facebook::cachelib::KAllocation::Key)>; std::string = std::__cxx11::basic_string<char>]’
/usr/include/c++/10/bits/unique_ptr.h:962:30:   required from ‘typename std::_MakeUniq<_Tp>::__single_object std::make_unique(_Args&& ...) [with _Tp = facebook::cachelib::cachebench::Cache<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> >; _Args = {facebook::cachelib::cachebench::CacheConfig&, std::function<std::unique_ptr<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>::SyncObj, std::default_delete<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>::SyncObj> >(facebook::cachelib::KAllocation::Key)>&}; typename std::_MakeUniq<_Tp>::__single_object = std::unique_ptr<facebook::cachelib::cachebench::Cache<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> > >]’
/root/wayne/cachelib/CacheLib/cachelib/../cachelib/cachebench/runner/CacheStressor.h:96:38:   required from ‘facebook::cachelib::cachebench::CacheStressor<Allocator>::CacheStressor(facebook::cachelib::cachebench::CacheConfig, facebook::cachelib::cachebench::StressorConfig, std::unique_ptr<facebook::cachelib::cachebench::GeneratorBase>&&) [with Allocator = facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait>]’
/usr/include/c++/10/bits/unique_ptr.h:962:30:   required from ‘typename std::_MakeUniq<_Tp>::__single_object std::make_unique(_Args&& ...) [with _Tp = facebook::cachelib::cachebench::CacheStressor<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> >; _Args = {const facebook::cachelib::cachebench::CacheConfig&, const facebook::cachelib::cachebench::StressorConfig&, std::unique_ptr<facebook::cachelib::cachebench::GeneratorBase, std::default_delete<facebook::cachelib::cachebench::GeneratorBase> >}; typename std::_MakeUniq<_Tp>::__single_object = std::unique_ptr<facebook::cachelib::cachebench::CacheStressor<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> >, std::default_delete<facebook::cachelib::cachebench::CacheStressor<facebook::cachelib::CacheAllocator<facebook::cachelib::LruCacheTrait> > > >’
/root/wayne/cachelib/CacheLib/cachelib/cachebench/runner/Stressor.cpp:166:60:   required from here
/root/wayne/cachelib/CacheLib/opt/cachelib/include/fmt/core.h:1715:7: error: static assertion failed: Cannot format an argument. To make type T formattable provide a formatter<T> specialization:
 1715 |       formattable,
      |       ^~~~~~~~~~~
make[2]: *** [cachebench/CMakeFiles/cachelib_cachebench.dir/build.make:173: cachebench/CMakeFiles/cachelib_cachebench.dir/runner/IntegrationStressor.cpp.o] Error 1
make[2]: *** [cachebench/CMakeFiles/cachelib_cachebench.dir/build.make:212: cachebench/CMakeFiles/cachelib_cachebench.dir/runner/Stressor.cpp.o] Error 1
make[2]: Leaving directory '/root/wayne/cachelib/CacheLib/build-cachelib'
make[1]: *** [CMakeFiles/Makefile2:698: cachebench/CMakeFiles/cachelib_cachebench.dir/all] Error 2
make[1]: Leaving directory '/root/wayne/cachelib/CacheLib/build-cachelib'
make: *** [Makefile:149: all] Error 2 error: make failed error: failed to build cachelib

Expected behavior
No build error.

Desktop (please complete the following information):

  • OS: Fedora 33 Server

FAQ RocksDB vs Cachelib

Why did FB not add cache logic inside RocksDB, instead of creating a separate CacheLib? Both have a K-V interface. What is the best practice for choosing between these two products?

Could cachelib use Apache Thrift instead of FBThrift?

We want to use CacheLib with Apache Arrow, but CacheLib depends on FBThrift while Apache Arrow depends on Apache Thrift. There is a duplicate symbol error during linking, like this:

ld.lld: error: duplicate symbol: apache::thrift::transport::TSocket::peek()

I think it would be better to depend on Apache Thrift so that many apps could use CacheLib.
Am I right?

fields are missing in metadata when restarting

When I enable the persistent cache by following the wiki, I get the following error message
Memory Pool Manager can not be restored, nextPoolId is not set
when trying to attach the old cache instance:
cache = std::make_unique<Cache>(Cache::SharedMemAttach, config);
Some more info:

  1. When I shut down CacheLib with
    auto res = cache.shutDown();
    the metadata is successfully saved, since
    res == Cache::ShutDownStatus::kSuccess
  2. I checked the metadata file (named "metadata") after shutting down. The file is not empty and it contains all of the keys "shm_cache, shm_chained_alloc_hash_table, shm_hash_table, shm_info" followed by the serialized iobuf.
  3. When I restart CacheLib, before creating a cache instance with
    cache = std::make_unique<Cache>(Cache::SharedMemAttach, config);
    I checked the metadata file again. It is not empty (the size is 149B).
  4. The shared memory size is consistent before/after we shut down CacheLib, so the shared memory is not released.

It seems nextPoolId is not serialized into the metadata. Do you have any suggestions here?

Build error on CentOS Stream 8

Describe the bug
CacheLib fails to build using the official build script

To Reproduce
Steps to reproduce the behavior:

  1. Install distrobox
  2. Create a CentOS Stream 8 container: distrobox create --image --name c8s-cachelib
  3. On the host, check out CacheLib. Tested on commit a6882d8 (Feb 24)
  4. distrobox enter c8s-cachelib
  5. On the container, ./contrib/ -d -j -v per the instructions in

Expected behavior
This should build


Desktop (please complete the following information):

  • OS: CentOS Stream 8
$ rpm -q centos-stream-release

Additional context
I tried this to triage the build issue with the latest CacheLib in Fedora. Back in December this built fine, but now it fails with the same errors.

item allocation and slab rebalance problem

I am running some benchmarks and I have noticed that it sometimes crashes because it is unable to get memory for an allocation. This happens when the corresponding slab class has no slabs.

CacheLib currently handles this situation by returning an empty item_handle and asynchronously waking up the rebalancer, which is confusing to me.
Should it wait for the rebalancer? (Or maybe the docs could state that an allocation may not succeed and how to test for it.)

Besides, it seems the ItemHandle operator== overload is not correct:
cachelib/allocator/Handle.h:105
a == nullp should be a.get() == nullp
The same problem occurs elsewhere in the same file. If this bug is confirmed, I can send a pull request.

no resetToHandle found.

Summary of Issue or Feature request

The function resetToItemHandle in FixedSizeArray.h calls resetToHandle, but there is no such function in this project. Should it be resetToItemHandle (in TypedHandle.h)?


Segmentation Fault when trying to run Cachebench

I was able to add support for Western Digital drives by adding the wdcWriteBytes function to NandWrites.cpp. I believe that code is working as expected, but we're getting a segmentation fault when trying to run cachebench. We're using a JSON file that we got from Facebook, but we're not sure if that's what we should be using or if it needs some updates. I've attached it along with the runtime log. We are new to the cachebench app, so hopefully it's something simple with our config or setup.


Clean cached data quickly


We are using CacheLib in a storage system with RAFT consensus. We are looking for a way to invalidate all items in the cache quickly on a partition leader change, to avoid cache incoherence. Since it happens on leader change, the callback must finish quickly, within maybe 1s.

I am wondering whether CacheLib can (1) support removing all keys in the cache with one function call and (2) do this quickly. Our goal is just to make sure those keys are not found when calling cache.find(); this may be achieved by clearing all metadata or data in CacheLib.


Multiple cachelib instances or multiple pools

Hi, I have a question on creating cache using CacheLib.
In our scenario, we want to set up multiple caches for different objects in a single daemon. I read from the CacheLib website:

If you're using multiple cache instances, please consider if you can simplify to using a single instance via multiple cache pools. CacheLib is highly optimized for concurrency; using multiple cachelib instances is typically an anti-pattern unless you have a good reason.

If I understand correctly, setting them up as pools of the same cache instance achieves better performance than setting them up as individual cache instances. Am I correct? But in this case, these pools have to use the same eviction policy, as they are added to the same cache trait (e.g., LruCacheTrait, Lru2QCacheTrait, etc.). What if I want them to have configurable eviction policies? Please advise. Thanks!

An example proposal

New Feature Foo


Increase flash space efficiency


This is related to project Bar. It addresses a key gap in its design.


We will add an additional component between X and Y. We will introduce a new API called Hoop.

Success Metrics

  1. No regression on CPU
  2. No regression on read and write latency to the flash device
  3. Increase space density by 20%
  4. Add new metrics here

NVM cache replacement policy

I had a question regarding the replacement policy of NVM cache.

When an item in NVM cache is accessed, it is moved to DRAM. Does this leave a copy of that item in NVM?

If it does leave a copy then how does it make space for the evicted item from DRAM that has to be admitted?

Furthermore, what difference does selecting FIFO or LRU make? Since the item is moved to DRAM after every hit, the order in which items are evicted would be the same in LRU and FIFO.

Cross-platform install integrated into vcpkg

Thank you for this excellent work. I have read the CacheLib docs and want to use it as an in-process multithreaded read-write cache, but I failed to install it on macOS and CentOS 7.
Can you provide a vcpkg cross-platform install solution?

Question: how cachelib recover cache by its metadata

When CacheLib backs up metadata during shutdown, the metadata seems quite small. How can CacheLib recover the whole cache based on such a small amount of metadata? Could you explain more about the metadata and its recovery?

CI failures


I noticed that the CI on main branch and on PRs is failing. Would you be open to integrating CI from our fork (pmem/CacheLib)?

Right now we are building CacheLib and running (almost) all tests on each PR/push on GHA. To speed things up, we are using Docker images with pre-installed dependencies. You can see our workflow here. In our system, we can trigger a Docker rebuild and push it to the registry whenever we need to update the dependencies.

A typo in the CacheLib docs

There is a typo in

string data("new data");
// Allocate memory for the data.
auto item_handle = cache->allocate(pool_id, "key2", data.size());

// Write the data to the cache.
std::memcpy(handle->getMemory(),, data.size());

There is no handle, only item_handle. Therefore, std::memcpy(handle->getMemory(),, data.size()); should be changed to std::memcpy(item_handle->getMemory(),, data.size());

cmake unable to find Folly when building cachelib


I don't know much about compilation and have a possibly dumb question about building CacheLib. I have no issue building CacheLib with the provided script. But when we tried to build CacheLib in our own cmake, which builds our selected git tags of the various libraries, there is an error:


And here is our cachelib.cmake for reference:


Can you help shed some light on this issue? Thanks!
