Comments (7)
I'd recommend not spinning up multiple instances of cachelib and instead using pools to partition the available memory. The motivation for this is in line with what @sjoshi6 brought up: copying 32 bytes might not be a significant CPU cost compared to leaving fragmented memory that is harder to manage across multiple instances. If you'd like to avoid std::string (since it uses heap allocation past 20 bytes and incurs a malloc + copy), you can still allocate memory on the stack, copy the contents, and wrap the stack memory into an Item::Key to call the find APIs, as long as the calls happen in the same call stack. Lots of performance-critical applications use this trick.
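The stack-buffer trick described above can be sketched in plain C++. This is only an illustration: std::string_view stands in here for cachelib's Item::Key (both are non-owning views over bytes), and makeStackKey is a hypothetical helper name, not a cachelib API.

```cpp
#include <cassert>
#include <cstring>
#include <string_view>

// Build (prefix + key) in caller-provided stack storage and return a
// non-owning view over it. No heap allocation happens; the view is only
// valid while the buffer is alive, i.e. within the same call stack --
// which is exactly the constraint for passing it to the find APIs.
inline std::string_view makeStackKey(char* buf, std::size_t cap,
                                     std::string_view prefix,
                                     std::string_view key) {
  assert(prefix.size() + key.size() <= cap);
  std::memcpy(buf, prefix.data(), prefix.size());
  std::memcpy(buf + prefix.size(), key.data(), key.size());
  return std::string_view{buf, prefix.size() + key.size()};
}
```

With cachelib, the returned view would be handed to cache->find(...) while the stack buffer is still in scope.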
from cachelib.
How do we know the cache is created? Do we only need to check cache == nullptr?
This will throw if there are bad configs (you will see std::invalid_argument). The cache is created if the constructor didn't throw. Typically we recommend you create the cache as a std::unique_ptr<...>, so it is easy for you to move it around and destroy.
Is there any easier way to accommodate the overhead?
Can you clarify this question more? What overheads are you referring to?
Is there a better way to support having the same key in different pools?
We currently do not support this. Because we share the index across all memory pools, keys must be unique. On insertion, we could theoretically allow you to pass us two keys (prefix + actual key) and we would just memcpy them into the item memory. However, on lookup we have to concatenate, because the key we look up must be (prefix + actual key). And I suspect it is the lookup that's the most expensive part here (which we don't have a good way to solve).
Have you measured how much perf overhead this is? (If keys are small <15 bytes, this shouldn't incur heap allocation if using std::string). If overhead is too much, you should consider using multiple CacheLib instances instead of cache pools.
Thank you for the quick reply!
> Can you clarify this question more? What overheads are you referring to?
I think we can't directly add pools in the following way:
cache->addPool(name1, 30GB); cache->addPool(name2, 15GB);
My understanding is that cachelib needs some fixed overhead to manage the cache, so the memory actually available to the two pools is less than 45GB.
Then the way I allocate memory to pool1 and pool2 is
cache->addPool(name1, cache->getCacheMemoryStats().cacheSize * 30 / (30 + 15));
cache->addPool(name2, cache->getCacheMemoryStats().cacheSize * 15 / (30 + 15));
but I am not sure if it is good practice. How do you usually set the pool sizes when multiple pools are needed?
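The proportional split above can be factored into a small helper. splitPoolSizes is a hypothetical name for illustration; the usable byte count would come from getCacheMemoryStats().cacheSize, as in the snippet.

```cpp
#include <cstdint>
#include <vector>

// Split the usable cache memory (what is left after cachelib's fixed
// management overhead) among pools in proportion to the requested sizes,
// e.g. 30GB:15GB. Integer math: multiply before dividing to keep precision.
inline std::vector<std::uint64_t> splitPoolSizes(
    std::uint64_t usableBytes, const std::vector<std::uint64_t>& weights) {
  std::uint64_t total = 0;
  for (std::uint64_t w : weights) {
    total += w;
  }
  std::vector<std::uint64_t> sizes;
  sizes.reserve(weights.size());
  for (std::uint64_t w : weights) {
    // Assumes usableBytes * w fits in uint64_t, which holds comfortably
    // for cache sizes in the tens of GB with small weights.
    sizes.push_back(usableBytes * w / total);
  }
  return sizes;
}
```

Each resulting size would then be passed to cache->addPool(...), keeping the pools in the intended 30:15 ratio regardless of the exact overhead.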
> Have you measured how much perf overhead this is? (If keys are small <15 bytes, this shouldn't incur heap allocation if using std::string). If overhead is too much, you should consider using multiple CacheLib instances instead of cache pools.
We haven't done the perf test yet but will do it soon. Does using multiple cachelib instances impact perf or have other disadvantages compared to using multiple cache pools in a single CacheLib instance?
There are some experimental cachelib features which might be very useful for us in the future, such as "Automatic pool resizing" and "Memory Monitor". I believe we won't be able to leverage them well if we have multiple cachelib instances in a process.
A typical key size we encounter is ~32 bytes.
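The small-string-optimization point above (keys under ~15 bytes avoiding heap allocation, ~32-byte keys not) can be checked empirically by counting heap allocations. This sketch relies on implementation-defined SSO capacities (15 chars on libstdc++, 22 on libc++), so treat the exact cutoff as an assumption.

```cpp
#include <cstdlib>
#include <new>
#include <string>

// Replace global operator new/delete to count heap allocations. A short
// key stays in std::string's inline (SSO) buffer; a ~32-byte key, the
// typical size mentioned above, forces a heap allocation plus a copy.
static int heapAllocs = 0;

void* operator new(std::size_t n) {
  ++heapAllocs;
  if (void* p = std::malloc(n)) {
    return p;
  }
  throw std::bad_alloc{};
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }
```

Constructing std::string("shortkey") should leave the counter unchanged on mainstream implementations, while std::string(32, 'k') should bump it, which is the malloc + copy cost being discussed.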
@sathyaphoenix that's a great suggestion, thank you! I have another question on persistent cache. If we want to enable cache persistence, we have to set it up as follows:
config.enableCachePersistence(path);
Cache cache(Cache::SharedMemNew, config);
Does this local path store only metadata, or the whole old cache instance? If it only stores metadata, can I expect the metadata to be very small?
> Does this local path store only metadata, or the whole old cache instance? If it only stores metadata, can I expect the metadata to be very small?
It only stores some metadata, which should be less than a KB. You can have this on any file system; it is not performance critical. All the data and any heap metadata are persisted either through shared memory or on-device. The metadata stored within files in the cache directory is the limited information needed to recover all the other pieces.
@tangliisu I'll close this ticket since the original questions are answered. Please feel free to open a new one if you have any additional questions :)