baotonglu / apex

High Performance Learned Index on Persistent Memory

License: MIT License

data-structures database index learned-index nvm oltp persistent-data-structure persistent-index persistent-memory

apex's Introduction

APEX: A High-Performance Learned Index on Persistent Memory

More details are described in our VLDB paper and extended version. If you use our work, please cite:

Baotong Lu, Jialin Ding, Eric Lo, Umar Farooq Minhas, Tianzheng Wang:
APEX: A High-Performance Learned Index on Persistent Memory.
PVLDB 15(3): 597-610 (2022)

Building

Dependencies

We tested our build with Linux kernel 5.10.11 and GCC 10.2.0. A proper build requires Linux kernel >= 4.17 and glibc >= 2.29.
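Before building, you can check both requirements with a small shell snippet; the `version_ge` helper below is our own sketch, not part of the APEX repository:

```shell
#!/bin/sh
# Check that the running kernel and glibc meet APEX's minimums
# (kernel >= 4.17, glibc >= 2.29).

# version_ge A B: returns 0 (true) if version A >= version B,
# using sort -V for dotted-version comparison.
version_ge() {
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel=$(uname -r | cut -d- -f1)
glibc=$(ldd --version | head -n1 | awk '{print $NF}')

version_ge "$kernel" "4.17" && echo "kernel $kernel: OK" || echo "kernel $kernel: too old"
version_ge "$glibc" "2.29" && echo "glibc $glibc: OK" || echo "glibc $glibc: too old"
```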

Compiling

To compile, clone the repository and run the build script, which builds under a build directory:

git clone https://github.com/baotonglu/apex.git
cd apex
./build.sh

Running benchmark

Persistent memory pool path

Please ensure your PM device is configured in App Direct mode and mounted to a file system with DAX enabled. Before testing, change the PM pool path in our allocator to the PM mount path on your own server.
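For reference, a typical App Direct + DAX setup looks like the sketch below; the region, device, and mount point names are examples that will differ on your server, and the commands require `ndctl` and root privileges:

```shell
# Create an fsdax namespace on the PM region, put a file system on it,
# and mount it with DAX so loads/stores bypass the page cache.
# region0, /dev/pmem0, and /mnt/pmem0 are example names -- adjust for
# your machine.
sudo ndctl create-namespace --mode=fsdax --region=region0
sudo mkfs.ext4 /dev/pmem0
sudo mount -o dax /dev/pmem0 /mnt/pmem0
mount | grep /mnt/pmem0   # confirm the dax option is active
```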

Benchmark setting

We run the tests on a single NUMA node with 24 physical CPU cores. We pin threads to physical cores compactly, assuming thread ID == 2 * core ID (e.g., for a dual-socket system, we assume cores 0, 2, 4, ... are located in socket 0). See also the total.sh and run.sh scripts for example benchmarks and easy testing of the index. The benchmark supports the following arguments:

./build/benchmark [OPTION...]

--keys_file               the name of the dataset
--keys_file_type          the reading method for dataset (binary/text/sosd)
--keys_type               the type of the key (double/uint64)
--total_num_keys          total number of keys in the dataset
--init_num_keys           the number of keys to bulk-load before testing
--workload_keys           the number of keys in the workload
--operation               the query type in the workload (insert/search/erase/update/range/mixed)
--insert_frac             the fraction of insert in mixed search-insert workload
--lookup_distribution     the access distribution of the workload (uniform/zipf)
--theta                   the skewness of zipf (e.g.,0.9)
--using_epoch             whether to register epoch in application level: 0/1 
--thread_num              the number of worker threads 
--index                   the name of index to evaluate (apex)
--random_shuffle          whether to do the random shuffle for the dataset
--sort_bulkload           whether to sort the keys before bulk-loading
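Putting these options together, a mixed search/insert run over the longitudes dataset might look like the sketch below. The dataset path and key counts are examples taken from the logs elsewhere in this page; adjust them to your own files, and check the benchmark's option parser for the exact flag syntax it accepts:

```shell
# Bulk-load 100M keys from a 200M-key binary dataset of doubles, then
# run a 10M-operation 50/50 search-insert workload on 24 threads.
./build/benchmark \
  --keys_file=./longitudes-200M.bin.data \
  --keys_file_type=binary \
  --keys_type=double \
  --total_num_keys=200000000 \
  --init_num_keys=100000000 \
  --workload_keys=10000000 \
  --operation=mixed \
  --insert_frac=0.5 \
  --lookup_distribution=uniform \
  --theta=0.99 \
  --using_epoch=1 \
  --thread_num=24 \
  --index=apex \
  --random_shuffle \
  --sort_bulkload=1
```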

Competitors

This section lists the source code used in comparison with APEX, including LB+-Tree [1], DPTree [2], uTree [3], FPTree [4], BzTree [5], and FAST+FAIR [6].

[1] https://github.com/schencoding/lbtree
[2] https://github.com/zxjcarrot/DPTree-code
[3] https://github.com/thustorage/nvm-datastructure
[4] https://github.com/sfu-dis/fptree
[5] https://github.com/sfu-dis/bztree
[6] https://github.com/DICL/FAST_FAIR

Datasets

Acknowledgements

Our implementation is based on the code of ALEX.

apex's People

Contributors

baotonglu
apex's Issues

Count error when expanding the root.

Hi.
I'm currently executing your code and found some issues.
When running one of my workloads, it exits with the following error message.

arr_idx = 2805; num_keys = 2707
data capacity = 3674
Count errror!!!

I'm working on a workload which incurs out-of-bounds inserts.
I've tracked that the error occurs when calling the get_max_key() function inside expand_root().
Can you give any suggestions for this situation?

Thanks a lot.

Ju Young

How to insert KV pairs into an empty APEX

Hi,
I tried to skip the bulk load and insert some KV pairs into an empty index, but the program seems to enter an endless loop.
The modifications are as follows.

  1. I set "skip_bulkload" in src/benchmark/main.cpp to true
  2. I ran run.sh with a single thread
  3. The benchmark gets stuck after printing the following information; CPU utilization stays at 100%.
found flag keys_file = longitudes-200M.bin.data
found flag keys_file_type = binary
found flag keys_type = double
found flag init_num_keys = 0
found flag workload_keys = 10000
found flag total_num_keys = 10000
found flag operation = insert
found flag insert_frac = 0
found flag lookup_distribution = uniform
found flag theta = 0.99
found flag using_epoch = 1
found flag thread_num = 1
found flag index = apex
found flag random_shuffle
found flag sort_bulkload = 1
The key type is double
The epoch is used
creating a new pool
pool opened at: 0x7f4480000000
Intial allocator: 1
Recover/Initialize time (ms) = 162.361

I tried inserting data into an empty ALEX and it works.

Hope the author could fix the problem. Thanks a lot!

Count error when I change PROBING_LENGTH to 128

Hi, I am trying to change the parameter PROBING_LENGTH to 128; however, when I change it, something goes wrong:

arr_idx = 11065; num_keys = 12425
data capacity = 13115
Count errror!!!

Does anything else need to be changed? Thanks!

Possible bug in the bulk_load function

There may be a bug in the bulk_load function when the bulk-load size is small (so that the root node is a leaf node at that point).

#include <iostream>
#include <string>
#include <utility>

int main(int argc, char** argv) {
  size_t bulk_cnt = std::stoul(argv[1]);

  Tree<uint64_t, uint64_t>* index = generate_index<uint64_t, uint64_t>();

  // Prepare sorted key/value pairs 0 .. bulk_cnt-1.
  auto values = new std::pair<uint64_t, uint64_t>[bulk_cnt];
  for (size_t i = 0; i < bulk_cnt; i++) {
    values[i].first = i;
    values[i].second = i;
  }

  index->bulk_load(values, bulk_cnt);

  // Verify that every bulk-loaded key is found with the right value.
  for (size_t i = 0; i < bulk_cnt; i++) {
    uint64_t value = 0;
    if (index->search(i, &value)) {
      if (value != i) std::cout << "Error find: " << i << " " << value << std::endl;
    } else {
      std::cout << "Fail find: " << i << " " << value << std::endl;
    }
  }

  delete[] values;
  my_alloc::BasePMPool::ClosePool();
  return 0;
}
