
hat-trie's Introduction

A C++ implementation of a fast and memory efficient HAT-trie

Trie implementation based on the paper "HAT-trie: A Cache-conscious Trie-based Data Structure for Strings" (Nikolas Askitis and Ranjan Sinha, 2007). For now, only the pure HAT-trie has been implemented; the hybrid version may arrive later. Details regarding the HAT-trie data structure can be found here.

The library provides an efficient and compact way to store a set or a map of strings by compressing the common prefixes. It also allows searching for keys that match a prefix. Note though that the default parameters of the structure are geared toward optimizing exact searches; if you do a lot of prefix searches, you may want to reduce the burst threshold through the burst_threshold method.

It's a structure well adapted to storing a large number of strings.

For the array hash part, the array-hash project is used and included in the repository.

The library provides two classes: tsl::htrie_map and tsl::htrie_set.

Overview

  • Header-only library, just add the include directory to your include path and you are ready to go. If you use CMake, you can also use the tsl::hat_trie exported target from the CMakeLists.txt.
  • Low memory usage while keeping reasonable performance (see benchmark).
  • Support prefix searches through equal_prefix_range (useful for autocompletion for example) and prefix erasures through erase_prefix.
  • Support longest matching prefix searches through longest_prefix.
  • Support for efficient serialization and deserialization (see example and the serialize/deserialize methods in the API for details).
  • Keys are not ordered as they are partially stored in a hash map.
  • All operations modifying the data structure (insert, emplace, erase, ...) invalidate the iterators.
  • Support null characters in the key (you can thus store binary data in the trie).
  • Support for any type of value as long as it's either copy-constructible or both nothrow move constructible and nothrow move assignable.
  • The balance between speed and memory usage can be modified through the max_load_factor method. A lower max load factor will increase the speed, a higher one will reduce the memory usage. Its default value is set to 8.0.
  • The default burst threshold, which is the maximum size of an array hash node before a burst occurs, is set to 16 384, which provides good performance for exact searches. If you mainly use prefix searches, you may want to reduce it to something like 1024 or lower through the burst_threshold method for faster iteration on the results.
  • By default the maximum allowed size for a key is set to 65 535. This can be raised through the KeySizeT template parameter.

Thread-safety and exception guarantees are similar to the STL containers.

Hash function

The default hash function used by the structure depends on the presence of std::string_view. If it is available, std::hash<std::string_view> is used, otherwise a simple FNV-1a hash function is used to avoid any dependency.

If you can't use C++17 or later, we recommend replacing the hash function with something like CityHash, MurmurHash, FarmHash, ... for better performance. In the tests we did, CityHash64 offers a ~20% improvement on reads compared to FNV-1a.

#include <city.h>

struct str_hash {
    std::size_t operator()(const char* key, std::size_t key_size) const {
        return CityHash64(key, key_size);
    }
};

tsl::htrie_map<char, int, str_hash> map;

The std::hash<std::string> can't be used efficiently as the structure doesn't store any std::string object. Any time a hash would be needed, a temporary std::string would have to be created.
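For reference, the FNV-1a fallback mentioned above works roughly as follows (an illustrative stand-alone implementation, not necessarily the exact code used by the library):

```cpp
#include <cstddef>
#include <cstdint>

// 64-bit FNV-1a: XOR each byte into the hash, then multiply by the FNV prime.
std::uint64_t fnv1a_64(const char* key, std::size_t key_size) {
    std::uint64_t hash = 14695981039346656037ULL;  // FNV offset basis
    for(std::size_t i = 0; i < key_size; i++) {
        hash ^= static_cast<unsigned char>(key[i]);
        hash *= 1099511628211ULL;  // FNV prime
    }
    return hash;
}
```

Such a function could be plugged into the trie through a functor with the same signature as the str_hash example above.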

Benchmark

Wikipedia dataset

The benchmark consists of inserting all the titles from the main namespace of the Wikipedia archive into the data structure, checking the memory used after the insertions (including potential memory fragmentation) and searching for all the titles again in the data structure. The peak memory usage during the insertion process is also measured with time(1).

Each title is associated with an int (32 bits). All the hash based structures use CityHash64 as hash function. For the tests marked with reserve, the reserve function is called beforehand to avoid any rehash.

Note that tsl::hopscotch_map, std::unordered_map, google::dense_hash_map and spp::sparse_hash_map use std::string as key which imposes a minimum size of 32 bytes (on x64) even if the key is only one character long. Other structures may be able to store one-character keys with 1 byte + 8 bytes for a pointer (on x64).

The benchmark was compiled with GCC 6.3 and ran on Debian Stretch x64 with an Intel i5-5200u and 8 GiB of RAM. The best of 20 runs was taken.

The code of the benchmark can be found on Gist.

Unsorted

The enwiki-20170320-all-titles-in-ns0.gz dataset is alphabetically sorted. For this benchmark, we first shuffle the dataset with shuf(1) to avoid the bias of a sorted dataset.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 405.22 | 402.25 | 643.10 | 250.87 |
| tsl::htrie_map (max_load_factor=4) | HAT-trie | 471.85 | 468.50 | 638.66 | 212.90 |
| tsl::htrie_map (max_load_factor=2) | HAT-trie | 569.76 | 566.52 | 630.61 | 201.10 |
| tsl::htrie_map (max_load_factor=1) | HAT-trie | 713.44 | 709.81 | 645.76 | 190.87 |
| cedar::da | Double-array trie | 1269.68 | 1254.41 | 1102.93 | 557.20 |
| cedar::da (ORDERED=false) | Double-array trie | 1269.80 | 1254.41 | 1089.78 | 570.13 |
| cedar::da | Double-array reduced trie | 1183.07 | 1167.79 | 1076.68 | 645.79 |
| cedar::da (ORDERED=false) | Double-array reduced trie | 1183.14 | 1167.85 | 1065.43 | 641.98 |
| cedar::da | Double-array prefix trie | 498.69 | 496.54 | 1096.90 | 628.01 |
| cedar::da (ORDERED=false) | Double-array prefix trie | 498.65 | 496.60 | 1048.40 | 628.94 |
| hat-trie1 (C) | HAT-trie | 504.07 | 501.50 | 917.49 | 261.00 |
| qp trie (C) | QP trie | 941.23 | 938.17 | 1349.25 | 1281.46 |
| crit-bit trie (C) | Crit-bit trie | 1074.96 | 1071.98 | 2930.42 | 2869.74 |
| JudySL (C) | Judy array | 631.09 | 628.37 | 884.29 | 803.58 |
| JudyHS (C) | Judy array | 723.44 | 719.47 | 476.79 | 417.15 |
| tsl::array_map | Array hash table | 823.54 | 678.73 | 603.94 | 138.24 |
| tsl::array_map (with reserve) | Array hash table | 564.26 | 555.91 | 249.52 | 128.28 |
| tsl::hopscotch_map | Hash table | 1325.83 | 1077.99 | 368.26 | 119.49 |
| tsl::hopscotch_map (with reserve) | Hash table | 1080.51 | 1077.98 | 240.58 | 119.91 |
| google::dense_hash_map | Hash table | 2319.40 | 1677.11 | 466.60 | 138.87 |
| google::dense_hash_map (with reserve) | Hash table | 1592.51 | 1589.99 | 259.56 | 120.40 |
| spp::sparse_hash_map | Sparse hash table | 918.67 | 917.10 | 769.00 | 175.59 |
| spp::sparse_hash_map (with reserve) | Sparse hash table | 913.35 | 910.65 | 427.22 | 159.08 |
| std::unordered_map | Hash table | 1249.05 | 1246.60 | 590.88 | 173.58 |
| std::unordered_map (with reserve) | Hash table | 1212.23 | 1209.71 | 350.33 | 178.70 |

  1. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.

Sorted

The keys are inserted and read in alphabetical order.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 396.10 | 393.22 | 255.76 | 68.08 |
| tsl::htrie_map (max_load_factor=4) | HAT-trie | 465.02 | 461.80 | 248.88 | 59.23 |
| tsl::htrie_map (max_load_factor=2) | HAT-trie | 543.99 | 541.21 | 230.13 | 53.50 |
| tsl::htrie_map (max_load_factor=1) | HAT-trie | 692.29 | 689.70 | 243.84 | 49.22 |
| cedar::da | Double-array trie | 1269.58 | 1254.41 | 278.51 | 54.72 |
| cedar::da (ORDERED=false) | Double-array trie | 1269.66 | 1254.41 | 264.43 | 56.02 |
| cedar::da | Double-array reduced trie | 1183.01 | 1167.78 | 254.60 | 69.18 |
| cedar::da (ORDERED=false) | Double-array reduced trie | 1183.03 | 1167.78 | 241.45 | 69.67 |
| cedar::da | Double-array prefix trie | 621.59 | 619.38 | 246.88 | 57.83 |
| cedar::da (ORDERED=false) | Double-array prefix trie | 621.59 | 619.38 | 187.98 | 58.56 |
| hat-trie2 (C) | HAT-trie | 521.25 | 518.52 | 503.01 | 86.40 |
| qp trie (C) | QP trie | 940.65 | 937.66 | 392.86 | 190.19 |
| crit-bit trie (C) | Crit-bit trie | 1074.87 | 1071.98 | 430.04 | 347.60 |
| JudySL (C) | Judy array | 616.95 | 614.27 | 279.07 | 114.47 |
| JudyHS (C) | Judy array | 722.29 | 719.47 | 439.66 | 372.25 |
| tsl::array_map | Array hash table | 826.98 | 682.99 | 612.31 | 139.16 |
| tsl::array_map (with reserve) | Array hash table | 565.37 | 555.35 | 246.55 | 126.32 |
| tsl::hopscotch_map | Hash table | 1331.87 | 1078.02 | 375.19 | 118.08 |
| tsl::hopscotch_map (with reserve) | Hash table | 1080.51 | 1077.97 | 238.93 | 117.20 |
| google::dense_hash_map | Hash table | 2325.27 | 1683.07 | 483.95 | 137.09 |
| google::dense_hash_map (with reserve) | Hash table | 1592.54 | 1589.99 | 257.22 | 113.71 |
| spp::sparse_hash_map | Sparse hash table | 920.96 | 918.70 | 772.03 | 176.64 |
| spp::sparse_hash_map (with reserve) | Sparse hash table | 914.84 | 912.47 | 422.85 | 158.73 |
| std::unordered_map | Hash table | 1249.09 | 1246.65 | 594.85 | 173.54 |
| std::unordered_map (with reserve) | Hash table | 1212.21 | 1209.71 | 347.40 | 176.49 |

  2. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.

Dr. Askitis dataset

The benchmark consists of inserting all the words from the "Distinct Strings" dataset of Dr. Askitis into the data structure, checking the memory used and searching for all the words from the "Skew String Set 1" dataset (where a string can be present multiple times) in the data structure. Note that the strings in this dataset have quite short average and median key lengths (which may not be a realistic use case compared to the Wikipedia dataset used above). It's similar to the benchmark on the cedar homepage.

  • Dataset: distinct_1 (write) / skew1_1 (read)
  • Size: 290.45 MiB / 1 029.46 MiB
  • Number of keys: 28 772 169 / 177 999 203
  • Average key length: 9.59 / 5.06
  • Median key length: 8 / 4
  • Max key length: 126 / 62

The benchmark protocol is the same as for the Wikipedia dataset.

| Library | Data structure | Peak memory (MiB) | Memory (MiB) | Insert (ns/key) | Read (ns/key) |
|---|---|---|---|---|---|
| tsl::htrie_map | HAT-trie | 604.76 | 601.79 | 485.45 | 77.80 |
| tsl::htrie_map (max_load_factor=4) | HAT-trie | 768.10 | 764.98 | 491.78 | 75.48 |
| tsl::htrie_map (max_load_factor=2) | HAT-trie | 1002.42 | 999.34 | 496.78 | 72.53 |
| tsl::htrie_map (max_load_factor=1) | HAT-trie | 1344.98 | 1341.97 | 520.66 | 72.45 |
| cedar::da | Double-array trie | 1105.45 | 1100.05 | 682.25 | 71.98 |
| cedar::da (ORDERED=false) | Double-array trie | 1105.47 | 1100.05 | 668.75 | 71.95 |
| cedar::da | Double-array reduced trie | 941.16 | 926.04 | 684.38 | 79.11 |
| cedar::da (ORDERED=false) | Double-array reduced trie | 941.16 | 925.98 | 672.14 | 79.02 |
| cedar::da | Double-array prefix trie | 714.58 | 712.59 | 831.71 | 75.83 |
| cedar::da (ORDERED=false) | Double-array prefix trie | 714.66 | 712.31 | 786.93 | 75.89 |
| hat-trie3 (C) | HAT-trie | 786.93 | 784.32 | 743.34 | 93.58 |
| qp trie (C) | QP trie | 1800.02 | 1797.21 | 987.95 | 428.51 |
| crit-bit trie (C) | Crit-bit trie | 2210.52 | 2207.64 | 1986.19 | 1109.88 |
| JudySL (C) | Judy array | 1025.59 | 1023.11 | 535.02 | 202.36 |
| JudyHS (C) | Judy array | 1002.50 | 999.97 | 456.09 | 148.36 |
| tsl::array_map | Array hash table | 1308.08 | 1031.67 | 545.82 | 46.41 |
| tsl::array_map (with reserve) | Array hash table | 979.44 | 921.363 | 244.19 | 45.74 |
| tsl::hopscotch_map | Hash table | 2336.39 | 1611.54 | 288.70 | 47.05 |
| tsl::hopscotch_map (with reserve) | Hash table | 1614.22 | 1611.64 | 220.67 | 46.39 |
| google::dense_hash_map | Hash table | 3913.64 | 2636.31 | 317.66 | 43.62 |
| google::dense_hash_map (with reserve) | Hash table | 2638.19 | 2635.68 | 227.58 | 43.09 |
| spp::sparse_hash_map | Sparse hash table | 1419.69 | 1417.61 | 586.26 | 56.00 |
| spp::sparse_hash_map (with reserve) | Sparse hash table | 1424.21 | 1421.69 | 392.76 | 55.73 |
| std::unordered_map | Hash table | 2112.66 | 2110.19 | 554.02 | 105.05 |
| std::unordered_map (with reserve) | Hash table | 2053.95 | 2051.67 | 309.06 | 109.89 |

  3. As the hash function can't be passed as a parameter, the code of the library itself was modified to use CityHash64.

Installation

To use the library, just add the include directory to your include path. It is a header-only library.

If you use CMake, you can also use the tsl::hat_trie exported target from the CMakeLists.txt with target_link_libraries.

# Example where the hat-trie project is stored in a third-party directory
add_subdirectory(third-party/hat-trie)
target_link_libraries(your_target PRIVATE tsl::hat_trie)  

The code should work with any C++11 standard-compliant compiler and has been tested with GCC 4.8.4, Clang 3.5.0 and Visual Studio 2015.

To run the tests you will need the Boost Test library and CMake.

git clone https://github.com/Tessil/hat-trie.git
cd hat-trie/tests
mkdir build
cd build
cmake ..
cmake --build .
./tsl_hat_trie_tests

Usage

The API can be found here. If std::string_view is available, the API changes slightly and can be found here.

Example

#include <iostream>
#include <string>
#include <tsl/htrie_map.h>
#include <tsl/htrie_set.h>


int main() {
    /*
     * Map of strings to int having char as character type. 
     * There is no support for wchar_t, char16_t or char32_t yet, 
     * but UTF-8 strings will work fine.
     */
    tsl::htrie_map<char, int> map = {{"one", 1}, {"two", 2}};
    map["three"] = 3;
    map["four"] = 4;
    
    map.insert("five", 5);
    map.insert_ks("six_with_extra_chars_we_ignore", 3, 6);
    
    map.erase("two");
    
    /*
     * Due to the compression on the common prefixes, the letters of the string 
     * are not always stored contiguously. When we retrieve the key, we have to 
     * construct it.
     * 
     * To avoid a heap-allocation at each iteration (when SSO doesn't occur), 
     * we reuse the key_buffer to construct the key.
     */
    std::string key_buffer;
    for(auto it = map.begin(); it != map.end(); ++it) {
        it.key(key_buffer);
        std::cout << "{" << key_buffer << ", " << it.value() << "}" << std::endl;
    }
    
    /*
     * If you don't care about the allocation.
     */
    for(auto it = map.begin(); it != map.end(); ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    
    
    
    tsl::htrie_map<char, int> map2 = {{"apple", 1}, {"mango", 2}, {"apricot", 3},
                                      {"mandarin", 4}, {"melon", 5}, {"macadamia", 6}};
    
    // Prefix search
    auto prefix_range = map2.equal_prefix_range("ma");
    
    // {mandarin, 4} {mango, 2} {macadamia, 6}
    for(auto it = prefix_range.first; it != prefix_range.second; ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    // Find longest match prefix.
    auto longest_prefix = map2.longest_prefix("apple juice");
    if(longest_prefix != map2.end()) {
        // {apple, 1}
        std::cout << "{" << longest_prefix.key() << ", " 
                  << *longest_prefix << "}" << std::endl;
    }
    
    // Prefix erase
    map2.erase_prefix("ma");
    
    // {apricot, 3} {melon, 5} {apple, 1}
    for(auto it = map2.begin(); it != map2.end(); ++it) {
        std::cout << "{" << it.key() << ", " << *it << "}" << std::endl;
    }
    
    
    
    
    tsl::htrie_set<char> set = {"one", "two", "three"};
    set.insert({"four", "five"});
    
    // {one} {two} {five} {four} {three}
    for(auto it = set.begin(); it != set.end(); ++it) {
        it.key(key_buffer);
        std::cout << "{" << key_buffer << "}" << std::endl;
    }
} 

Serialization

The library provides an efficient way to serialize and deserialize a map or a set so that it can be saved to a file or sent over the network. To do so, it requires the user to provide a function object for both serialization and deserialization.

struct serializer {
    // Must support the following types for U: std::uint64_t, float and T if a map is used.
    template<typename U>
    void operator()(const U& value);
    void operator()(const CharT* value, std::size_t value_size);
};
struct deserializer {
    // Must support the following types for U: std::uint64_t, float and T if a map is used.
    template<typename U>
    U operator()();
    void operator()(CharT* value_out, std::size_t value_size);
};

Note that the implementation leaves the binary compatibility (endianness, float binary representation, size of int, ...) of the serialized types in the hands of the provided function objects if such compatibility is required.
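For instance, function objects that need cross-platform files could normalize integers to a fixed byte order before writing them. A minimal sketch (the helper names are illustrative, not part of the library):

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Encode a 64-bit value as little-endian bytes, independently of the
// host's endianness, so the file can be read back on any platform.
std::array<unsigned char, 8> encode_le64(std::uint64_t value) {
    std::array<unsigned char, 8> bytes{};
    for(std::size_t i = 0; i < bytes.size(); i++) {
        bytes[i] = static_cast<unsigned char>(value >> (8 * i));
    }
    return bytes;
}

// Inverse of encode_le64: rebuild the value from little-endian bytes.
std::uint64_t decode_le64(const std::array<unsigned char, 8>& bytes) {
    std::uint64_t value = 0;
    for(std::size_t i = 0; i < bytes.size(); i++) {
        value |= static_cast<std::uint64_t>(bytes[i]) << (8 * i);
    }
    return value;
}
```

A serializer's operator() for std::uint64_t could then write the encoded bytes instead of the raw memory representation.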

More details regarding the serialize and deserialize methods can be found in the API.

#include <cassert>
#include <cstdint>
#include <fstream>
#include <type_traits>
#include <tsl/htrie_map.h>


class serializer {
public:
    serializer(const char* file_name) {
        m_ostream.exceptions(m_ostream.badbit | m_ostream.failbit);
        m_ostream.open(file_name, std::ios::binary);
    }
    
    template<class T,
             typename std::enable_if<std::is_arithmetic<T>::value>::type* = nullptr>
    void operator()(const T& value) {
        m_ostream.write(reinterpret_cast<const char*>(&value), sizeof(T));
    }
    
    void operator()(const char* value, std::size_t value_size) {
        m_ostream.write(value, value_size);
    }

private:
    std::ofstream m_ostream;
};

class deserializer {
public:
    deserializer(const char* file_name) {
        m_istream.exceptions(m_istream.badbit | m_istream.failbit | m_istream.eofbit);
        m_istream.open(file_name, std::ios::binary);
    }
    
    template<class T,
             typename std::enable_if<std::is_arithmetic<T>::value>::type* = nullptr>
    T operator()() {
        T value;
        m_istream.read(reinterpret_cast<char*>(&value), sizeof(T));
        
        return value;
    }
    
    void operator()(char* value_out, std::size_t value_size) {
        m_istream.read(value_out, value_size);
    }

private:
    std::ifstream m_istream;
};


int main() {
    const tsl::htrie_map<char, std::int64_t> map = {{"one", 1}, {"two", 2}, 
                                                    {"three", 3}, {"four", 4}};
    
    
    const char* file_name = "htrie_map.data";
    {
        serializer serial(file_name);
        map.serialize(serial);
    }
    
    {
        deserializer dserial(file_name);
        auto map_deserialized = tsl::htrie_map<char, std::int64_t>::deserialize(dserial);
        
        assert(map == map_deserialized);
    }
    
    {
        deserializer dserial(file_name);
        
        /**
         * If the serialized and deserialized maps are hash compatible (see conditions in the API), 
         * setting the argument to true speeds up the deserialization process as we don't have 
         * to recalculate the hash of each key. We also know how much space each bucket needs.
         */
        const bool hash_compatible = true;
        auto map_deserialized = 
            tsl::htrie_map<char, std::int64_t>::deserialize(dserial, hash_compatible);
        
        assert(map == map_deserialized);
    }
}

Serialization with Boost Serialization and compression with zlib

It's possible to use a serialization library to avoid some of the boilerplate if the types to serialize are more complex.

The following example uses Boost Serialization with the Boost zlib compression stream to reduce the size of the resulting serialized file.

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/iostreams/filter/zlib.hpp>
#include <boost/iostreams/filtering_stream.hpp>
#include <boost/serialization/split_free.hpp>
#include <boost/serialization/utility.hpp>
#include <cassert>
#include <cstdint>
#include <fstream>
#include <tsl/htrie_map.h>


template<typename Archive>
struct serializer {
    Archive& ar;
    
    template<typename T>
    void operator()(const T& val) { ar & val; }
    
    template<typename CharT>
    void operator()(const CharT* val, std::size_t val_size) {
        ar.save_binary(reinterpret_cast<const void*>(val), val_size*sizeof(CharT));
    }   
};

template<typename Archive>
struct deserializer {
    Archive& ar;
    
    template<typename T>
    T operator()() { T val; ar & val; return val; }
    
    template<typename CharT>
    void operator()(CharT* val_out, std::size_t val_size) {
        ar.load_binary(reinterpret_cast<void*>(val_out), val_size*sizeof(CharT));
    }  
};

namespace boost { namespace serialization {
template<class Archive, class CharT, class T>
void serialize(Archive & ar, tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    split_free(ar, map, version); 
}

template<class Archive, class CharT, class T>
void save(Archive & ar, const tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    serializer<Archive> serial{ar};
    map.serialize(serial);
}


template<class Archive, class CharT, class T>
void load(Archive & ar, tsl::htrie_map<CharT, T>& map, const unsigned int version) {
    deserializer<Archive> deserial{ar};
    map = tsl::htrie_map<CharT, T>::deserialize(deserial);
}
}}


int main() {
    const tsl::htrie_map<char, std::int64_t> map = {{"one", 1}, {"two", 2}, 
                                                    {"three", 3}, {"four", 4}};
    
    
    const char* file_name = "htrie_map.data";
    {
        std::ofstream ofs;
        ofs.exceptions(ofs.badbit | ofs.failbit);
        ofs.open(file_name, std::ios::binary);
        
        boost::iostreams::filtering_ostream fo;
        fo.push(boost::iostreams::zlib_compressor());
        fo.push(ofs);
        
        boost::archive::binary_oarchive oa(fo);
        
        oa << map;
    }
    
    {
        std::ifstream ifs;
        ifs.exceptions(ifs.badbit | ifs.failbit | ifs.eofbit);
        ifs.open(file_name, std::ios::binary);
        
        boost::iostreams::filtering_istream fi;
        fi.push(boost::iostreams::zlib_decompressor());
        fi.push(ifs);
        
        boost::archive::binary_iarchive ia(fi);
     
        tsl::htrie_map<char, std::int64_t> map_deserialized;   
        ia >> map_deserialized;
        
        assert(map == map_deserialized);
    }
}

License

The code is licensed under the MIT license, see the LICENSE file for details.

hat-trie's People

Contributors

ecorm, evanbalster, pragmatwice, tessil


hat-trie's Issues

windows conflict with windows.h

#include <iostream>
#include <windows.h>
#include "tsl/htrie_map.h"
int main()
{
std::cout << "Hello World!\n";
}

tsl\array-hash\array_growth_policy.h(96,48): warning C4003: not enough arguments for function-like macro invocation 'max'
tsl\array-hash\array_growth_policy.h(96,48): error C2589: '(': illegal token on right side of '::'
tsl\array-hash\array_growth_policy.h(133): message : see reference to class template instantiation 'tsl::ah::power_of_two_growth_policy' being compiled
tsl\array-hash\array_growth_policy.h(185,61): warning C4003: not enough arguments for function-like macro invocation 'max'
tsl\array-hash\array_growth_policy.h(185,61): error C2589: '(': illegal token on right side of '::'
tsl\array-hash\array_growth_policy.h(192): message : see reference to class template instantiation 'tsl::ah::mod_growth_policy' being compiled
tsl\array-hash\array_growth_policy.h(185,61): error C2059: syntax error: '('

https://stackoverflow.com/questions/2789481/problem-calling-stdmax

Maybe add the NOMINMAX macro in CMakeLists.txt?
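One possible consumer-side workaround (an assumption based on the linked Stack Overflow discussion, not an official fix) is to define NOMINMAX so that windows.h doesn't define the clashing macros:

```cmake
# Hypothetical snippet for the consuming project's CMakeLists.txt:
# prevents windows.h from defining the min()/max() macros that break
# calls like std::numeric_limits<std::size_t>::max().
target_compile_definitions(your_target PRIVATE NOMINMAX)
```

Alternatively, the library could wrap the offending calls in parentheses, e.g. (std::numeric_limits<std::size_t>::max)(), which prevents function-like macro expansion.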

Support for substring search

Great project! I'd like to use it in cquery for faster global symbol search; is it possible to add substring search, i.e., the search Foo matches the string void Foo()? equal_prefix_range comes close but will not match because of the preceding void.

(Even better would be a fuzzy search, i.e., vF matches void Foo(), or a custom match function, but I suspect that's out of scope).

Wildcard search support?

Does this implementation have wildcard search support that uses the advantages of a trie (so that performance may increase by not needing to iterate over the whole string)?

By wildcard I mean pattern searches that include * and ?.

(feature request) selective prefix match

Hi,

Another request :)

The case is as follows, again for filesystem trees.
Right now an ::equal_prefix_range call gives all items that match the given prefix. For a prefix of /home/a/Downloads it would give back the following:
/home/a/Downloads/
/home/a/Downloads/File2
/home/a/Downloads/SomeDir/
/home/a/Downloads/SomeDir/File

In other terms, basically a recursive list of files and folders for a given path.
That is fine and mostly what you need. However, in my case I only want to have the direct child items of a given prefix (same prefix as before), so the following:
/home/a/Downloads/
/home/a/Downloads/File2
/home/a/Downloads/SomeDir/

And for a prefix of /home/a I would only want the following as a result:
/home/a/Downloads/
/home/a/File

There should not be a problem for this in terms of the tree structure behind all of this. At the moment it's simply returning too much, and the end iterator would have to be moved back some positions for the match I want. But how to do this in a clean API way seems like a challenge.

It is in fact rather easy to do this with the current code, but a bit wasteful as well. I could for instance iterate over the results, look for the first occurrence that starts with "/" and has some text before the "/", and use that point as my end iterator.

But putting something like that in the API is... nasty.

Oh well, just putting it on here, perhaps you like the idea and want to implement it :)

Best regards,
Mark

hat-trie core

Hi Tessil, when I use hat-trie for prefix matching, a core dump happens:
#0 get (this=) at /usr/local/include/c++/6.3.0/bits/unique_ptr.h:308
308 { return std::get<0>(_M_t); }
(gdb) bt
#0 get (this=) at /usr/local/include/c++/6.3.0/bits/unique_ptr.h:308
#1 operator bool (this=) at /usr/local/include/c++/6.3.0/bits/unique_ptr.h:322
#2 operator!=<tsl::detail_htrie_hash::htrie_hash<char, long unsigned int, str_hash, short unsigned int>::anode, std::default_delete<tsl::detail_htrie_hash::htrie_hash<char, long unsigned int, str_hash, short unsigned int>::anode> > (__x=...) at /usr/local/include/c++/6.3.0/bits/unique_ptr.h:674
#3 first_child (this=0x0) at ./comsearch-search/websearch/svr/suggest_server/com-sugdirect/suggest/hat-trie/htrie_hash.h:338
#4 tsl::detail_htrie_hash::htrie_hash<char, unsigned long, str_hash, unsigned short>::htrie_hash_iterator<true, true>::operator++ (this=0x7f5a757f8480,
this@entry=<error reading variable: Cannot access memory at address 0x7f5a757f8338>)
at ./comsearch-search/websearch/svr/suggest_server/com-sugdirect/suggest/hat-trie/htrie_hash.h:664

I don't know how to fix it. Looking forward to your reply.

Feature question / suggestion: Insertion via iterators

Hi, currently all insert() / emplace() functions take const CharT* key, size_type key_size parameters.

I have a use-case to allow inserting via iterators instead (I need to insert a std::string / std::string_view backwards) and have started to add these functions. However, it's quickly snowballing, so before I continue I'm wondering if this is something you have tried yourself? If it's ultimately not possible I'd like to know! If it's worth pursuing, and contributing back to you, it will most likely end up with the engine of the library being rewritten to use iterators, with std::string / std::string_view and {const CharT*, size_type} wrappers supplied over these new functions. What do you think about this? Would you accept a PR for such a large refactor?

Deserialize fails when the number of keys is larger

I tested the serialize and deserialize example with more key-values (the non-Boost version), but deserialization failed.
const tsl::htrie_map<char, std::int64_t> map = {{"one", 1}, {"two", 2}, {"three", 3}, {"four", 4}};

const tsl::htrie_map<char, std::int64_t> map = { 
        {"one", 1}, 
        {"two", 2},
        {"three", 3}, 
        {"three234", 3},
        {"th324ree", 3},
        {"thr534543ee", 3},
        {"th56ree", 3},
        {"th54ree", 3},
        {"th45676ree", 3},
        {"th6ree", 3},
        {"th56ree", 3},
        {"three", 3},
        {"th4ghj5ree", 3},
        {"th678ree", 3},
        {"thj6h7ree", 3},
        {"thkire89e", 3},
        {"thghryuee", 3},
        {"thjrgyee", 3},
        {"thgjghjhree", 3},
        {"thhgjghhjhree", 3},
        {"thhgjree", 3},
        {"thhgjree", 3},
        {"thrjee", 3},
        {"tjhree", 3},
        {"tghhree", 3},
        {"thghjree", 3},
        {"tghjjghjhree", 3},
        {"thhjrghjee", 3},
        {"thrghjee", 3},
        {"thrghjee", 3},
        {"thrjghjee", 3},
        {"thhgree", 3},
        {"thhgree", 3},
        {"thjkree", 3},
        {"thklhree", 3},
        {"thpqweoip;oiree", 3},
        {"thasderee", 3},
        {"thr4w3ewqaqeree", 3},
        {"thewrsadree", 3},
        {"four", 4} 
};  // more than 30 key-values

trie node has no value node and no child

According to the comment in code:

A trie node should at least have one child, except if it has a value node then no child is a possibility.

I added the following assert to iterator htrie_hash::erase(iterator pos):

if(pos.m_read_trie_node_value) {
    tsl_assert(pos.m_current_trie_node != nullptr && pos.m_current_trie_node->val_node() != nullptr);
    tsl_assert(pos.m_current_trie_node->nb_children() > 0); // here
    pos.m_current_trie_node->val_node().reset(nullptr);
    m_nb_elements--;        
    return next_pos;
}

And wrote a test case:

BOOST_AUTO_TEST_CASE(stest_empty_trie_node) {
	tsl::htrie_set<char> set = { "k1", "k2", "k3", "k4" };
	set.burst_threshold(4);
	set.insert("k");
	set.erase("k1");
	set.erase("k2");
	set.erase("k3");
	set.erase("k4");
	set.erase("k");
}

It causes the assert to fail. Is it a bug?

By the way, code coverage testing would make the unit tests more complete.
Thanks for your implementation.

Add check for _MSVC_LANG in addition to __cplusplus

Hello,

It's all a bit daft, but the Microsoft Visual Studio compilers report __cplusplus as 199711L by default, even when using modern C++ standards. This behavior can be remedied using the /Zc:__cplusplus switch, or by detecting the _MSVC_LANG macro (whose value will be greater than or equal to __cplusplus and more representative of the standard used).

https://docs.microsoft.com/en-us/cpp/build/reference/zc-cplusplus?view=msvc-160

This would allow for use of string_view in a "blank" MSVC project.

https://github.com/Tessil/hat-trie/blob/master/include/tsl/htrie_hash.h

Feature suggestion: obtaining a path to the longest prefix

It would be quite useful if it was possible not just to get an iterator to the longest prefix, but also a path leading to that prefix from the shortest prefix. Such a method would return a vector of iterators pointing to values that match the first characters of the longest prefix argument. In other words, if a map contained these values:

/A
/A/B/C
/A/B/D
/A/B/E

and the longest prefix was requested as /A/B/D/E/F, then in a loop similar to the ones in equal_prefix_range_impl and filter_prefix it would test whether the keys of nodes that have values match the beginning of the prefix string; so in the example above, it would return a vector of iterators pointing to /A and /A/B/D.

It would allow many useful operations for parent paths, which now can only be done via multiple calls with prefixes of different lengths.

A special "parent iterator" hiding the parent path vector and overloading -- would probably fit nicely in the interface.

Sorry if there is a way to achieve what I described and I missed it.

hat-trie core

Hi Tessil, I get a core dump when I use hat-trie. This is the core information:
get (this=) at /usr/local/include/c++/6.3.0/bits/unique_ptr.h:308

It happened in the array_hash.h file at line 880.

Looking forward to your reply.

Thanks

How to serialize/deserialize a map?

Thanks for the excellent implementation. I'm thinking of using the hat-trie in my project where I need to serialize the built map and later deserialize it. I'm working with tens of millions of strings, so would prefer to deserialize from disk than rebuild the map each time from raw data. What's a good way to serialize/deserialize to/from disk?

Question: efficient set intersect of multiple HAT-tries

This is more a question, than a real issue.

I am wondering, what is the most efficient way to intersect the keys of two or more hat-tries.

My naive approach would be to iterate over the keys of the HAT-trie with the smallest size and probe the others for the key. e.g.:

std::set<std::string> intersect(std::vector<tsl::htrie_set<char>> operands) {
    // Argument shouldn't be copied or changed. Only for demonstration purposes.
    std::sort(operands.begin(), operands.end(),
              [](const auto& a, const auto& b) { return a.size() < b.size(); });
    tsl::htrie_set<char> iterated_operand = std::move(operands.front());
    operands.erase(operands.begin());  // erase takes an iterator, not an index
    std::set<std::string> result;
    for (auto it = iterated_operand.begin(); it != iterated_operand.end(); ++it) {
        bool skip = false;
        for (const auto& probe_operand : operands) {
            if (probe_operand.count(it.key()) == 0) {
                skip = true;
                break;
            }
        }
        if (!skip) {
            result.insert(it.key());
        }
    }
    return result;
}

iterator size

Hello,

I just found your implementation of a HAT-trie map and already love it. I use strings as keys and structs as values, and it is faster and more memory efficient than a std::map / std::unordered_map for my use case.

However, I noticed that the iterators are a lot bigger than those of a std::map: 72 bytes compared to just 8 bytes. My software needs to remember the locations of a ton of items, so the bigger a single iterator, the more memory is consumed. With a std::map, I can instruct my software to remember the items using a vector of iterators and easily retrieve the keys using i->first and the values using i->second. When I use htrie_map, the vector of iterators grows substantially, because the individual iterators are so big.

Is there a more memory efficient way to store a list of locations which lets me retrieve both the key and the value? Are there more lightweight iterators, or pointers to items of the htrie_map from which key and value can be reconstructed? Simple pointers to the values are not an option, because I also need the key.

Many thanks in advance,
Sebastian

Case insensitive prefix search, data is case sensitive.

Hi,

I admit, I haven't tried it, but looking at the code it seems to do case-sensitive searching.
Imagine I put a bunch of file paths in the trie, with structures like:

/home/a/Desktop
/home/a/Downloads
/home/a/Pictures

I'm guessing a prefix search of:

tsl::htrie_set<char> set = {"/home/a/Desktop", "/home/a/Downloads", "/home/a/Pictures"};
set.equal_prefix_range("/home/a/d");

Note the lowercase d in the equal_prefix_range call: it will probably not match "/home/a/Desktop" or "/home/a/Downloads", while that would be desirable. Lowercasing everything would fix this, but then the data no longer represents the actual paths (both paths could exist in a Linux/Unix world, as it is case sensitive). Using a map with a lowercase -> original-case mapping could potentially be a solution as well, but that at least doubles the data usage, which kind of defeats the point of using a nice HAT-trie in the first place (in terms of memory efficiency).

I do realize that there is a bit of trouble in making equal_prefix_range case-insensitive (ideally optional: case-sensitive and case-insensitive). You don't know the type I'm putting in as a string. It might be a std::string, might be a std::string_view, a byte array, a QString... You just don't know, and therefore can't expect a call on a character (like toLowercase(), for example) to exist.

So I have a bit of a request: could you add a function that allows me to set how to compare a prefix?

I'm thinking of a:

tsl::htrie_set::set_character_compare(...);

Then in the compare functions you use whatever is set by the user.

Again, I could be completely wrong and it might already be possible, but it doesn't look like it from reading the code ;)

I'm curious about your opinion!

Best regards,
Mark

Retrieving sorted keys

Hi,

I'm using tsl::htrie_map to insert fixed-size DNA strings as keys (the four bases ACGT).
Is there any option to iterate over the keys in sorted order? I didn't find anything related to that in the documentation.

Thanks.

Question: Visiting each key during mutations

Not an issue but a question. Oh and thanks for releasing a really excellent project. Great code!

Ideally I'd like to iterate through the entire trie, say a few hundred keys at a time. However, iterators are invalidated after a mutation, so I was thinking of just recording the last few keys visited and then picking up where I left off. What do you think? You might get unlucky and have your key deleted, so I was thinking of keeping a random set from the last iteration. It doesn't matter for me if a key is visited twice.

Any other ideas?

cmake --build . <- failed with boost 1.70

cmake ..
-- The C compiler identification is GNU 9.3.0
-- The CXX compiler identification is GNU 9.3.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Boost: /usr/local/include (found suitable version "1.70.0", minimum required is "1.54.0") found components: unit_test_framework
-- Configuring done
-- Generating done
-- Build files have been written to:

Scanning dependencies of target tsl_hat_trie_tests
[ 25%] Building CXX object CMakeFiles/tsl_hat_trie_tests.dir/main.cpp.o
[ 50%] Building CXX object CMakeFiles/tsl_hat_trie_tests.dir/trie_map_tests.cpp.o
/trie_map_tests.cpp: In member function 'void test_htrie_map::test_empty_map::test_method()':
/trie_map_tests.cpp:739:22: error: '((void)& last +65)' may be used uninitialized in this function [-Werror=maybe-uninitialized]
739 | BOOST_AUTO_TEST_CASE(test_empty_map) {
| ^~~~~~~~~~~~~~
cc1plus: all warnings being treated as errors
make[2]: *** [CMakeFiles/tsl_hat_trie_tests.dir/build.make:76: CMakeFiles/tsl_hat_trie_tests.dir/trie_map_tests.cpp.o] Error 1
make[1]: *** [CMakeFiles/Makefile2:96: CMakeFiles/tsl_hat_trie_tests.dir/all] Error 2
make: *** [Makefile:84: all] Error 2

boolean value is not supported

When creating a tsl::htrie_map<char, bool> and trying to insert or emplace an element, I get the following error:

array_hash.h:870:14: error: non-const lvalue reference to type 'bool' cannot bind to a temporary of type
      'std::__1::vector<bool, std::__1::allocator<bool> >::reference' (aka '__bit_reference<std::__1::vector<bool, std::__1::allocator<bool> > >')
      return this->m_array_hash->m_values[value_position()];
             ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This is probably due to the bit-vector specialization used by std::vector<bool>.

Users should use std::hash by default

Hi Thibaut,

Recent implementations of std::hash use CityHash on 64-bit platforms and fall back to MurmurHash on 32-bit platforms. Could you update README.md to reflect this change in std::hash?

IMHO users should use std::hash by default and only try a better hash function if needed. From these benchmark results (https://github.com/hungptit/clhash/blob/convert.to.header.only.library/benchmark/results.svg) we can see that std::hash is pretty fast, i.e. about 0.5 CPU cycles per byte.

BTW, thank you very much for your work; I have started to replace std::unordered_map/set with hopscotch_map/set in all of my projects.

Regards,
Hung

Segmentation fault when map is iterated after erase prefix

Here's a minimally reproducible example that causes a crash:

#include <iostream>
#include <tsl/htrie_map.h>

int main() {
    tsl::htrie_map<char, std::string> data_map;
    std::cout << "Inserting items into map..." << std::endl;
    data_map.emplace("data", "foo");

    for(size_t i = 0; i < 30000; i++) {
        std::string key = "data." + std::to_string(i);
        data_map.insert(key, "foo");
    }

    size_t count = 0;

    std::cout << "Erase prefix..." << std::endl;
    data_map.erase_prefix("data.");

    std::cout << "Start iteration..." << std::endl;

    for(auto it = data_map.begin(); it != data_map.end(); it++) {
        count++;
    }

    std::cout << "End iteration, count: " << count << std::endl;
    return 0;
}

VS natvis visualizer

Hi!

It would be great to have a visualizer for VS.
Visualizers are very useful. I failed to write one on my own; the structure is rather complicated.

(feature request) Insert at prefix position

Hi,

Imagine inserting a whole filesystem structure into the hat-trie: you have folders with a bunch of files, more folders, and more files. At the moment, each insertion looks up the appropriate point in the trie on its own. For example, with a folder structure like:

/home/a/Downloads
/home/a/Documents

the insertion logic has to walk the trie to find the correct spot. That is fine for the first entry, but could be optimized for the second entry and especially for all entries after it.

I'd like to propose an insert function with a "fixed prefix" that takes a list (vector or map, depending on the type of container you use for the hat-trie) and inserts a bunch of elements at a given prefix. For instance, something like this:

tsl::htrie_set::insert("/home/a/", {"Downloads", "Documents"})

This is only really beneficial for filesystem-like structures, not for wordbook purposes, but it would still be a neat addition imho.

Best regards,
Mark

Issue serializing/deserializing a complex map?

Hi,

I am trying to use the hat_trie in my project.
But I am facing an issue on serializing the following:

tsl::htrie_map<char,set<pair<ll,string>>> data[N];

Here, it's an array of maps, each storing char-based keys with a set of pairs of integers and strings as the mapped value.
Can you help me out?

Couldn't serialize/deserialize tsl::htrie_map<char, vector<int>>

I tried inserting a few entries into an htrie_map:
tsl::htrie_map<char, std::vector<int>> map;

But then when I tried to serialize it using:
serializer serial("file_name");
map.serialize(serial);
I couldn't serialize it; it throws an error.
It would be great if you could help me with this.

EXC_BAD_ACCESS

Hi,
in my application, a crash appeared that was probably caused by the trie. CLion's debugger shows the following trace, beginning in the equal_prefix_range method:

[screenshot: stack trace]

After a veeeeery long recursion (btw, is the trie implemented using recursion?!), EXC_BAD_ACCESS is raised:

[screenshot: EXC_BAD_ACCESS]

For my data, the error appears always. Is it possible that it is a bug inside the trie?
Thank you

Marek

Is load factor 16 or greater possible?

  1. Is a load factor of 16 or greater possible?
  2. What's the maximum load factor possible? What does load factor mean?
  3. Is it possible to have memory usage lower than the actual size of the data?

[feature request] find longest prefix

I am missing a common feature of trie APIs: searching for the longest string in the trie that is a prefix of the input.

Example in pseudo code:

hattriemap = {"a", "as", "asdf"}
hattriemap.longest_prefix("asd") -> "as"

Is this possible with hat-tries? If yes, it would be a great addition to your already great library.
