
Comments (5)

asad-awadia avatar asad-awadia commented on June 2, 2024

@ajkr

asad-awadia avatar asad-awadia commented on June 2, 2024
corrupted size vs. prev_size
Aborted

I get different errors depending on what I set the block cache size to.

asad-awadia avatar asad-awadia commented on June 2, 2024
Stack: [0x00007f4c70c5e000,0x00007f4c70d5e000],  sp=0x00007f4c70d59960,  free space=1006k
Native frames: (J=compiled Java code, A=aot compiled Java code, j=interpreted, Vv=VM code, C=native code)
C  [librocksdbjni11698117139087703162.so+0x5b555a]  rocksdb::Configurable::ValidateOptions(rocksdb::DBOptions const&, rocksdb::ColumnFamilyOptions const&) const+0x68a
C  [librocksdbjni11698117139087703162.so+0x5cd733]  rocksdb::OptionTypeInfo::Validate(rocksdb::DBOptions const&, rocksdb::ColumnFamilyOptions const&, std::string const&, void const*) const+0xd3
C  [librocksdbjni11698117139087703162.so+0x5b519e]  rocksdb::Configurable::ValidateOptions(rocksdb::DBOptions const&, rocksdb::ColumnFamilyOptions const&) const+0x2ce
C  [librocksdbjni11698117139087703162.so+0x5cd366]  rocksdb::ValidateOptions(rocksdb::DBOptions const&, rocksdb::ColumnFamilyOptions const&)+0x126
C  [librocksdbjni11698117139087703162.so+0x3eb47b]  rocksdb::DBImpl::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**, bool, bool)+0x10b
C  [librocksdbjni11698117139087703162.so+0x3ed4c5]  rocksdb::DB::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::DB**)+0x15
C  [librocksdbjni11698117139087703162.so+0x7caac7]  rocksdb::OptimisticTransactionDB::Open(rocksdb::DBOptions const&, rocksdb::OptimisticTransactionDBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::OptimisticTransactionDB**)+0xa77
C  [librocksdbjni11698117139087703162.so+0x7cb971]  rocksdb::OptimisticTransactionDB::Open(rocksdb::DBOptions const&, std::string const&, std::vector<rocksdb::ColumnFamilyDescriptor, std::allocator<rocksdb::ColumnFamilyDescriptor> > const&, std::vector<rocksdb::ColumnFamilyHandle*, std::allocator<rocksdb::ColumnFamilyHandle*> >*, rocksdb::OptimisticTransactionDB**)+0x31

ajkr avatar ajkr commented on June 2, 2024

I tried it on Ubuntu 22.04:

$ uname -a
Linux ip-172-31-22-210 6.2.0-1018-aws #18~22.04.1-Ubuntu SMP Wed Jan 10 22:54:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Using rocksdbjni 8.11.3 from Maven central:

$ cat pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0
                             http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.example</groupId>
    <artifactId>RocksDBExample</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <dependency>
            <groupId>org.rocksdb</groupId>
            <artifactId>rocksdbjni</artifactId>
            <version>8.11.3</version>
        </dependency>
    </dependencies>
    <properties>
         <maven.compiler.source>1.8</maven.compiler.source>
         <maven.compiler.target>1.8</maven.compiler.target>
    </properties>
</project>

My program was just to open a DB using (roughly) your BlockBasedTableConfig:

$ cat src/main/java/RocksDBExample.java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.BloomFilter;
import org.rocksdb.Options;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;

public class RocksDBExample {

    public static void main(String[] args) {
        RocksDB.loadLibrary();
        Options options = new Options().setCreateIfMissing(true);
        options.setTableFormatConfig(getIdxBlockBasedTableConfig());
        try (RocksDB rocksDB = RocksDB.open(options, "/home/ubuntu/db")) {
        } catch (RocksDBException e) {
            e.printStackTrace();
        }
    }

    private static BlockBasedTableConfig getIdxBlockBasedTableConfig() {
        BlockBasedTableConfig blockBasedTableConfig = new BlockBasedTableConfig();
        blockBasedTableConfig.setIndexType(org.rocksdb.IndexType.kTwoLevelIndexSearch);
        // Enable bloom filter
        blockBasedTableConfig.setFilter(new BloomFilter(10, false));
        blockBasedTableConfig.setPartitionFilters(true);
        blockBasedTableConfig.setMetadataBlockSize(8192);
        // Set below as true to avoid OOM
        blockBasedTableConfig.setCacheIndexAndFilterBlocks(true);
        blockBasedTableConfig.setPinTopLevelIndexAndFilter(true);
        blockBasedTableConfig.setCacheIndexAndFilterBlocksWithHighPriority(true);
        blockBasedTableConfig.setPinL0FilterAndIndexBlocksInCache(true);
        blockBasedTableConfig.setWholeKeyFiltering(true);
        blockBasedTableConfig.setBlockCacheSize(104857600);
        long blockSizeBytes = 4096;
        blockBasedTableConfig.setBlockSize(blockSizeBytes);
        return blockBasedTableConfig;
    }
}

I built and ran it like this:

$ mvn compile && mvn exec:java -Dexec.mainClass="RocksDBExample"

It succeeded, and I checked that the block-based table options written to the OPTIONS file in /home/ubuntu/db were correct.
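As an aside, `setBlockCacheSize` is deprecated in recent RocksJava releases in favor of passing an explicit Cache object. A minimal sketch of the replacement, assuming rocksdbjni 8.x on the classpath (the 100 MiB capacity mirrors the 104857600 bytes in the config above):

```java
import org.rocksdb.BlockBasedTableConfig;
import org.rocksdb.LRUCache;
import org.rocksdb.RocksDB;

public class BlockCacheSketch {
    public static void main(String[] args) {
        RocksDB.loadLibrary();
        // An explicit cache handle can be shared across several table configs
        // or DB instances, so everything draws from one 100 MiB budget instead
        // of each open allocating its own cache.
        LRUCache blockCache = new LRUCache(100L * 1024 * 1024); // 104857600 bytes
        BlockBasedTableConfig tableConfig = new BlockBasedTableConfig();
        tableConfig.setBlockCache(blockCache);
    }
}
```

With cacheIndexAndFilterBlocks enabled, index and filter blocks compete with data blocks for this budget, which is one reason behavior can vary with the cache size.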

asad-awadia avatar asad-awadia commented on June 2, 2024

@ajkr yes, it works for me too :)

I am doing a migration from 5 old column families to 5 new ones - the comparators are different.

When I have only the 5 old ones - it works.
When I have only the 5 new ones - it works.

But when I open with all 10 - that is, while trying to do the migration - this crash happens, on Ubuntu only, every time.

That's why I was hoping you might have some insight into what the problem could be - is my block cache too small? Is my C++ code wrong? Is there an issue with having multiple column families with these options?

The code being tested here doesn't even reach the migration code - it dies directly on the open call.

Surprisingly, I can run the migration code [and the open with all 10 CFs] in an Alpine container on k8s - in our dev environments - without this crash,

but during our unit-test runs on Ubuntu [non-Alpine] it crashes.
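For what it's worth, here is a minimal sketch of what the all-10-CF open might look like, assuming rocksdbjni 8.x; the CF names ("old_cf_*"/"new_cf_*"), the BytewiseComparator stand-in, and the DB path are placeholders, not the actual setup. One thing worth checking in the real code: Java comparator objects must stay strongly referenced for the DB's whole lifetime, since letting them be garbage-collected while the native side still calls into them can produce exactly this kind of glibc heap-corruption abort (`corrupted size vs. prev_size`).

```java
import java.util.ArrayList;
import java.util.List;

import org.rocksdb.ColumnFamilyDescriptor;
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.ColumnFamilyOptions;
import org.rocksdb.ComparatorOptions;
import org.rocksdb.DBOptions;
import org.rocksdb.OptimisticTransactionDB;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.util.BytewiseComparator;

public class MigrationOpenSketch {

    // Hold strong references to every comparator for the lifetime of the DB;
    // if the JVM collects them while native code still calls back into them,
    // the process can die with heap-corruption errors rather than a Java exception.
    private static final List<BytewiseComparator> COMPARATORS = new ArrayList<>();

    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();

        List<ColumnFamilyDescriptor> descriptors = new ArrayList<>();
        // The default CF must be listed when opening with explicit descriptors.
        descriptors.add(new ColumnFamilyDescriptor(RocksDB.DEFAULT_COLUMN_FAMILY));
        for (int i = 0; i < 5; i++) {
            // Placeholder names; substitute the real old/new CF names and the
            // real old/new comparators here.
            descriptors.add(descriptor("old_cf_" + i));
            descriptors.add(descriptor("new_cf_" + i));
        }

        List<ColumnFamilyHandle> handles = new ArrayList<>();
        try (DBOptions dbOptions = new DBOptions()
                     .setCreateIfMissing(true)
                     .setCreateMissingColumnFamilies(true);
             OptimisticTransactionDB db = OptimisticTransactionDB.open(
                     dbOptions, "/home/ubuntu/db", descriptors, handles)) {
            // ... migration would copy rows from old_cf_* to new_cf_* here ...
            for (ColumnFamilyHandle handle : handles) {
                handle.close();
            }
        }
    }

    private static ColumnFamilyDescriptor descriptor(String name) {
        BytewiseComparator comparator = new BytewiseComparator(new ComparatorOptions());
        COMPARATORS.add(comparator);
        ColumnFamilyOptions options = new ColumnFamilyOptions().setComparator(comparator);
        return new ColumnFamilyDescriptor(name.getBytes(), options);
    }
}
```

Since the stack trace dies inside Configurable::ValidateOptions, it may also help to bisect: open with the default CF plus one old/new pair, then add pairs until the crash reproduces.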
