
mongo-rocks's Introduction

RocksDB Storage Engine Module for MongoDB

Stable Versions/Branches

  • v3.2
  • v3.4
  • v4.0.3
  • v4.2.5

How to build

See BUILD.md

More information

To use this module, it has to be linked from mongo/src/mongo/db/modules; the build system will then recognize it automatically. In the mongo repository directory, do the following:

mkdir -p src/mongo/db/modules/
ln -sf ~/mongo-rocks src/mongo/db/modules/rocks

To build, you first need to install the RocksDB library; see INSTALL.md at https://github.com/facebook/rocksdb for more information. If you installed RocksDB in a non-standard location, you may need to set the CPPPATH and LIBPATH environment variables:

CPPPATH=/myrocksdb/include LIBPATH=/myrocksdb/lib scons
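Before starting the long scons build, it can save time to confirm those paths are valid. A minimal sketch; the /myrocksdb prefix is the example's assumed install location:

```shell
# sanity check before invoking scons: both the RocksDB headers and the
# static library must be visible at the paths given to CPPPATH/LIBPATH
test -f /myrocksdb/include/rocksdb/db.h && test -f /myrocksdb/lib/librocksdb.a \
    && echo "RocksDB install looks complete" \
    || echo "RocksDB headers or library missing at the given prefix"
```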

Reach out

If you have any issues with MongoRocks, please open an issue on the GitHub issue tracker.

mongo-rocks's People

Contributors

acmorrow, agfeldman, alabid, alexkleiman, amidvidy, benety, danglifei-006, daveh86, denis-protivenskii, dpercy, dstorch, eharry, erh, geertbosch, igorcanadi, igorsol, islamabdelrahman, kaloianm, kangas, krkos, milkie, mm304321141, redbeard0531, siying, sunlike-lipo, tylerbrock, visemet, wolfkdy, yeliang1006, yuanxiaolai


mongo-rocks's Issues

error while building mongo with mongo-rocks

Hi,

I am running the following script on Debian Stretch:

MONGO_VERSION=3.6.2
ROCKSDB_VERSION=5.9.2
# misc
echo "deb http://ftp.us.debian.org/debian unstable main contrib non-free" > /etc/apt/sources.list.d/unstable.list \
&&  apt-get update \
&&  apt-get install -y \
    build-essential \
    git \
    binutils \
    python \
    scons \
    libssl-dev \
    gcc-5 \
    libbz2-dev \
    libsnappy-dev \
    zlib1g-dev \
    wget
# RocksDB
git clone https://github.com/facebook/rocksdb.git
cd rocksdb &&  git checkout tags/v$ROCKSDB_VERSION \
&&  USE_RTTI=1 CFLAGS="-fPIC" CXXFLAGS="-flto -Os -s" make -j$(nproc) static_lib \
&&  make install
# MongoDB
git clone https://github.com/mongodb-partners/mongo-rocks.git /mongo-rocks \
&&  cd /mongo-rocks && git checkout tags/r$MONGO_VERSION
git clone https://github.com/mongodb/mongo.git /mongo
apt-get install -y python-dev
wget https://bootstrap.pypa.io/get-pip.py \
&&  python get-pip.py
cd /mongo && git checkout tags/r$MONGO_VERSION \
&&  mkdir -p src/mongo/db/modules/ \
&&  ln -sf /mongo-rocks src/mongo/db/modules/rocks \
&&  pip install -r buildscripts/requirements.txt \
&&  CXXFLAGS="-flto -Os -s" scons CPPPATH=/usr/local/include LIBPATH=/usr/local/lib -j$(nproc) --disable-warnings-as-errors --release --prefix=/usr --opt core --ssl install
strip /usr/bin/mongoperf \
&&  strip /usr/bin/mongo \
&&  strip /usr/bin/mongod \
&&  strip /usr/bin/mongos
# mongotools
GO_VERSION=1.9.1
wget https://storage.googleapis.com/golang/go$GO_VERSION.linux-amd64.tar.gz -P /usr/local \
&&  tar -C /usr/local -xzf /usr/local/go$GO_VERSION.linux-amd64.tar.gz \
&&  export PATH=$PATH:/usr/local/go/bin
git clone https://github.com/mongodb/mongo-tools /mongo-tools \
&&  cd /mongo-tools && git checkout tags/r$MONGO_VERSION
TOOLS_PKG='github.com/mongodb/mongo-tools'
rm -rf .gopath/ \
&&  mkdir -p .gopath/src/"$(dirname "${TOOLS_PKG}")" \
&&  ln -sf `pwd` .gopath/src/$TOOLS_PKG \
&&  export GOPATH=`pwd`/.gopath:`pwd`/vendor
go build -o /usr/bin/bsondump bsondump/main/bsondump.go \
&&  go build -o /usr/bin/mongoimport mongoimport/main/mongoimport.go \
&&  go build -o /usr/bin/mongoexport mongoexport/main/mongoexport.go \
&&  go build -o /usr/bin/mongodump mongodump/main/mongodump.go \
&&  go build -o /usr/bin/mongorestore mongorestore/main/mongorestore.go \
&&  go build -o /usr/bin/mongostat mongostat/main/mongostat.go \
&&  go build -o /usr/bin/mongofiles mongofiles/main/mongofiles.go \
&&  go build -o /usr/bin/mongooplog mongooplog/main/mongooplog.go \
&&  go build -o /usr/bin/mongotop mongotop/main/mongotop.go
strip /usr/bin/bsondump \
&&  strip /usr/bin/mongoimport \
&&  strip /usr/bin/mongoexport \
&&  strip /usr/bin/mongodump \
&&  strip /usr/bin/mongorestore \
&&  strip /usr/bin/mongostat \
&&  strip /usr/bin/mongofiles \
&&  strip /usr/bin/mongooplog \
&&  strip /usr/bin/mongotop

This fails at the scons step (CXXFLAGS="-flto -Os -s" scons CPPPATH=/usr/local/include LIBPATH=/usr/local/lib -j$(nproc) --disable-warnings-as-errors --release --prefix=/usr --opt core --ssl install) with the following trace:

g++ -o build/opt/third_party/mozjs-45/platform/x86_64/linux/build/Unified_cpp_js_src34.o -c -Woverloaded-virtual -Wno-maybe-uninitialized -std=c++14 -Wno-non-virtual-dtor -fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-const-variable -Wno-unused-but-set-variable -Wno-missing-braces -fstack-protector-strong -fno-builtin-memcmp -include js-confdefs.h -Wno-invalid-offsetof -fPIE -DIMPL_MFBT -DJS_USE_CUSTOM_ALLOCATOR -DSTATIC_JS_API=1 -DU_NO_DEFAULT_INCLUDE_UTF_HEADERS=1 -DPCRE_STATIC -DNDEBUG -D_FORTIFY_SOURCE=2 -DBOOST_SYSTEM_NO_DEPRECATED -DBOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS -Ibuild/opt/third_party/mozjs-45/extract/js/src -Isrc/third_party/mozjs-45/extract/js/src -Ibuild/opt/third_party/mozjs-45/extract/mfbt -Isrc/third_party/mozjs-45/extract/mfbt -Ibuild/opt/third_party/mozjs-45/extract/intl/icu/source/common -Isrc/third_party/mozjs-45/extract/intl/icu/source/common -Ibuild/opt/third_party/mozjs-45/include -Isrc/third_party/mozjs-45/include -Ibuild/opt/third_party/mozjs-45/mongo_sources -Isrc/third_party/mozjs-45/mongo_sources -Ibuild/opt/third_party/mozjs-45/platform/x86_64/linux/build -Isrc/third_party/mozjs-45/platform/x86_64/linux/build -Ibuild/opt/third_party/mozjs-45/platform/x86_64/linux/include -Isrc/third_party/mozjs-45/platform/x86_64/linux/include -Isrc/third_party/zlib-1.2.8 -I/usr/local/include src/third_party/mozjs-45/platform/x86_64/linux/build/Unified_cpp_js_src34.cpp
In file included from src/mongo/db/modules/rocks/src/rocks_engine.h:52:0,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:63:37: error: 'SnapshotName' does not name a type; did you mean 'Snapshotted'?
     void setCommittedSnapshot(const SnapshotName& name, Timestamp ts) final;
                                     ^~~~~~~~~~~~
                                     Snapshotted
src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:63:10: error: 'void mongo::RocksSnapshotManager::setCommittedSnapshot(const int&, mongo::Timestamp)' marked 'final', but is not virtual
     void setCommittedSnapshot(const SnapshotName& name, Timestamp ts) final;
          ^~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:34:0,
                 from src/mongo/db/modules/rocks/src/rocks_engine.h:52,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/storage/snapshot_manager.h:72:18: warning: 'virtual void mongo::SnapshotManager::setCommittedSnapshot(const mongo::Timestamp&)' was hidden [-Woverloaded-virtual]
     virtual void setCommittedSnapshot(const Timestamp& timestamp) = 0;
                  ^~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_engine.h:52:0,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:63:10: warning:   by 'void mongo::RocksSnapshotManager::setCommittedSnapshot(const int&, mongo::Timestamp)' [-Woverloaded-virtual]
     void setCommittedSnapshot(const SnapshotName& name, Timestamp ts) final;
          ^~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:0:
src/mongo/db/modules/rocks/src/rocks_engine.h:205:30: error: cannot declare field 'mongo::RocksEngine::_snapshotManager' to be of abstract type 'mongo::RocksSnapshotManager'
         RocksSnapshotManager _snapshotManager;
                              ^~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_engine.h:52:0,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:44:7: note:   because the following virtual functions are pure within 'mongo::RocksSnapshotManager':
 class RocksSnapshotManager final : public SnapshotManager {
       ^~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:34:0,
                 from src/mongo/db/modules/rocks/src/rocks_engine.h:52,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/storage/snapshot_manager.h:72:18: note:     virtual void mongo::SnapshotManager::setCommittedSnapshot(const mongo::Timestamp&)
     virtual void setCommittedSnapshot(const Timestamp& timestamp) = 0;
                  ^~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/modules/rocks/src/rocks_engine.cpp:72:0:
src/mongo/db/modules/rocks/src/rocks_recovery_unit.h:101:25: error: 'SnapshotName' was not declared in this scope
         boost::optional<SnapshotName> getMajorityCommittedSnapshot() const final;
                         ^~~~~~~~~~~~
src/mongo/db/modules/rocks/src/rocks_recovery_unit.h:101:25: note: suggested alternative: 'Snapshotted'
         boost::optional<SnapshotName> getMajorityCommittedSnapshot() const final;
                         ^~~~~~~~~~~~
                         Snapshotted
src/mongo/db/modules/rocks/src/rocks_recovery_unit.h:101:37: error: template argument 1 is invalid
         boost::optional<SnapshotName> getMajorityCommittedSnapshot() const final;
                                     ^
src/mongo/db/modules/rocks/src/rocks_recovery_unit.h:101:39: error: conflicting return type specified for 'virtual int mongo::RocksRecoveryUnit::getMajorityCommittedSnapshot() const'
         boost::optional<SnapshotName> getMajorityCommittedSnapshot() const final;
                                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from src/mongo/db/operation_context.h:39:0,
                 from src/mongo/db/storage/snapshot_manager.h:36,
                 from src/mongo/db/modules/rocks/src/rocks_snapshot_manager.h:34,
                 from src/mongo/db/modules/rocks/src/rocks_engine.h:52,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:34:
src/mongo/db/storage/recovery_unit.h:129:40: error:   overriding 'virtual boost::optional<mongo::Timestamp> mongo::RecoveryUnit::getMajorityCommittedSnapshot() const'
     virtual boost::optional<Timestamp> getMajorityCommittedSnapshot() const {
                                        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_engine.o] Error 1
scons: building terminated because of errors.

Do you have any idea what I am missing here?
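One common cause of exactly this class of error (the module still referring to SnapshotName while the server headers only have Timestamp) may be that one of the two checkouts silently stayed on its previous branch. A hedged diagnostic sketch, assuming the /mongo and /mongo-rocks paths from the script above:

```shell
# a failed `git checkout tags/...` (e.g. a tag that does not exist in that
# repository) leaves the previous branch checked out, and the module then
# compiles against mismatched server headers -- verify both trees report
# a matching tag before building
MONGO_VERSION=3.6.2
for repo in /mongo /mongo-rocks; do
    echo "$repo -> $(git -C "$repo" describe --tags --always 2>/dev/null)"
done
# both lines should name r3.6.2 (or a tag derived from it); if mongo-rocks
# prints something else, re-run: git -C /mongo-rocks checkout "tags/r$MONGO_VERSION"
```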

mismatch between MongoDB and RocksDB

I just tried to build from the two latest GitHub source trees. The build instructions are a little out of date, though they mostly worked.

Here are some "hacks" to make it work out of the box.

diff --git a/SConscript b/SConscript
index 0cc053c..578a7f0 100644
--- a/SConscript
+++ b/SConscript
@@ -40,6 +40,8 @@ env.Library(
     ],
     SYSLIBDEPS=["rocksdb",
                 "z",
+                "snappy",
+                "lz4",
                 "bz2"] #z and bz2 are dependencies for rocks
                + dynamic_syslibdeps
     )
diff --git a/src/rocks_engine.cpp b/src/rocks_engine.cpp
index 2bfbac1..f1bbff9 100644
--- a/src/rocks_engine.cpp
+++ b/src/rocks_engine.cpp
@@ -270,7 +270,7 @@ namespace mongo {
             uint32_t identPrefix = static_cast<uint32_t>(element.numberInt());

             _identMap[StringData(ident.data(), ident.size())] =
-                std::move(identConfig.getOwned());
+                identConfig.getOwned();

             _maxPrefix = std::max(_maxPrefix, identPrefix);
         }
@@ -566,7 +566,7 @@ namespace mongo {
         prefix = ++_maxPrefix;
         configBuilder->append("prefix", static_cast<int32_t>(prefix));

-        config = std::move(configBuilder->obj());
+        config = configBuilder->obj();
         _identMap[ident] = config.copy();
     }

MongoDB insert slowdown after dropping a big collection

I am testing pure insert performance (inserts only) of MongoDB + RocksDB.
Sometimes, after dropping a big collection (a 500-million-document collection generated by the sysbench mongodb test tool), insert performance drops drastically and periodically.

This is the insert/sec throughput from mongostat:

insert	query	update	delete	getmore	command
33130	*0	*0	*0	0	37|0	6
26741	*0	*0	*0	0	30|0	6
33356	*0	*0	*0	0	37|0	6
32758	*0	*0	*0	0	36|0	6
34051	*0	*0	*0	0	37|0	6
33034	*0	*0	*0	0	38|0	6
33856	*0	*0	*0	0	37|0	6
34155	*0	*0	*0	0	39|0	6
32910	*0	*0	*0	0	37|0	6
12993	*0	*0	*0	0	15|0	6
126	*0	*0	*0	0	1|0	6
128	*0	*0	*0	0	4|0	6
151	*0	*0	*0	0	5|0	6
94	*0	*0	*0	0	2|0	6
590	*0	*0	*0	0	11|0	6
367	*0	*0	*0	0	5|0	6
970	*0	*0	*0	0	6|0	6
369	*0	*0	*0	0	3|0	6
225	*0	*0	*0	0	2|0	6
380	*0	*0	*0	0	11|0	6
467	*0	*0	*0	0	10|0	6
513	*0	*0	*0	0	4|0	6
247	*0	*0	*0	0	4|0	6
364	*0	*0	*0	0	3|0	6

Sometimes this performance drop lasts for 5 minutes. I have observed the following during the slowdown:

  1. No slowdown or stall caused by L0 compaction delay.
  2. The slowdown happens within several seconds after manual compaction is called
    (2018-01-07T23:20:51.450+0900 D STORAGE [RocksCompactionThread] Starting compaction of range: 00000007 .. 000000075A51F65100008E0E (rangeDropped is 0)).

When there is no big collection drop, this huge slowdown does not happen, so I think it is caused by the combination of the huge collection drop and the manual compaction issued by MongoRocks.

I use level compaction, which is automatic (I think), so:

  1. Why does MongoRocks need to issue manual compaction (https://github.com/mongodb-partners/mongo-rocks/blob/master/src/rocks_compaction_scheduler.cpp#L248-L265)?
  2. Is this really caused by the big collection drop, and is this slowdown expected?
  3. Is there any way to avoid this manual compaction?

Here is a stack trace captured at the moment of the slowdown:
mongorocks-slowdown-stack.txt
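For correlating such stalls with compaction activity, MongoRocks exposes engine counters through serverStatus (generated by rocks_server_status.cpp, mentioned elsewhere on this page). A minimal sketch, assuming a locally running mongod built with the rocksdb engine and that the section is exposed under the rocksdb key:

```shell
# dump the rocksdb section of serverStatus; sampling the compaction byte
# counters before and after a collection drop shows the extra compaction
# work the drop triggers
mongo --quiet --eval 'printjson(db.serverStatus().rocksdb)'
```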

New feature request: use more than one DB in RocksDB

Hello all,

I am using the RocksDB engine for MongoDB, and it works well.

However, to manage our nodes better, we need more than one DB at the RocksDB layer.

Currently all MongoDB databases are stored in RocksDB as a single DB, so we cannot migrate one database to another machine by copying its files. If the directoryperdb parameter were supported by the RocksDB engine, this would help us.

Could you consider supporting the --directoryperdb parameter, i.e. storing each MongoDB database in its own directory?
Thanks

Drop collection?

With the RocksDB engine for MongoDB, is it a bug that dropping a collection
does not delete the data on disk?

"File too large" error on most write ops

> db.Contracts4API_v3.ensureIndex({fz: 1, placingWayCode: 1, signDate: -1})
{
        "ok" : 0,
        "errmsg" : "IO error: While appending to file: /data/mongodb/db/070269.sst: File too large",
        "code" : 1,
        "codeName" : "InternalError"
}

The same error can be seen in background compaction log:

2018/01/03-01:37:32.183489 7f85302e8700 [db/compaction_job.cc:1437] [default] [JOB 3] Compacting 4@0 + 8@3 files to L3, score 1.00
2018/01/03-01:37:32.183501 7f85302e8700 [db/compaction_job.cc:1441] [default] Compaction start summary: Base version 2 Base level 0, inputs: [70255(16MB) 70253(40MB) 70251(53MB) 70249(48MB)], [70240(64MB) 70241(64MB) 70242(64MB) 70243(64MB) 70244(64MB) 70245(64MB) 70246(66MB) 70247(22MB)]
2018/01/03-01:37:32.183531 7f85302e8700 EVENT_LOG_v1 {"time_micros": 1514932652183514, "job": 3, "event": "compaction_started", "files_L0": [70255, 70253, 70251, 70249], "files_L3": [70240, 70241, 70242, 70243, 70244, 70245, 70246, 70247], "score": 1, "input_data_size": 664650099}
2018/01/03-01:37:55.800970 7f85302e8700 [WARN] [db/db_impl_compaction_flush.cc:1653] Compaction error: IO error: While appending to file: /data/mongodb/db/070269.sst: File too large
2018/01/03-01:37:55.800993 7f85302e8700 (Original Log Time 2018/01/03-01:37:55.800891) [db/compaction_job.cc:621] [default] compacted to: base level 3 max bytes base 536870912 files[4 0 0 8 23 183 1375] max score 1.00, MB/sec: 28.1 rd, 2.8 wr, level 3, files in(4, 8) out(1) MB in(159.3, 474.6) out(63.1), read-write-amplify(4.4) write-amplify(0.4) IO error: While appending to file: /data/mongodb/db/070269.sst: File too large, records in: 6606920, records dropped: 13
2018/01/03-01:37:55.801000 7f85302e8700 (Original Log Time 2018/01/03-01:37:55.800949) EVENT_LOG_v1 {"time_micros": 1514932675800921, "job": 3, "event": "compaction_finished", "compaction_time_micros": 23617298, "output_level": 3, "num_output_files": 1, "total_output_size": 66180284, "num_input_records": 6606920, "num_output_records": 6606907, "num_subcompactions": 1, "num_single_delete_mismatches": 0, "num_single_delete_fallthrough": 0, "lsm_state": [4, 0, 0, 8, 23, 183, 1375]}
2018/01/03-01:37:55.801007 7f85302e8700 [ERROR] [db/db_impl_compaction_flush.cc:1333] Waiting after background compaction error: IO error: While appending to file: /data/mongodb/db/070269.sst: File too large, Accumulated background error counts: 1
2018/01/03-01:37:56.813827 7f85302e8700 EVENT_LOG_v1 {"time_micros": 1514932676813817, "job": 3, "event": "table_file_deletion", "file_number": 70269}

The filesystem is ext4, target_file_size_base=67108864, rocksdb directory size is 107 GB and there are 53 GB free space in the filesystem. I don't quite understand what is happening here.
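"File too large" is the message for EFBIG, which a write also receives when it crosses the process's file-size rlimit, so it is worth ruling that limit out before blaming ext4 (whose per-file limit is in the terabytes, far above a 64 MB target_file_size_base). A quick check:

```shell
# check the file-size limit of the shell that launches mongod; a value
# other than "unlimited" (it is reported in 512-byte blocks) can make
# RocksDB's SST appends fail with exactly this EFBIG error
ulimit -f
```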

RPM binaries don't work

error: Failed dependencies:
    libc.so.6(GLIBC_2.14)(64bit) is needed by mongodb-org-mongos-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.14)(64bit) is needed by mongodb-org-mongos-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.15)(64bit) is needed by mongodb-org-mongos-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.18)(64bit) is needed by mongodb-org-mongos-3.0.4-1.el6.x86_64
    libc.so.6(GLIBC_2.14)(64bit) is needed by mongodb-org-server-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.14)(64bit) is needed by mongodb-org-server-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.15)(64bit) is needed by mongodb-org-server-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.18)(64bit) is needed by mongodb-org-server-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.19)(64bit) is needed by mongodb-org-server-3.0.4-1.el6.x86_64
    libc.so.6(GLIBC_2.14)(64bit) is needed by mongodb-org-shell-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.14)(64bit) is needed by mongodb-org-shell-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.15)(64bit) is needed by mongodb-org-shell-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.18)(64bit) is needed by mongodb-org-shell-3.0.4-1.el6.x86_64
    libc.so.6(GLIBC_2.14)(64bit) is needed by mongodb-org-tools-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.14)(64bit) is needed by mongodb-org-tools-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.15)(64bit) is needed by mongodb-org-tools-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.18)(64bit) is needed by mongodb-org-tools-3.0.4-1.el6.x86_64
    libstdc++.so.6(GLIBCXX_3.4.19)(64bit) is needed by mongodb-org-tools-3.0.4-1.el6.x86_64

From CentOS 6's latest libstdc++:

$ strings /usr/lib64/libstdc++.so.6.0.13 | grep GLIBCXX
GLIBCXX_3.4
GLIBCXX_3.4.1
GLIBCXX_3.4.2
GLIBCXX_3.4.3
GLIBCXX_3.4.4
GLIBCXX_3.4.5
GLIBCXX_3.4.6
GLIBCXX_3.4.7
GLIBCXX_3.4.8
GLIBCXX_3.4.9
GLIBCXX_3.4.10
GLIBCXX_3.4.11
GLIBCXX_3.4.12
GLIBCXX_3.4.13
GLIBCXX_FORCE_NEW
GLIBCXX_DEBUG_MESSAGE_LENGTH

compile failure in rocks_server_status.cpp

https://jira.mongodb.org/browse/SERVER-31853

2017/10/26 14:44:07.898] src/mongo/db/modules/rocksdb/src/rocks_server_status.cpp: In member function 'virtual mongo::BSONObj mongo::RocksServerStatusSection::generateSection(mongo::OperationContext*, const mongo::BSONElement&) const':
[2017/10/26 14:44:07.898] src/mongo/db/modules/rocksdb/src/rocks_server_status.cpp:73:76: error: no matching function for call to 'mongo::Lock::GlobalLock::GlobalLock(mongo::Locker*, mongo::LockMode, unsigned int)'
[2017/10/26 14:44:07.898]          Lock::GlobalLock lk(opCtx->lockState(), LockMode::MODE_IS, UINT_MAX);
[2017/10/26 14:44:07.898]                                                                             ^
[2017/10/26 14:44:07.898] In file included from src/mongo/db/modules/rocksdb/src/rocks_server_status.cpp:41:0:
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:195:9: note: candidate: mongo::Lock::GlobalLock::GlobalLock(mongo::OperationContext*, mongo::LockMode, unsigned int, mongo::Lock::GlobalLock::EnqueueOnly)
[2017/10/26 14:44:07.898]          GlobalLock(OperationContext* opCtx,
[2017/10/26 14:44:07.898]          ^
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:195:9: note:   candidate expects 4 arguments, 3 provided
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:186:9: note: candidate: mongo::Lock::GlobalLock::GlobalLock(mongo::Lock::GlobalLock&&)
[2017/10/26 14:44:07.898]          GlobalLock(GlobalLock&&);
[2017/10/26 14:44:07.898]          ^
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:186:9: note:   candidate expects 1 argument, 3 provided
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:185:9: note: candidate: mongo::Lock::GlobalLock::GlobalLock(mongo::OperationContext*, mongo::LockMode, unsigned int)
[2017/10/26 14:44:07.898]          GlobalLock(OperationContext* opCtx, LockMode lockMode, unsigned timeoutMs);
[2017/10/26 14:44:07.898]          ^
[2017/10/26 14:44:07.898] src/mongo/db/concurrency/d_concurrency.h:185:9: note:   no known conversion for argument 1 from 'mongo::Locker*' to 'mongo::OperationContext*'

TTL feature

Hi,

We use RocksDB in production with the TTL layer to auto-expire time-series data, but we now need more of the features Mongo has (clustering, indexes, replication, etc.), so we are evaluating different database solutions. After studying the mongo-rocks code, I found that the TTL feature is implemented in the base (Mongo) layer and does not use native RocksDB TTL.

My questions are:

  • Is it possible to map native RocksDB TTL into Mongo?
  • Does a high-level TTL implementation (in Mongo) impact performance more than a native one (in RocksDB), or is it not a concern?

Aleksander
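For context on the Mongo-level mechanism being asked about: MongoDB's TTL is driven by a server-side monitor thread that deletes documents through an index created with expireAfterSeconds, so under MongoRocks expired documents become ordinary deletes (RocksDB tombstones) rather than whole dropped SST files. A sketch, assuming a running mongod and a hypothetical collection named events with a createdAt date field:

```shell
mongo --quiet --eval '
  // standard MongoDB TTL index: documents are removed roughly 3600 s
  // after their createdAt timestamp -- by the server, not by the engine
  db.events.createIndex({ createdAt: 1 }, { expireAfterSeconds: 3600 });
'
```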

Build Errors

Hi guys, I was following these steps to compile MongoDB with the RocksDB storage engine:

# install compression libraries (zlib, bzip2, snappy):
sudo apt-get install zlib1g-dev; sudo apt-get install libbz2-dev; sudo apt-get install libsnappy-dev
# get rocksdb
git clone https://github.com/facebook/rocksdb.git
# compile rocksdb
cd rocksdb; make static_lib; sudo INSTALL_PATH=/usr make install; cd ..
# get mongo
git clone https://github.com/mongodb/mongo.git
# get mongorocks
git clone https://github.com/mongodb-partners/mongo-rocks
# add rocksdb module to mongo
mkdir -p mongo/src/mongo/db/modules/
ln -sf ~/mongo-rocks mongo/src/mongo/db/modules/rocks
# compile mongo
cd mongo; scons  # ----> error here

Here is the error message:

Compiling build/opt/mongo/db/modules/rocks/src/rocks_record_store_mongod.o
In file included from src/mongo/db/modules/rocks/src/rocks_record_store_mongod.cpp:51:0:
src/mongo/db/modules/rocks/src/rocks_record_store.h:131:38: error: invalid covariant return type for 'virtual mongo::StatusWith<mongo::RecordId> mongo::RocksRecordStore::updateRecord(mongo::OperationContext*, const mongo::RecordId&, const char*, int, bool, mongo::UpdateNotifier*)'
         virtual StatusWith<RecordId> updateRecord( OperationContext* txn,
                                      ^
In file included from src/mongo/db/catalog/index_catalog.h:39:0,
                 from src/mongo/db/catalog/collection.h:43,
                 from src/mongo/db/modules/rocks/src/rocks_record_store_mongod.cpp:38:
src/mongo/db/storage/record_store.h:396:20: error:   overriding 'virtual mongo::Status mongo::RecordStore::updateRecord(mongo::OperationContext*, const mongo::RecordId&, const char*, int, bool, mongo::UpdateNotifier*)'
     virtual Status updateRecord(OperationContext* txn,
                    ^
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_record_store_mongod.o] Error 1
scons: building terminated because of errors.
build/opt/mongo/db/modules/rocks/src/rocks_record_store_mongod.o failed: Error 1

Regards,
Hebron

Cache size question

Hi,

I have modified the code to allow specifying the cache size in megabytes. The reason is that we run a directory per DB, i.e. a separate instance for each client. Since the block cache is an LRU cache, the more frequently used documents will stay in it.

Is there any reason why you made the cache size start at 1 GB?

Regards.
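For anyone landing here: in MongoRocks builds the block cache size is a startup option. The option name below follows Percona's MongoRocks documentation; treat it as an assumption for other branches:

```shell
# sketch: run each per-client mongod with a smaller RocksDB block cache;
# the option takes whole gigabytes, which is what motivates the question
# above about megabyte granularity
mongod --storageEngine rocksdb --dbpath /data/client1 --rocksdbCacheSizeGB 1
```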

Doubts about oplog deletions

I reviewed mongo-rocks carefully, but I have some doubts that I am not sure you have considered.

A cursor is opened with a snapshot ID, and compaction will not physically delete data that may still be visible to a snapshot. So a stale secondary holds a cursor, and the cursor pins the snapshot ID, which may lead to two problems:
a: a secondary will never go stale in MongoRocks, because deletions are not visible to it and physical deletion cannot happen while the very small snapshot ID held by the secondary's cursor is alive.
b: the disk may fill up as a consequence of problem a.

error: 'NUMBER_DB_SEEK' is not a member of 'rocksdb'

When I try to compile mongo with the rocksdb module, I receive the following errors:

Compiling build/opt/mongo/db/modules/rocks/src/rocks_server_status.o
src/mongo/db/modules/rocks/src/rocks_server_status.cpp: In member function 'virtual mongo::BSONObj mongo::RocksServerStatusSection::generateSection(mongo::OperationContext*, const mongo::BSONElement&) const':
src/mongo/db/modules/rocks/src/rocks_server_status.cpp:184:14: error: 'NUMBER_DB_SEEK' is not a member of 'rocksdb'
{rocksdb::NUMBER_DB_SEEK, "num-seeks"},
^
src/mongo/db/modules/rocks/src/rocks_server_status.cpp:185:14: error: 'NUMBER_DB_NEXT' is not a member of 'rocksdb'
{rocksdb::NUMBER_DB_NEXT, "num-forward-iterations"},
^
src/mongo/db/modules/rocks/src/rocks_server_status.cpp:186:14: error: 'NUMBER_DB_PREV' is not a member of 'rocksdb'
{rocksdb::NUMBER_DB_PREV, "num-backward-iterations"},
^
src/mongo/db/modules/rocks/src/rocks_server_status.cpp:192:14: error: 'ITER_BYTES_READ' is not a member of 'rocksdb'
{rocksdb::ITER_BYTES_READ, "bytes-read-iteration"},
^
src/mongo/db/modules/rocks/src/rocks_server_status.cpp:196:11: error: could not convert '{{NUMBER_KEYS_WRITTEN, "num-keys-written"}, {NUMBER_KEYS_READ, "num-keys-read"}, {, "num-seeks"}, {, "num-forward-iterations"}, {, "num-backward-iterations"}, {BLOCK_CACHE_MISS, "block-cache-misses"}, {BLOCK_CACHE_HIT, "block-cache-hits"}, {BLOOM_FILTER_USEFUL, "bloom-filter-useful"}, {BYTES_WRITTEN, "bytes-written"}, {BYTES_READ, "bytes-read-point-lookup"}, {, "bytes-read-iteration"}, {FLUSH_WRITE_BYTES, "flush-bytes-written"}, {COMPACT_READ_BYTES, "compaction-bytes-read"}, {COMPACT_WRITE_BYTES, "compaction-bytes-written"}}' from '' to 'const std::vector<std::pair<rocksdb::Tickers, std::basic_string > >'
};
^
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_server_status.o] Error 1
scons: building terminated because of errors.
build/opt/mongo/db/modules/rocks/src/rocks_server_status.o failed: Error 1

I am compiling on a fresh installation of CentOS 6.5.

Random failures of 'CheckReplOplog' in *_passthrough tests suites

Hello,

This issue affects all passthrough test suites. Here is the list (probably incomplete):

  • read_concern_majority_passthrough
  • aggregation_read_concern_majority_passthrough
  • replica_sets_jscore_passthrough
  • read_concern_linearizable_passthrough

This issue also affects both 3.4 and master branches. You can see example failures here (see the red lines in the left column on the pages):

Our investigation has shown that this issue is caused by this commit: 422336 by @AdallomRoy

We also have a possible fix here: #107
We are testing this fix now and so far the results are good, but it would be great to have more opinions and reviews.

The performance of mongo-rocks

Hi,
I used YCSB to test MongoDB performance with the RocksDB and WiredTiger storage engines. The read/write ratio was 1/9 and the record count was 5,000,000, but I found no big performance difference between RocksDB and WiredTiger. I know RocksDB is meant to suit workloads that write much more than they read, but the comparison with WiredTiger does not show it. I found these test results on GitHub: https://github.com/wiredtiger/wiredtiger/wiki/YCSB-Mapkeeper-benchmark
So I want to know the purpose of MongoDB supporting RocksDB, and which workloads RocksDB is suited for.

Thanks in advance.

undefined reference to `rocksdb::Iterator::Iterator()'

Trying to build version v3.0.5-mongorocks produces this error after a very long compile:

build/linux2/normal/mongo/db/storage/rocks/rocks_recovery_unit.o: In function `RocksIterator':
/opt/mongoRocks03/rocksdb/mongo/src/mongo/db/storage/rocks/rocks_recovery_unit.h:67: undefined reference to `rocksdb::Iterator::Iterator()'
/opt/mongoRocks03/rocksdb/mongo/src/mongo/db/storage/rocks/rocks_recovery_unit.h:67: undefined reference to `rocksdb::Iterator::Iterator()'
collect2: error: ld returned 1 exit status
scons: *** [build/linux2/normal/mongo/mongod] Error 1
scons: building terminated because of errors.
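A link error like this usually means the linker resolved against a stale or mismatched librocksdb rather than the one just built. A hedged check, assuming the library was installed under /usr/local/lib:

```shell
# confirm the archive the linker will find actually defines the missing
# symbol; if grep prints nothing, the installed library does not match
# the headers mongo was compiled against
nm -C /usr/local/lib/librocksdb.a 2>/dev/null | grep 'Iterator::Iterator' \
    || echo "symbol not found -- rebuild/reinstall RocksDB (make static_lib && make install)"
```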

Drop database caused DB crash?

Hard to say if related to Mongo or to MongoRocks specifically, but I had both WT and MMAPv1 nodes connected to the same replicaSet, and they remained healthy.

Here is what I have from the log:

2015-12-07T10:07:28.136+0000 I COMMAND  [repl writer worker 11] dropDatabase Tenant_1000 starting
2015-12-07T10:07:28.151+0000 I -        [repl writer worker 11] Invariant failure entry->getTotalIndexCount(opCtx) == entry->getCompletedIndexCount(opCtx) src/mongo/db/storage/kv/kv_database_catalog_entry.cpp 323
2015-12-07T10:07:28.184+0000 I CONTROL  [repl writer worker 11]
 0x121d63e 0x11be10e 0x11a0662 0xef92ca 0xefd557 0x9c2754 0xafebd5 0xaf567a 0xaf688e 0xaf781c 0xe0d94c 0xe8c000 0xe91875 0x11b3505 0x12710bd 0x7f4858b51182 0x7f4857c4847d
----- BEGIN BACKTRACE -----
{"backtrace":[{"b":"400000","o":"E1D63E"},{"b":"400000","o":"DBE10E"},{"b":"400000","o":"DA0662"},{"b":"400000","o":"AF92CA"},{"b":"400000","o":"AFD557"},{"b":"400000","o":"5C2754"},{"b":"400000","o":"6FEBD5"},{"b":"400000","o":"6F567A"},{"b":"400000","o":"6F688E"},{"b":"400000","o":"6F781C"},{"b":"400000","o":"A0D94C"},{"b":"400000","o":"A8C000"},{"b":"400000","o":"A91875"},{"b":"400000","o":"DB3505"},{"b":"400000","o":"E710BD"},{"b":"7F4858B49000","o":"8182"},{"b":"7F4857B4E000","o":"FA47D"}],"processInfo":{ "mongodbVersion" : "3.0.7", "gitVersion" : "863349f8cafae87c1ab6fefdf2781107403181c5", "uname" : { "sysname" : "Linux", "release" : "3.13.0-52-generic", "version" : "#86-Ubuntu SMP Mon May 4 04:32:59 UTC 2015", "machine" : "x86_64" }, "somap" : [ { "elfType" : 2, "b" : "400000", "buildId" : "107E079760C1CD46B0AFFA2E9F8E22C45929D523" }, { "b" : "7FFDA3A0E000", "elfType" : 3, "buildId" : "F8E01F1BCB9F27FF04FC2B8C9454AD8EACB7B37C" }, { "b" : "7F4858D67000", "path" : "/lib/x86_64-linux-gnu/libbz2.so.1.0", "elfType" : 3, "buildId" : "E1031DDBFFE20367E874B7093EEC0C8D9F3B43F6" }, { "b" : "7F4858B49000", "path" : "/lib/x86_64-linux-gnu/libpthread.so.0", "elfType" : 3, "buildId" : "9318E8AF0BFBE444731BB0461202EF57F7C39542" }, { "b" : "7F4858941000", "path" : "/lib/x86_64-linux-gnu/librt.so.1", "elfType" : 3, "buildId" : "92FCF41EFE012D6186E31A59AD05BDBB487769AB" }, { "b" : "7F485873D000", "path" : "/lib/x86_64-linux-gnu/libdl.so.2", "elfType" : 3, "buildId" : "C1AE4CB7195D337A77A3C689051DABAA3980CA0C" }, { "b" : "7F4858430000", "path" : "/usr/lib/x86_64-linux-gnu/libstdc++.so.6", "elfType" : 3, "buildId" : "E21BCC61BDD4F9040CAE8DAC93E0B5AF94C61E16" }, { "b" : "7F485812A000", "path" : "/lib/x86_64-linux-gnu/libm.so.6", "elfType" : 3, "buildId" : "1D76B71E905CB867B27CEF230FCB20F01A3178F5" }, { "b" : "7F4857F13000", "path" : "/lib/x86_64-linux-gnu/libgcc_s.so.1", "elfType" : 3, "buildId" : "943D68EB5F75B05C52106573478422B3685D3BFB" }, { "b" : "7F4857B4E000", "path" : 
"/lib/x86_64-linux-gnu/libc.so.6", "elfType" : 3, "buildId" : "30C94DC66A1FE95180C3D68D2B89E576D5AE213C" }, { "b" : "7F4858F77000", "path" : "/lib64/ld-linux-x86-64.so.2", "elfType" : 3, "buildId" : "9F00581AB3C73E3AEA35995A0C50D24D59A01D47" } ] }}
 mongod(_ZN5mongo15printStackTraceERSo+0x3E) [0x121d63e]
 mongod(_ZN5mongo10logContextEPKc+0xEE) [0x11be10e]
 mongod(_ZN5mongo15invariantFailedEPKcS1_j+0xC2) [0x11a0662]
 mongod(_ZN5mongo22KVDatabaseCatalogEntry14dropCollectionEPNS_16OperationContextERKNS_10StringDataE+0x53A) [0xef92ca]
 mongod(_ZN5mongo15KVStorageEngine12dropDatabaseEPNS_16OperationContextERKNS_10StringDataE+0x2B7) [0xefd557]
 mongod(_ZN5mongo12dropDatabaseEPNS_16OperationContextEPNS_8DatabaseE+0x144) [0x9c2754]
 mongod(_ZN5mongo15CmdDropDatabase3runEPNS_16OperationContextERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x305) [0xafebd5]
 mongod(_ZN5mongo12_execCommandEPNS_16OperationContextEPNS_7CommandERKSsRNS_7BSONObjEiRSsRNS_14BSONObjBuilderEb+0x3A) [0xaf567a]
 mongod(_ZN5mongo7Command11execCommandEPNS_16OperationContextEPS0_iPKcRNS_7BSONObjERNS_14BSONObjBuilderEb+0xF7E) [0xaf688e]
 mongod(_ZN5mongo12_runCommandsEPNS_16OperationContextEPKcRNS_7BSONObjERNS_11_BufBuilderINS_16TrivialAllocatorEEERNS_14BSONObjBuilderEbi+0x66C) [0xaf781c]
 mongod(_ZN5mongo4repl21applyOperation_inlockEPNS_16OperationContextEPNS_8DatabaseERKNS_7BSONObjEbb+0x62C) [0xe0d94c]
 mongod(_ZN5mongo4repl8SyncTail9syncApplyEPNS_16OperationContextERKNS_7BSONObjEb+0x320) [0xe8c000]
 mongod(_ZN5mongo4repl14multiSyncApplyERKSt6vectorINS_7BSONObjESaIS2_EEPNS0_8SyncTailE+0x85) [0xe91875]
 mongod(_ZN5mongo10threadpool6Worker4loopERKSs+0x3E5) [0x11b3505]
 mongod(+0xE710BD) [0x12710bd]
 libpthread.so.0(+0x8182) [0x7f4858b51182]
 libc.so.6(clone+0x6D) [0x7f4857c4847d]
-----  END BACKTRACE  -----
2015-12-07T10:07:28.184+0000 I -        [repl writer worker 11]

***aborting after invariant() failure

Please fix typos

bool cappedMaxDocs() const { invariant(_isCapped); return _cappedMaxDocs; }
bool cappedMaxSize() const { invariant(_isCapped); return _cappedMaxSize; }
The two lines above are from master.
The return type appears incorrect; it may be a typo.
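With a bool return type, any nonzero limit collapses to true/1. A minimal sketch of the presumably intended signatures (CappedOptionsSketch and the member types are illustrative assumptions, not the real class; upstream MongoDB uses 64-bit counters for capped limits, and assert stands in for invariant):

```cpp
#include <cassert>
#include <cstdint>

// Sketch: numeric accessors should return the member type, not bool.
struct CappedOptionsSketch {
    bool _isCapped = true;
    int64_t _cappedMaxDocs = 1000;              // illustrative values
    int64_t _cappedMaxSize = 16 * 1024 * 1024;

    int64_t cappedMaxDocs() const { assert(_isCapped); return _cappedMaxDocs; }
    int64_t cappedMaxSize() const { assert(_isCapped); return _cappedMaxSize; }
};
```

Declared as bool, cappedMaxSize() would report 1 instead of the actual byte limit.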

SIGILL after compiling rocksdb with PORTABLE=1

Hi,

The binaries work on the machine on which they were built; however, when I run the same binaries on a different machine with the same OS and setup but a different processor, mongod crashes with an illegal instruction.

Could you please advise on this?

mongod(_ZN5mongo15printStackTraceERSo+0x32) [0x103d272]
mongod(+0xC3CB23) [0x103cb23]
mongod(+0xC3CE84) [0x103ce84]
libpthread.so.0(+0x10340) [0x7fd4afd02340]
mongod(_ZN7rocksdb18OldInfoLogFileNameERKSsmS1_S1_+0x9B) [0x155151b]
mongod(_ZN7rocksdb23CreateLoggerFromOptionsERKSsRKNS_9DBOptionsEPSt10shared_ptrINS_6LoggerEE+0x537) [0x15fc127]
mongod(_ZN7rocksdb15SanitizeOptionsERKSsRKNS_9DBOptionsE+0x777) [0x1515497]
mongod(_ZN7rocksdb6DBImplC1ERKNS_9DBOptionsERKSs+0x5D) [0x152939d]
mongod(_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKSsRKSt6vectorINS_22ColumnFamilyDescriptorESaIS7_EEPS6_IPNS_18ColumnFamilyHandleESaISD_EEPPS0_+0xA3F) [0x152ebcf]
mongod(_ZN7rocksdb2DB4OpenERKNS_7OptionsERKSsPPS0_+0x156) [0x152fff6]
mongod(_ZN5mongo11RocksEngineC2ERKSsb+0x3E4) [0xe1b0f4]
mongod(+0xA26A54) [0xe26a54]
mongod(_ZN5mongo23GlobalEnvironmentMongoD22setGlobalStorageEngineERKSs+0x32A) [0xb2b1aa]
mongod(_ZN5mongo13initAndListenEi+0x421) [0x8aebc1]
mongod(main+0x159) [0x82b659]

It appears the invalid instruction happens in this function

std::string OldInfoLogFileName(const std::string& dbname, uint64_t ts,
                               const std::string& db_path, const std::string& log_dir) {
  char buf[50];
  snprintf(buf, sizeof(buf), "%llu", static_cast<unsigned long long>(ts));

  if (log_dir.empty()) {
    return dbname + "/LOG.old." + buf;
  }

  InfoLogPrefix info_log_prefix(true, db_path);
  return log_dir + "/" + info_log_prefix.buf + ".old." + buf;
}

rocks_record_store_test fails with latest changes in master

storage_rocks_record_store_test fails with the changes made after r3.5.10.
Here is a console snippet with the failure:

2017-08-02T14:22:51.992+0300 I -        [main] going to run suite: ValidateTest
2017-08-02T14:22:51.992+0300 I -        [main] 	 going to run test: ValidateNonEmpty
2017-08-02T14:22:51.992+0300 I -        [main] Created temporary directory: /tmp/mongo-rocks-record-store-test-edd1-80c8-65ff-a441
2017-08-02T14:22:52.000+0300 E -        [main] Throwing exception: Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
2017-08-02T14:22:52.005+0300 I -        [main] FAIL: ValidateNonEmpty	Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
2017-08-02T14:22:52.005+0300 I -        [main] 	 going to run test: ValidateAndScanDataNonEmpty
2017-08-02T14:22:52.006+0300 I -        [main] Created temporary directory: /tmp/mongo-rocks-record-store-test-ba6a-2b19-97a7-9f08
2017-08-02T14:22:52.016+0300 E -        [main] Throwing exception: Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
2017-08-02T14:22:52.019+0300 I -        [main] FAIL: ValidateAndScanDataNonEmpty	Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
2017-08-02T14:22:52.022+0300 I -        [main] 	 going to run test: FullValidateNonEmptyAndScanData
2017-08-02T14:22:52.022+0300 I -        [main] Created temporary directory: /tmp/mongo-rocks-record-store-test-b13d-5357-0cf1-80ea
2017-08-02T14:22:52.035+0300 I -        [main] 	 DONE running tests
2017-08-02T14:22:52.035+0300 I -        [main] **************************************************
2017-08-02T14:22:52.035+0300 I -        [main] RecordStoreTestHarness         | tests:   53 | fails:    0 | assert calls:          0 | time secs:  0.840
2017-08-02T14:22:52.035+0300 I -        [main] RecordStore_CappedVisibility   | tests:    2 | fails:    0 | assert calls:          0 | time secs:  0.014
2017-08-02T14:22:52.035+0300 I -        [main] RocksRecordStoreTest           | tests:   10 | fails:    0 | assert calls:          0 | time secs:  0.132
2017-08-02T14:22:52.035+0300 I -        [main] ValidateTest                   | tests:    3 | fails:    2 | assert calls:          0 | time secs:  0.042
	ValidateNonEmpty	Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
	ValidateAndScanDataNonEmpty	Expected: 1 == _remain.erase(s) @src/mongo/db/storage/record_store_test_validate.h:51
2017-08-02T14:22:52.035+0300 I -        [main] TOTALS                         | tests:   68 | fails:    2 | assert calls:          0 | time secs:  1.028
2017-08-02T14:22:52.035+0300 I -        [main] Failing tests:
2017-08-02T14:22:52.035+0300 I -        [main] 	 ValidateTest/ValidateNonEmpty Failed
2017-08-02T14:22:52.035+0300 I -        [main] 	 ValidateTest/ValidateAndScanDataNonEmpty Failed
2017-08-02T14:22:52.035+0300 I -        [main] FAILURE - 2 tests in 1 suites failed

Build errors on Mac

Following the standard build instructions with Xcode 7.3.1.
Similar errors occur on master as well as the 3.2 branch and the r3.2.7 tag, when paired with the corresponding branch/tag of mongo.

src/mongo/db/modules/rocks/src/rocks_recovery_unit.cpp:65:31: error: moving a temporary object prevents copy elision [-Werror,-Wpessimizing-move]
    _nextPrefix(std::move(rocksGetNextPrefix(_prefix))),
src/mongo/db/modules/rocks/src/rocks_recovery_unit.cpp:65:31: note: remove std::move call here
Compiling build/opt/mongo/db/modules/rocks/src/rocks_transaction.o
src/mongo/db/modules/rocks/src/rocks_record_store.cpp:206:59: error: moving a temporary object prevents copy elision [-Werror,-Wpessimizing-move]
    ? new RocksOplogKeyTracker(std::move(rocksGetNextPrefix(_prefix)))
src/mongo/db/modules/rocks/src/rocks_record_store.cpp:206:59: note: remove std::move call here
src/mongo/db/modules/rocks/src/rocks_index.cpp:535:38: error: moving a temporary object prevents copy elision [-Werror,-Wpessimizing-move]
    std::string nextPrefix = std::move(rocksGetNextPrefix(_prefix));
src/mongo/db/modules/rocks/src/rocks_index.cpp:535:38: note: remove std::move call here
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_recovery_unit.o] Error 1
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_record_store.o] Error 1
scons: *** [build/opt/mongo/db/modules/rocks/src/rocks_index.o] Error 1
scons: building terminated because of errors.
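The warning means std::move was applied to a function's return value (a prvalue), which blocks copy elision. A sketch of the compiler-suggested fix, with rocksGetNextPrefix stood in by a local helper (the real helper lives in mongo-rocks; this version is illustrative):

```cpp
#include <string>

// Stand-in for the real rocksGetNextPrefix helper: bump the last byte.
static std::string rocksGetNextPrefix(const std::string& prefix) {
    std::string next = prefix;
    if (!next.empty()) ++next.back();
    return next;
}

static std::string nextPrefixOf(const std::string& prefix) {
    // Before: std::string next = std::move(rocksGetNextPrefix(prefix));
    //   -> -Wpessimizing-move: moving the temporary prevents copy elision.
    // After: initialize directly from the prvalue; elision applies.
    return rocksGetNextPrefix(prefix);
}
```

Dropping the std::move at each of the three sites listed above is exactly what clang's "note: remove std::move call here" suggests.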

[err] upgrading: mongo 3.2.9 -> 3.2.11 & rocksdb 4.9 -> 4.11.2

Hi,

I am getting an error while upgrading mongodb to 3.2.11, mongo-rocks to 3.2.11 & rocksdb to 4.11.2.

I am running the following script:

#!/bin/sh

# miscellaneous
apt-get update
apt-get install -y build-essential git binutils

# RocksDB
apt-get update
apt-get install -y libbz2-dev libsnappy-dev zlib1g-dev libzlcore-dev

git clone https://github.com/facebook/rocksdb.git
cd rocksdb
git checkout tags/v4.11.2
CXXFLAGS="-flto -Os -s" make -j$(nproc) shared_lib
make install

# MongoDB
apt-get update
apt-get install -y scons
git clone https://github.com/mongodb-partners/mongo-rocks.git /mongo-rocks
cd /mongo-rocks
git checkout tags/r3.2.11
git clone https://github.com/mongodb/mongo.git /mongo
cd /mongo
git checkout tags/r3.2.11
mkdir -p src/mongo/db/modules/
ln -sf /mongo-rocks src/mongo/db/modules/rocks
CXXFLAGS="-flto -Os -s" scons CPPPATH=/usr/local/include LIBPATH=/usr/local/lib -j$(nproc) --release --prefix=/usr --opt core  install

# purge
strip /usr/bin/mongo
strip /usr/bin/mongod
strip /usr/bin/mongos
strip /usr/bin/mongoperf
apt-get -y --purge autoremove build-essential git scons binutils
rm -rf /rocksdb
rm -rf /mongo-rocks
rm -rf /mongo
rm -f /usr/local/lib/librocksdb.a

And I am getting the following error:

src/mongo/db/modules/rocks/src/rocks_engine.cpp: In member function 'virtual mongo::RecordStore* mongo::RocksEngine::getRecordStore(mongo::OperationContext*, mongo::StringData, mongo::StringData, const mongo::CollectionOptions&)':
src/mongo/db/modules/rocks/src/rocks_engine.cpp:355:73: error: invalid new-expression of abstract class type 'mongo::RocksRecordStore'
                       options.cappedMaxDocs ? options.cappedMaxDocs : -1)
                                                                         ^
In file included from src/mongo/db/modules/rocks/src/rocks_engine.cpp:70:0:
src/mongo/db/modules/rocks/src/rocks_record_store.h:88:11: note:   because the following virtual functions are pure within 'mongo::RocksRecordStore':
     class RocksRecordStore : public RecordStore {
           ^
In file included from src/mongo/db/catalog/index_catalog.h:39:0,
                 from src/mongo/db/catalog/collection.h:43,
                 from src/mongo/db/index/index_descriptor.h:36,
                 from src/mongo/db/modules/rocks/src/rocks_engine.cpp:57:
src/mongo/db/storage/record_store.h:588:18: note: 	virtual void mongo::RecordStore::waitForAllEarlierOplogWritesToBeVisible(mongo::OperationContext*) const
     virtual void waitForAllEarlierOplogWritesToBeVisible(OperationContext* txn) const = 0;
                  ^
src/mongo/db/modules/rocks/src/rocks_engine.cpp:357:62: error: invalid new-expression of abstract class type 'mongo::RocksRecordStore'
                                        _getIdentPrefix(ident));
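The error itself is generic C++: a class that leaves a base's pure virtual unimplemented is abstract, and new on it fails to compile, which is exactly what scons reports for RocksRecordStore. A self-contained sketch (the class names mirror the real ones but are illustrative):

```cpp
// Base with a pure virtual, as record_store.h declares
// waitForAllEarlierOplogWritesToBeVisible() in this mongo tag.
struct RecordStoreSketch {
    virtual ~RecordStoreSketch() {}
    virtual void waitForAllEarlierOplogWritesToBeVisible() const = 0;
};

// Without this override the class stays abstract and
// `new RocksRecordStoreSketch()` is "invalid new-expression of abstract class".
struct RocksRecordStoreSketch : RecordStoreSketch {
    void waitForAllEarlierOplogWritesToBeVisible() const override {}
};
```

The practical implication is a version mismatch: the r3.2.11 mongo tree declares this pure virtual, so a mongo-rocks revision that implements it is needed to pair with that tree.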

Data loss -- Fsync parent directory on file creation and rename

I am running a three-node MongoDB cluster, using MongoDB 3.0.11 with RocksDB as the storage engine. When I insert a new item into the store, I set w=3, j=True. Running strace on mongod shows these file-system operations on the node:

creat("data_dir/db/000004.sst")
append("data_dir/db/000004.sst")
fdatasync("data_dir/db/000004.sst")
creat("data_dir/db/MANIFEST-000005")
append("data_dir/db/MANIFEST-000005")
fdatasync("data_dir/db/MANIFEST-000005")
creat("data_dir/db/000005.dbtmp")
append("data_dir/db/000005.dbtmp")
fdatasync("data_dir/db/000005.dbtmp")
rename(source="data_dir/db/000005.dbtmp", dest="data_dir/db/CURRENT")
unlink("data_dir/db/MANIFEST-000001")
creat("data_dir/db/journal/000006.log")
unlink("data_dir/db/journal/000003.log")
fsync("data_dir/db")
trunc("data_dir/mongod.lock")
----client insert request----
append("data_dir/db/journal/000006.log")
----client ack----

When a new file is created or a file is renamed, the parent directory needs to be explicitly fsynced to persist the new file. Please see https://www.quora.com/Linux/When-should-you-fsync-the-containing-directory-in-addition-to-the-file-itself and http://research.cs.wisc.edu/wind/Publications/alice-osdi14.pdf. The log file and any further appends to it may be lost if the node crashes before the new directory entry is persisted. If the crash happens on two or more nodes of a three-node cluster, one of those nodes could become the leader and global data loss is possible. We have reproduced this particular data-loss issue using our testing framework.

If the sst file or the manifest file goes missing after a subsequent crash because the directory was not fsynced, the node fails to start again, which could leave the cluster unavailable for quorum writes. As a fix, it would be safe to fsync the parent directory on creat or rename of files.
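The suggested fix can be sketched with plain POSIX calls: after creating or renaming a file, open the containing directory and fsync it so the directory entry itself becomes durable (the helper name and paths are illustrative):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <string>

// fsync a directory so that newly created or renamed entries inside it
// survive a crash; returns true on success.
static bool fsyncParentDir(const std::string& dir) {
    int fd = ::open(dir.c_str(), O_RDONLY);
    if (fd < 0) {
        return false;
    }
    bool ok = (::fsync(fd) == 0);
    ::close(fd);
    return ok;
}
```

In the trace above this would run, for example, right after rename(source="data_dir/db/000005.dbtmp", dest="data_dir/db/CURRENT") and after each creat, before the client ack.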

start mongo with rocksdb error

I followed all the steps, but when I start mongod with --storageEngine=rocksdb it fails with an error.
Command: ./mongod --dbpath ~/mongo/data --storageEngine=rocksdb

Here are the details:

2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] MongoDB starting : pid=13857 port=27017 dbpath=/home/zhaoj/mongo/data 64-bit host=zhaoj
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] db version v3.5.9-130-g690302a
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] git version: 690302a49b61d5be3f4dcc285921eb362648055c
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] allocator: tcmalloc
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] modules: none
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] build environment:
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] distarch: x86_64
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] target_arch: x86_64
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] options: { storage: { dbPath: "/home/zhaoj/mongo/data", engine: "rocksdb" } }
2017-07-03T15:20:57.456+0800 I STORAGE [initandlisten] exception in initAndListen: 18656 Cannot start server with an unknown storage engine: rocksdb, terminating
2017-07-03T15:20:57.456+0800 I NETWORK [initandlisten] shutdown: going to close listening sockets...
2017-07-03T15:20:57.456+0800 I NETWORK [initandlisten] shutdown: going to flush diaglog...
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] now exiting
2017-07-03T15:20:57.456+0800 I CONTROL [initandlisten] shutting down with code:100

My mongo version is 3.4 and the OS is Ubuntu 16.04.

High cpu usage

Hi,

Forgive me if this is not the correct place to post this. I'm trying mongo-rocks to see how it performs, and have set up a server with rocksdb as the storage engine to be a secondary of a replicaset. The initial sync took a while but finished successfully. However, now that the server is in the 'SECONDARY' state, it doesn't manage to stay in sync - the delay (difference between the optimeDate and the lastHeartbeat date) keeps growing. The CPU usage is always at close to 100%, while the other slaves in the replica (using mongodb 2.6.4, with the 'old' storage engine) stay up-to-date, with a CPU usage at around 13%.
While I thought it could be due to compression (trading cpu cycles for storage), the difference just seems too big.
All instances are on m3.large (2 cores) on AWS.
The compression algorithm is snappy.
The biggest collections are capped collections (don't know if it could have an impact).
Happy to provide any additional information you may need!

Allow setting cache size to < 1GB

First of all, thank you for an awesome library. :)

I am running MongoRocks in memory-constrained environments, where IMO RocksDB performs better than WiredTiger, which tends to freeze and hang frequently. However, I cannot set the cache size to, say, 0.256GB the way I can in WiredTiger.

Is there a possibility of making this happen?

Regards
Debdatta Basu

What happens if a snapshot is released but it is still in use by another thread?

If MongoDB enables readMajority, the RocksSnapshotManager cleans up unneeded snapshots via a background thread; see:

void RocksSnapshotManager::cleanupUnneededSnapshots();

RocksSnapshotManager::SnapshotHolder::~SnapshotHolder() {
    if (snapshot != nullptr) {
        db->ReleaseSnapshot(snapshot);
    }
}

The code above runs in MongoDB's snapshot thread.
The SnapshotManager keeps a committed snapshot that advances over time, so the value returned by getCommittedSnapshot() changes.

Each time a RecoveryUnit begins a read transaction with readMajority, it fetches the current committed snapshot from the SnapshotManager; that snapshot may then be released by cleanupUnneededSnapshots():

    const rocksdb::Snapshot* RocksRecoveryUnit::snapshot() {
        if (_readFromMajorityCommittedSnapshot) {
            if (_snapshotHolder.get() == nullptr) {
                _snapshotHolder = _snapshotManager->getCommittedSnapshot();
            }
            return _snapshotHolder->snapshot;
        }
        if (!_snapshot) {
            // RecoveryUnit might be used for writing, so we need to call recordSnapshotId().
            // Order of operations here is important. It needs to be synchronized with
            // _db->Write() and _transaction.commit()
            _transaction.recordSnapshotId();
            _snapshot = _db->GetSnapshot();
        }
        return _snapshot;
    }

    // The snapshot is then used like this:
    options.snapshot = snapshot();
    return _db->Get(options, handle, key, value);

void DBImpl::ReleaseSnapshot(const Snapshot* s) {
  const SnapshotImpl* casted_s = reinterpret_cast<const SnapshotImpl*>(s);
  {
    InstrumentedMutexLock l(&mutex_);
    snapshots_.Delete(casted_s);
  }
  delete casted_s;
} 

I suspect this problem has simply never been hit in practice because few deployments use readMajority.
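If getCommittedSnapshot() hands out a std::shared_ptr to the SnapshotHolder (as the _snapshotHolder member above suggests), the holder keeps the RocksDB snapshot alive until the last reader drops it, so the background cleanup cannot release a snapshot still in use. A self-contained sketch of that scheme (all names illustrative; the destructor stands in for db->ReleaseSnapshot()):

```cpp
#include <memory>

// Counts calls to the stand-in for db->ReleaseSnapshot().
static int g_released = 0;

struct SnapshotHolderSketch {
    ~SnapshotHolderSketch() { ++g_released; }  // "releases" the snapshot
};

struct SnapshotManagerSketch {
    std::shared_ptr<SnapshotHolderSketch> _committed =
        std::make_shared<SnapshotHolderSketch>();

    std::shared_ptr<SnapshotHolderSketch> getCommittedSnapshot() { return _committed; }

    // Replaces the committed snapshot; the old one is released only
    // once every reader's shared_ptr copy is gone.
    void cleanupUnneededSnapshots() {
        _committed = std::make_shared<SnapshotHolderSketch>();
    }
};
```

Under this scheme a reader holding the shared_ptr is safe across cleanupUnneededSnapshots(); whether the real cleanup path preserves that invariant on every branch is exactly what this issue questions.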

Backup and restore

Hi,

I have been playing with MongoDB+RocksDB and would like to know a little more about the backup/restore process?

(a) Are backups executed via db.adminCommand({setParameter:1, rocksdbBackup: "/var/lib/mongodb/backup/1"}) incremental? I.e., if I specify the same path every time, does it just back up the changes since the last backup? I assume this is what checkpoint->CreateCheckpoint(path) means.

(b) Is there currently a way to restore backups via db.adminCommand? I wasn't able to find any calls within rocks_engine.cpp to backup_engine->RestoreDBFromLatestBackup?

(c) If there is no way to restore a backup via MongoDB, how would you do it manually via the filesystem/bash shell?

Thanks!

Ryan

Feature request: support full-precision, machine-readable numbers in server status output

Hey guys,

In a separate Prometheus MongoDB exporter project (https://github.com/Percona-Lab/prometheus_mongodb_exporter) we use some golang code to present the MongoRocks 'db.serverStatus().rocksdb' output as Prometheus metrics (for monitoring/graphing/etc). It is mostly working but I am running into 2 x major issues with the format of the stats.

Unfortunately, since the RocksDB/MongoRocks output is in "human-readable" format, the string-parsing code I wrote to parse the stats output currently works but could easily break if the output changes, making this a brittle, hacky solution. I would therefore like to request that these metrics be presented in a regular, nested, machine-readable data structure. This output format could be optional, though making it the default would align better with the rest of serverStatus.

Secondly, I notice the human-readable format rounds numbers heavily, making it difficult or impossible to build time series on most RocksDB counters because precision is lost. An example:

A counter of 10,000,000 will become the string "10M" in the output (no decimal precision), meaning a time series will show no change in a given counter until there is 1 million changes, ie: the number goes from "10M" to "11M".

Of course, this causes any time series to show no activity and then one massive 1-million spike, which is essentially unusable. This is tracked in an issue on our github: percona/mongodb_exporter#24. As with my first issue, a machine-readable format with full-precision ints or floats would resolve this for me, even if the number is an estimation.

Cheers,

Tim

scons error: 'virtual int mongo::RocksEngine::flushAllFiles(mongo::OperationContext*, bool)' marked 'override', but does not override

I followed all the steps, but scons fails with the following error:
src/mongo/db/modules/rocks/src/rocks_engine.h:114:21: error:

'virtual int mongo::RocksEngine::flushAllFiles(mongo::OperationContext*, bool)' marked 'override', but does not override
virtual int flushAllFiles(OperationContext* opCtx, bool sync) override;
^
In file included from src/mongo/db/modules/rocks/src/rocks_engine.h:47:0,
from src/mongo/db/modules/rocks/src/rocks_init.cpp:39:
src/mongo/db/storage/kv/kv_engine.h:94:17: error: 'virtual int mongo::KVEngine::flushAllFiles(bool)' was hidden [-Werror=overloaded-virtual]
virtual int flushAllFiles(bool sync) {
^
In file included from src/mongo/db/modules/rocks/src/rocks_init.cpp:39:0:
src/mongo/db/modules/rocks/src/rocks_engine.h:114:21: error: by 'virtual int mongo::RocksEngine::flushAllFiles(mongo::OperationContext*, bool)' [-Werror=overloaded-virtual]
virtual int flushAllFiles(OperationContext* opCtx, bool sync) override;

I checked out mongo 3.4.2. I understand that the mongo source does not match this mongo-rocks version, but I do not know how to resolve it.
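The mechanics can be sketched without the real headers: override only compiles when the signature exactly matches a base virtual. In mongo 3.4.2, KVEngine::flushAllFiles takes just (bool), while this mongo-rocks checkout declares a version taking (OperationContext*, bool), which overrides nothing. The sketch classes below are illustrative:

```cpp
// Sketch: `override` requires an exact signature match with a base virtual.
struct KVEngineSketch {
    virtual ~KVEngineSketch() {}
    virtual int flushAllFiles(bool sync) { (void)sync; return 0; }
};

struct RocksEngineSketch : KVEngineSketch {
    // Matching the base signature compiles. Adding an extra
    // OperationContext*-style parameter would not override it, and with
    // `override` (plus -Werror=overloaded-virtual) the build fails as above.
    int flushAllFiles(bool sync) override { (void)sync; return 1; }
};
```

The mismatch signals that this mongo-rocks revision targets a newer mongo whose KVEngine takes an OperationContext*; pairing mongo 3.4.2 with the mongo-rocks tag released for 3.4.2 is the usual resolution.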

drop collection or database does not free up disk space

We have a large MongoDB database (about 400GB of data files per day), MongoDB 3.2.6/3.2.7, RocksDB engine, on Red Hat 6.3.
After dropping a collection or database, the disk space used on the machine is not reclaimed.
After rm'ing the sst files that I believe should have been dropped by RocksDB, mongodb still works fine.
By the way, the sizes reported by show dbs do not correspond to du on Linux.
Best Regards

SERVER-31681 compile_all

Looks like 783aae0 might have introduced a regression in compile_all (note that this has now pre-empted the existing failure catalogued in #106).

[2017/10/23 11:35:47.932] /opt/mongodbtoolchain/v2/bin/g++ -o build/cached/mongo/unittest/bson_test_util.o -c -Woverloaded-virtual -Wno-maybe-uninitialized -std=c++14 -fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fno-builtin-memcmp -DPCRE_STATIC -DNDEBUG -DBOOST_SYSTEM_NO_DEPRECATED -DBOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS -Isrc/third_party/pcre-8.41 -Isrc/third_party/boost-1.60.0 -I/data/mci/aea5e5ca0666b1c8eb4055582387bb3a/rocksdb/include -Ibuild/cached -Isrc src/mongo/unittest/bson_test_util.cpp
[2017/10/23 11:35:49.169] src/mongo/db/modules/rocksdb/src/rocks_server_status.cpp: In member function 'virtual mongo::BSONObj mongo::RocksServerStatusSection::generateSection(mongo::OperationContext*, const mongo::BSONElement&) const':
[2017/10/23 11:35:49.169] src/mongo/db/modules/rocksdb/src/rocks_server_status.cpp:73:29: error: 'txn' was not declared in this scope
[2017/10/23 11:35:49.169]          Lock::GlobalLock lk(txn->lockState(), LockMode::MODE_IS, UINT_MAX);
[2017/10/23 11:35:49.169]                              ^
[2017/10/23 11:35:51.145] /opt/mongodbtoolchain/v2/bin/g++ -o build/cached/mongo/unittest/death_test.o -c -Woverloaded-virtual -Wno-maybe-uninitialized -std=c++14 -fno-omit-frame-pointer -fno-strict-aliasing -ggdb -pthread -Wall -Wsign-compare -Wno-unknown-pragmas -Winvalid-pch -Werror -O2 -Wno-unused-local-typedefs -Wno-unused-function -Wno-deprecated-declarations -Wno-unused-but-set-variable -Wno-missing-braces -fno-builtin-memcmp -DPCRE_STATIC -DNDEBUG -DBOOST_SYSTEM_NO_DEPRECATED -DBOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS -Isrc/third_party/pcre-8.41 -Isrc/third_party/boost-1.60.0 -I/data/mci/aea5e5ca0666b1c8eb4055582387bb3a/rocksdb/include -Ibuild/cached -Isrc src/mongo/unittest/death_test.cpp
[2017/10/23 11:35:51.524] scons: *** [build/cached/mongo/db/modules/rocksdb/src/rocks_server_status.o] Error 1
[2017/10/23 11:36:05.509] scons: building terminated because of errors.
