
classic-level

An abstract-level database backed by LevelDB. The successor to leveldown, with built-in encodings, sublevels, events, promises and support for Uint8Array. If you are upgrading, please see UPGRADING.md.

📌 Which module should I use? What is abstract-level? Head over to the FAQ.



Usage

const { ClassicLevel } = require('classic-level')

// Create a database
const db = new ClassicLevel('./db', { valueEncoding: 'json' })

// Add an entry with key 'a' and value 1
await db.put('a', 1)

// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])

// Get value of key 'a': 1
const value = await db.get('a')

// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
  console.log(value) // 2
}

All asynchronous methods also support callbacks.

Callback example
db.put('example', { hello: 'world' }, (err) => {
  if (err) throw err

  db.get('example', (err, value) => {
    if (err) throw err
    console.log(value) // { hello: 'world' }
  })
})

Usage from TypeScript requires generic type parameters.

TypeScript example
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to ClassicLevel<string, string>.
const db = new ClassicLevel<string, any>('./db', { valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })

Supported Platforms

We aim to support at least Active LTS and Current Node.js releases, Electron 5.0.0, as well as any future Node.js and Electron releases thanks to Node-API.

The classic-level npm package ships with prebuilt binaries for popular 64-bit platforms as well as ARM, M1, Android, Alpine (musl), Windows 32-bit and Linux flavors with an old glibc (Debian 8, Ubuntu 14.04, RHEL 7, CentOS 7), and is known to work on:

  • Linux, including ARM platforms such as Raspberry Pi and Kindle
  • Mac OS (10.7 and later)
  • Windows
  • FreeBSD

When installing classic-level, node-gyp-build will check if a compatible binary exists and fall back to compiling from source if it doesn't. In that case you'll need a valid node-gyp installation.

If you don't want to use the prebuilt binary for the platform you are installing on, specify the --build-from-source flag when you install. One of:

npm install --build-from-source
npm install classic-level --build-from-source

If you are working on classic-level itself and want to recompile the C++ code, run npm run rebuild.

Note: the Android prebuilds are made for and built against Node.js core rather than the nodejs-mobile fork.

API

The API of classic-level follows that of abstract-level with a few additional options and methods specific to LevelDB. The documentation below covers it all except for Encodings, Events and Errors which are exclusively documented in abstract-level.

An abstract-level and thus classic-level database is at its core a key-value database. A key-value pair is referred to as an entry here and typically returned as an array, comparable to Object.entries().

db = new ClassicLevel(location[, options])

Create a database or open an existing database. The location argument must be a directory path (relative or absolute) where LevelDB will store its files. If the directory does not yet exist (and options.createIfMissing is true) it will be created recursively. The optional options object may contain:

  • keyEncoding (string or object, default 'utf8'): encoding to use for keys
  • valueEncoding (string or object, default 'utf8'): encoding to use for values.

See Encodings for a full description of these options. Other options (except passive) are forwarded to db.open() which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.

db.location

Read-only getter that returns the location string that was passed to the constructor (as-is).

db.status

Read-only getter that returns a string reflecting the current state of the database:

  • 'opening' - waiting for the database to be opened
  • 'open' - successfully opened the database
  • 'closing' - waiting for the database to be closed
  • 'closed' - successfully closed the database.

db.open([options][, callback])

Open the database. The callback function will be called with no arguments when successfully opened, or with a single error argument if opening failed. The database has an exclusive lock (on disk): if another process or instance has already opened the underlying LevelDB store at the given location then opening will fail with error code LEVEL_LOCKED. If no callback is provided, a promise is returned. Options passed to open() take precedence over options passed to the database constructor.

The optional options object may contain:

  • createIfMissing (boolean, default: true): If true, create an empty database if one doesn't already exist. If false and the database doesn't exist, opening will fail.
  • errorIfExists (boolean, default: false): If true and the database already exists, opening will fail.
  • passive (boolean, default: false): Wait for, but do not initiate, opening of the database.
  • multithreading (boolean, default: false): Allow multiple threads to access the database. This is only relevant when using worker threads.

For advanced performance tuning, the options object may also contain the following. Modify these options only if you can prove actual benefit for your particular application.

  • compression (boolean, default: true): Unless set to false, all compressible data will be run through the Snappy compression algorithm before being stored. Snappy is very fast so leave this on unless you have good reason to turn it off.

  • cacheSize (number, default: 8 * 1024 * 1024): The size (in bytes) of the in-memory LRU cache with frequently used uncompressed block contents.

  • writeBufferSize (number, default: 4 * 1024 * 1024): The maximum size (in bytes) of the log (in memory and stored in the .log file on disk). Beyond this size, LevelDB will convert the log data to the first level of sorted table files. From LevelDB documentation:

    Larger values increase performance, especially during bulk loads. Up to two write buffers may be held in memory at the same time, so you may wish to adjust this parameter to control memory usage. Also, a larger write buffer will result in a longer recovery time the next time the database is opened.

  • blockSize (number, default: 4096): The approximate size of the blocks that make up the table files. The size relates to uncompressed data (hence "approximate"). Blocks are indexed in the table file and entry-lookups involve reading an entire block and parsing to discover the required entry.

  • maxOpenFiles (number, default: 1000): The maximum number of files that LevelDB is allowed to have open at a time. If your database is likely to have a large working set, you may increase this value to prevent file descriptor churn. To calculate the number of files required for your working set, divide your total data size by maxFileSize.

  • blockRestartInterval (number, default: 16): The number of entries before restarting the "delta encoding" of keys within blocks. Each "restart" point stores the full key for the entry; between restarts, the common prefix of the keys for those entries is omitted. Restarts are similar to the concept of keyframes in video encoding and are used to minimise the amount of space required to store keys. This is particularly helpful when using deep namespacing / prefixing in your keys.

  • maxFileSize (number, default: 2 * 1024 * 1024): The maximum amount of bytes to write to a file before switching to a new one. From LevelDB documentation:

    If your filesystem is more efficient with larger files, you could consider increasing the value. The downside will be longer compactions and hence longer latency / performance hiccups. Another reason to increase this parameter might be when you are initially populating a large database.

It's generally not necessary to call open() because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, that would otherwise not surface until another method like db.get() is called. It's also possible to reopen the database after it has been closed with close(). Once open() has then been called, any read & write operations will again be queued internally until opening has finished.

The open() and close() methods are idempotent. If the database is already open, the callback will be called in a next tick. If opening is already in progress, the callback will be called when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if close() is called after open(), the database will be closed once opening has finished and the prior open() call will receive an error.
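
For example, to surface a lock error early rather than on a later db.get() call (a minimal sketch; depending on the abstract-level version, the LEVEL_LOCKED error may be the rejection itself or its cause):

const db = new ClassicLevel('./db')

try {
  await db.open()
} catch (err) {
  // If another process holds the lock, the LevelDB error is expected
  // to carry (or be wrapped around) the code LEVEL_LOCKED
  if (err.code === 'LEVEL_LOCKED' || err.cause?.code === 'LEVEL_LOCKED') {
    console.error('Database is locked by another process or instance')
  }
  throw err
}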

db.close([callback])

Close the database. The callback function will be called with no arguments if closing succeeded or with a single error argument if closing failed. If no callback is provided, a promise is returned.

A database has associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call db.close() to free up resources. The underlying LevelDB store cannot be opened by multiple classic-level instances or processes simultaneously.

After db.close() has been called, no further read & write operations are allowed unless and until db.open() is called again. For example, db.get(key) will yield an error with code LEVEL_DATABASE_NOT_OPEN. Any unclosed iterators or chained batches will be closed by db.close() and can then no longer be used even when db.open() is called again.

A classic-level database waits for any pending operations to finish before closing. For example:

db.put('key', 'value', function (err) {
  // This happens first
})

db.close(function (err) {
  // This happens second
})

db.supports

A manifest describing the features supported by this database. Might be used like so:

if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}

db.get(key[, options][, callback])

Get a value from the database by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to decode the value.
  • fillCache (boolean, default: true): Unless set to false, LevelDB will fill its in-memory LRU cache with data that was read.

The callback function will be called with an error if the operation failed. If the key was not found, the error will have code LEVEL_NOT_FOUND. If successful the first argument will be null and the second argument will be the value. If no callback is provided, a promise is returned.

A classic-level database supports snapshots (as indicated by db.supports.snapshots) which means db.get() should read from a snapshot of the database, created at the time db.get() was called. This means it should not see the data of simultaneous write operations. However, there's currently a small delay before the snapshot is created.
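
For example, a common pattern is to treat a missing key as a default value (a minimal sketch; the key is hypothetical):

let value = null // Default for when the key does not exist

try {
  value = await db.get('settings')
} catch (err) {
  if (err.code !== 'LEVEL_NOT_FOUND') throw err
}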

db.getMany(keys[, options][, callback])

Get multiple values from the database by an array of keys. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the keys.
  • valueEncoding: custom value encoding for this operation, used to decode values.
  • fillCache: same as described for db.get().

The callback function will be called with an error if the operation failed. If successful the first argument will be null and the second argument will be an array of values with the same order as keys. If a key was not found, the relevant value will be undefined. If no callback is provided, a promise is returned.

A classic-level database supports snapshots (as indicated by db.supports.snapshots) which means db.getMany() should read from a snapshot of the database, created at the time db.getMany() was called. This means it should not see the data of simultaneous write operations. However, there's currently a small delay before the snapshot is created.
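
For example (a minimal sketch with hypothetical keys):

await db.put('a', '1')
await db.put('c', '3')

// Yields ['1', undefined, '3'] because key 'b' was not found
const values = await db.getMany(['a', 'b', 'c'])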

db.put(key, value[, options][, callback])

Add a new entry or overwrite an existing entry. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.
  • sync (boolean, default: false): if set to true, LevelDB will perform a synchronous write of the data although the operation will be asynchronous as far as Node.js or Electron is concerned. Normally, LevelDB passes the data to the operating system for writing and returns immediately. In contrast, a synchronous write will use fsync() or equivalent, so the put() call will not complete until the data is actually on disk. Synchronous writes are significantly slower than asynchronous writes.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
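
For example, a critical write could opt in to a synchronous write (a sketch; whether the durability is worth the slowdown depends on the application):

// Does not complete until the data is actually on disk
await db.put('checkpoint', '42', { sync: true })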

db.del(key[, options][, callback])

Delete an entry by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • sync (boolean, default: false): same as described for db.put()

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.

db.batch(operations[, options][, callback])

Perform multiple put and/or del operations in bulk. The operations argument must be an array containing a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation.

Each operation must be an object with at least a type property set to either 'put' or 'del'. If the type is 'put', the operation must have key and value properties. It may optionally have keyEncoding and / or valueEncoding properties to encode keys or values with a custom encoding for just that operation. If the type is 'del', the operation must have a key property and may optionally have a keyEncoding property.

An operation of either type may also have a sublevel property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. Keys and values will be encoded by the sublevel, to the same effect as a sublevel.batch(..) call. In the following example, the first value will be encoded with 'json' rather than the default encoding of db:

const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')

await db.batch([{
  type: 'put',
  sublevel: people,
  key: '123',
  value: {
    name: 'Alice'
  }
}, {
  type: 'put',
  sublevel: nameIndex,
  key: 'Alice',
  value: '123'
}])

The optional options object may contain:

  • keyEncoding: custom key encoding for this batch, used to encode keys.
  • valueEncoding: custom value encoding for this batch, used to encode values.
  • sync (boolean, default: false): same as described for db.put().

Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the 'utf8' encoding and the second with 'json'.

await db.batch([
  { type: 'put', key: 'a', value: 'foo' },
  { type: 'put', key: 'b', value: 123, valueEncoding: 'json' }
], { valueEncoding: 'utf8' })

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

chainedBatch = db.batch()

Create a chained batch, when batch() is called with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations. Depending on how it's used, it is possible to obtain greater performance with this form of batch().

await db.batch()
  .del('bob')
  .put('alice', 361)
  .put('kim', 220)
  .write()

iterator = db.iterator([options])

Create an iterator. The optional options object may contain the following range options to control the range of entries to be iterated:

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
  • limit (number, default: Infinity): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be returned instead of the lowest keys.

The gte and lte range options take precedence over gt and lt respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless reverse is true). In addition to range options, the options object may contain:

  • keys (boolean, default: true): whether to return the key of each entry. If set to false, the iterator will yield keys that are undefined. Prefer to use db.keys() instead.
  • values (boolean, default: true): whether to return the value of each entry. If set to false, the iterator will yield values that are undefined. Prefer to use db.values() instead.
  • keyEncoding: custom key encoding for this iterator, used to encode range options, to encode seek() targets and to decode keys.
  • valueEncoding: custom value encoding for this iterator, used to decode values.
  • fillCache (boolean, default: false): if set to true, LevelDB will fill its in-memory LRU cache with data that was read.
  • highWaterMarkBytes (number, default: 16 * 1024): limit the amount of data that the iterator will hold in memory. Explained below.

About high water

While iterator.nextv(size) is reading entries from LevelDB into memory, it sums up the byte length of those entries. If and when that sum has exceeded highWaterMarkBytes, reading will stop. If nextv(2) would normally yield two entries but the first entry is too large, then only one entry will be yielded. More nextv(size) calls must then be made to get the remaining entries.

If memory usage is less of a concern, increasing highWaterMarkBytes can increase the throughput of nextv(size). If set to 0 then nextv(size) will never yield more than one entry, as highWaterMarkBytes will be exceeded on each call. It can not be set to Infinity. On key- and value iterators (see below) it applies to the byte length of keys or values respectively, rather than the combined byte length of keys and values.

Optimal performance can be achieved by setting highWaterMarkBytes to at least size multiplied by the expected byte length of an entry, ensuring that size is always met. In other words, that nextv(size) will not stop reading before size entries have been read into memory. If the iterator is wrapped in a Node.js stream or Web Stream then the size parameter is dictated by the stream's highWaterMark option. For example:

const { EntryStream } = require('level-read-stream')

// If an entry is 50 bytes on average
const stream = new EntryStream(db, {
  highWaterMark: 1000,
  highWaterMarkBytes: 1000 * 50
})

Side note: the "watermark" analogy makes more sense in Node.js streams because its internal highWaterMark can grow, indicating the highest that the "water" has been. In a classic-level iterator however, highWaterMarkBytes is fixed once set. Getting exceeded does not change it.

The highWaterMarkBytes option is also applied to an internal cache that classic-level employs for next() and for await...of. When next() is called, that cache is populated with at most 1000 entries, or less than that if highWaterMarkBytes is exceeded by the total byte length of entries. To avoid reading too eagerly, the cache is not populated on the first next() call, or the first next() call after a seek(). Only on subsequent next() calls.

keyIterator = db.keys([options])

Create a key iterator, having the same interface as db.iterator() except that it yields keys instead of entries. If only keys are needed, using db.keys() may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.keys() does not take keys, values and valueEncoding options.

// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()

valueIterator = db.values([options])

Create a value iterator, having the same interface as db.iterator() except that it yields values instead of entries. If only values are needed, using db.values() may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.values() does not take keys and values options. Note that it does take a keyEncoding option, relevant for the encoding of range options.

// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
  console.log(value)
}

// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()

db.clear([options][, callback])

Delete all entries or a range. Not guaranteed to be atomic. Accepts the following options (with the same rules as on iterators):

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to delete the last N entries.
  • limit (number, default: Infinity): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be deleted instead of the lowest keys.
  • keyEncoding: custom key encoding for this operation, used to encode range options.

The gte and lte range options take precedence over gt and lt respectively. If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
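
For example (a sketch with hypothetical keys):

// Delete entries with keys greater than 'a' and at most 'x'
await db.clear({ gt: 'a', lte: 'x' })

// Delete the last 10 entries (those with the highest keys)
await db.clear({ reverse: true, limit: 10 })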

sublevel = db.sublevel(name[, options])

Create a sublevel that has the same interface as db (except for additional classic-level methods like db.approximateSize()) and prefixes the keys of operations before passing them on to db. The name argument is required and must be a string.

const example = db.sublevel('example')

await example.put('hello', 'world')
await db.put('a', '1')

// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
  console.log([key, value])
}

Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and realtime! Each sublevel is an AbstractLevel instance with its own keyspace, events and encodings. For example, it's possible to have one sublevel with 'buffer' keys and another with 'utf8' keys. The same goes for values. Like so:

db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })

An own keyspace means that sublevel.iterator() only includes entries of that sublevel, sublevel.clear() will only delete entries of that sublevel, and so forth. Range options get prefixed too.

Fully qualified keys (as seen from the parent database) take the form of prefix + key where prefix is separator + name + separator. If name is empty, the effective prefix is two separators. Sublevels can be nested: if db is itself a sublevel then the effective prefix is a combined prefix, e.g. '!one!!two!'. Note that a parent database will see its own keys as well as keys of any nested sublevels:

// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
  console.log([key, value])
}

📌 The key structure is equal to that of subleveldown, which offered sublevels before they were built into abstract-level and thus classic-level. This means that a classic-level sublevel can read sublevels previously created with (and populated by) subleveldown.

Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on choice of encoding. Which is to say: binary keys are fully supported. The name must however always be a string and can only contain ASCII characters.

The optional options object may contain:

  • separator (string, default: '!'): Character for separating sublevel names from user keys and each other. Must sort before characters used in name. An error will be thrown if that's not the case.
  • keyEncoding (string or object, default 'utf8'): encoding to use for keys
  • valueEncoding (string or object, default 'utf8'): encoding to use for values.

The keyEncoding and valueEncoding options are forwarded to the AbstractLevel constructor and work the same, as if a new, separate database was created. They default to 'utf8' regardless of the encodings configured on db. Other options are forwarded too but classic-level has no relevant options at the time of writing. For example, setting the createIfMissing option will have no effect. Why is that?

Like regular databases, sublevels open themselves but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. If the parent database is closed, then opening the sublevel will fail and subsequent operations on the sublevel will yield errors with code LEVEL_DATABASE_NOT_OPEN.

db.approximateSize(start, end[, options][, callback])

Get the approximate number of bytes of file system space used by the range [start..end). The result might not include recently written data. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode start and end.

The callback function will be called with a single error argument if the operation failed. If successful the first argument will be null and the second argument will be the approximate size as a number. If no callback is provided, a promise is returned. This method is an additional method that is not part of the abstract-level interface.
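
For example (a sketch; the reported size depends on compaction state and may exclude recent writes):

// Approximate bytes of file system space used by keys in ['a'..'z')
const size = await db.approximateSize('a', 'z')
console.log(size) // e.g. 4096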

db.compactRange(start, end[, options][, callback])

Manually trigger a database compaction in the range [start..end]. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode start and end.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned. This method is an additional method that is not part of the abstract-level interface.
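
For example (a sketch with hypothetical keys):

// Compact all entries with keys in the range ['a'..'z']
await db.compactRange('a', 'z')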

db.getProperty(property)

Get internal details from LevelDB. When issued with a valid property string, a string value is returned synchronously. Valid properties are:

  • leveldb.num-files-at-levelN: return the number of files at level N, where N is an integer representing a valid level (e.g. "0").
  • leveldb.stats: returns a multi-line string describing statistics about LevelDB's internal operation.
  • leveldb.sstables: returns a multi-line string describing all of the sstables that make up contents of the current database.

This method is an additional method that is not part of the abstract-level interface.
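
For example (output is LevelDB-internal and version-dependent):

// A multi-line string describing LevelDB's internal operation
console.log(db.getProperty('leveldb.stats'))

// The number of files at level 0, as a string
console.log(db.getProperty('leveldb.num-files-at-level0'))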

chainedBatch

chainedBatch.put(key, value[, options])

Queue a put operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY or LEVEL_INVALID_VALUE error if key or value is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.
  • sublevel (sublevel instance): act as though the put operation is performed on the given sublevel, to similar effect as sublevel.batch().put(key, value). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key and value will be encoded by the sublevel (using the default encodings of the sublevel unless keyEncoding and / or valueEncoding are provided).

chainedBatch.del(key[, options])

Queue a del operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY error if key is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • sublevel (sublevel instance): act as though the del operation is performed on the given sublevel, to similar effect as sublevel.batch().del(key). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key will be encoded by the sublevel (using the default key encoding of the sublevel unless keyEncoding is provided).

chainedBatch.clear()

Clear all queued operations on this batch.

chainedBatch.write([options][, callback])

Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

The optional options object may contain:

  • sync (boolean, default: false): same as described for db.put().

Note that write() does not take encoding options. Those can only be set on put() and del() because classic-level synchronously forwards such calls to LevelDB and thus needs keys and values to be encoded at that point.

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

After write() or close() has been called, no further operations are allowed.

chainedBatch.close([callback])

Free up underlying resources. This should be done even if the chained batch has zero queued operations. Automatically called by write() so normally not necessary to call, unless the intent is to discard a chained batch without committing it. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the batch is an idempotent operation, such that calling close() more than once is allowed and makes no difference.

chainedBatch.length

The number of queued operations on the current batch.

chainedBatch.db

A reference to the database that created this chained batch.

iterator

An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key 'a' comes before 'b' and key '10' comes before '2'.

An iterator reads from a snapshot of the database, created at the time db.iterator() was called. This means the iterator will not see the data of simultaneous write operations.

Iterators can be consumed with for await...of and iterator.all(), or by manually calling iterator.next() or nextv() in succession. In the latter case, iterator.close() must always be called. In contrast, finishing, throwing, breaking or returning from a for await...of loop automatically calls iterator.close(), as does iterator.all().

An iterator reaches its natural end in the following situations:

  • The end of the database has been reached
  • The end of the range has been reached
  • The last iterator.seek() was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent next(), nextv() or all() calls (including a combination thereof) and will throw an error with code LEVEL_ITERATOR_BUSY if that happens:

// Not awaited and no callback provided
iterator.next()

try {
  // Which means next() is still in progress here
  iterator.all()
} catch (err) {
  console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}

for await...of iterator

Yields entries, which are arrays containing a key and value. The type of key and value depends on the options passed to db.iterator().

try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}

iterator.next([callback])

Advance to the next entry and yield that entry. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null, a key and a value. The type of key and value depends on the options passed to db.iterator(). If the iterator has reached its natural end, both key and value will be undefined.

If no callback is provided, a promise is returned for either an array (containing a key and value) or undefined if the iterator reached its natural end.

Note: iterator.close() must always be called once there's no intention to call next() or nextv() again. Even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.

iterator.nextv(size[, options][, callback])

Advance repeatedly and get at most size entries in a single call. Can be faster than repeated next() calls. The size argument must be an integer and has a soft minimum of 1. There are no options currently.

If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array. If no callback is provided, a promise is returned.

const iterator = db.iterator()

while (true) {
  const entries = await iterator.nextv(100)

  if (entries.length === 0) {
    break
  }

  for (const [key, value] of entries) {
    // ..
  }
}

await iterator.close()

iterator.all([options][, callback])

Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use next(), nextv() or for await...of. There are no options currently. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. If no callback is provided, a promise is returned.

const entries = await db.iterator({ limit: 100 }).all()

for (const [key, value] of entries) {
  // ..
}

iterator.seek(target[, options])

Seek to the key closest to target. Subsequent calls to iterator.next(), nextv() or all() (including implicit calls in a for await...of loop) will yield entries with keys equal to or larger than target, or equal to or smaller than target if the reverse option passed to db.iterator() was true.

The optional options object may contain:

  • keyEncoding: custom key encoding, used to encode the target. By default the keyEncoding option of the iterator is used or (if that wasn't set) the keyEncoding of the database.

If range options like gt were passed to db.iterator() and target does not fall within that range, the iterator will reach its natural end.
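
For example (a minimal sketch):

const iterator = db.iterator()

// Skip entries with keys that sort before 'x'
iterator.seek('x')

// Yields the first entry with a key >= 'x', or undefined if none
const entry = await iterator.next()

await iterator.close()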

iterator.close([callback])

Free up underlying resources. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the iterator is an idempotent operation, such that calling close() more than once is allowed and makes no difference.

If a next(), nextv() or all() call is in progress, closing will wait for that to finish. After close() has been called, further calls to next(), nextv() or all() will yield an error with code LEVEL_ITERATOR_NOT_OPEN.

iterator.db

A reference to the database that created this iterator.

iterator.count

Read-only getter that indicates how many keys have been yielded so far (by any method) excluding calls that errored or yielded undefined.

iterator.limit

Read-only getter that reflects the limit that was set in options. Greater than or equal to zero. Equals Infinity if no limit, which allows for easy math:

const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count

keyIterator

A key iterator has the same interface as iterator except that its methods yield keys instead of entries. For the keyIterator.next(callback) method, this means that the callback will receive two arguments (an error and key) instead of three. Usage is otherwise the same.

valueIterator

A value iterator has the same interface as iterator except that its methods yield values instead of entries. For the valueIterator.next(callback) method, this means that the callback will receive two arguments (an error and value) instead of three. Usage is otherwise the same.

sublevel

A sublevel is an instance of the AbstractSublevel class (as found in abstract-level) which extends AbstractLevel and thus has the same API as documented above, except for additional classic-level methods like db.approximateSize(). Sublevels have a few additional properties.

sublevel.prefix

Prefix of the sublevel. A read-only string property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'

sublevel.db

Parent database. A read-only property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.db === db) // true
console.log(nested.db === db) // true

ClassicLevel.destroy(location[, callback])

Completely remove an existing LevelDB database directory. You can use this method in place of a full directory removal if you want to be sure to only remove LevelDB-related files. If the directory only contains LevelDB files, the directory itself will be removed as well. If there are additional, non-LevelDB files in the directory, those files and the directory will be left alone.

The callback function will be called when the destroy operation is complete, with a possible error argument. If no callback is provided, a promise is returned. This method is an additional method that is not part of the abstract-level interface.

Before calling destroy(), close a database if it's using the same location:

const db = new ClassicLevel('./db')
await db.close()
await ClassicLevel.destroy('./db')

ClassicLevel.repair(location[, callback])

Attempt a restoration of a damaged database. It can also be used to perform a compaction of the LevelDB log into table files. From LevelDB documentation:

If a DB cannot be opened, you may attempt to call this method to resurrect as much of the contents of the database as possible. Some data may be lost, so be careful when calling this function on a database that contains important information.

The callback function will be called when the repair operation is complete, with a possible error argument. If no callback is provided, a promise is returned. This method is an additional method that is not part of the abstract-level interface.

You will find information on the repair operation in the LOG file inside the database directory.

Before calling repair(), close a database if it's using the same location.
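
As with destroy(), a minimal sketch:

const db = new ClassicLevel('./db')
await db.close()
await ClassicLevel.repair('./db')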

Development

Getting Started

This repository uses git submodules. Clone it recursively:

git clone --recurse-submodules https://github.com/Level/classic-level.git

Alternatively, initialize submodules inside the working tree:

cd classic-level
git submodule update --init --recursive

Contributing

Level/classic-level is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Publishing

  1. Increment the version: npm version ..
  2. Push to GitHub: git push --follow-tags
  3. Wait for CI to complete
  4. Download prebuilds into ./prebuilds: npm run download-prebuilds
  5. Optionally verify loading a prebuild: npm run test-prebuild
  6. Optionally verify which files npm will include: canadian-pub
  7. Finally: npm publish

Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT

classic-level's People

Contributors

chjj, cswendrowski, dependabot-preview[bot], dependabot[bot], dominictarr, duralog, filoozom, ggreer, greenkeeper[bot], greenkeeperio-bot, heavyk, juliangruber, kesla, mafintosh, matthewkeil, max-mapper, mcollina, meirionhughes, mlix8hoblc, mscdex, obastemur, peakji, ralphtheninja, raynos, ronag, rvagg, sandfox, sharvil, thlorenz, vweevers


classic-level's Issues

Segmentation Fault alpine linux in Docker on arm mac

I've run into a segmentation fault on Alpine Linux running in Docker on an ARM Mac.

import { ClassicLevel } from 'classic-level';

const level = new ClassicLevel('./level', {
  keyEncoding: 'buffer',
  valueEncoding: 'buffer',
});

await level.put(Buffer.from('01', 'hex'), Buffer.from('01', 'hex'));

console.log('Done!');
FROM node:16-alpine

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ENV NODE_ENV production
COPY package* /usr/src/app/
RUN npm ci
COPY . /usr/src/app/

CMD ["npm", "start"]
% docker build -t test . 
% docker run test

> [email protected] start
> node index.js

npm ERR! path /usr/src/app
npm ERR! command failed
npm ERR! signal SIGSEGV
npm ERR! command sh -c node index.js

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2022-05-26T19_40_15_807Z-debug-0.log

Support for readonly mode

It would be very helpful for our use case if classic-level were to support a readonly mode, where the database can be opened by multiple processes without compaction on startup, when those accessing the database do not need to make any changes.

There is an open issue for this feature on the upstream project already, though I believe that project may be feature-locked. I created this issue in the hope that it may see some traction, if such a feature is possible to implement at classic-level's level.

Snapshots caveats

There are some caveats regarding snapshots and read/write ordering that are not entirely obvious.

e.g. in the following case:

const promise = db.put(...)
await db.get(...)
await promise

The put will not be visible to following reads until after the put has completed. That is, it is not enough to schedule the write for it to be visible in reads; it actually has to be completed first.

Seek data race

I believe we have a data race for Seek. We execute seek in the main thread but call next in the worker thread. This means we concurrently access the same state under Iterator, in particular, didSeek_ and first_. Not sure how big of a problem it is and whether or not there are further issues with this.

`linux-arm` and `linux-arm64` prebuild comes with GLIBC which is too new for Raspberry Pi

The readme lists that this should work on Raspberry Pi, but we've traced down a GLIBC issue that means this is not the case.

Reproduced on an ARM64 laptop running Ubuntu 18:

Error: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/cswendrowski/foundryvtt/resources/app/node_modules/classic-level/prebuilds/linux-arm64/node.napi.armv8.node)

Ubuntu 18 ships with GLIBC 2.27, Raspbian and Pi OS 10 ship 2.28, and Pi OS 11 ships 2.29.

On an x64 machine also running Ubuntu 18, the prebuild comes compiled with an older version of GLIBC and runs correctly on 2.27.

Any guidance or assistance would be appreciated

Auto open?

I might be misunderstanding something, but I thought it was not necessary to explicitly open and wait for the db to open? However, I'm still getting a Database is not open error when doing something like:

const db = new ClassicLevel(...)
const data = db.getMany(...) /// kStatus === 'closed'

Are writes ordered?

I see that we are using napi_create_async_work. However, I can't find anywhere whether or not work items are guaranteed to be executed in order. If not, then we might have a problem with write ordering, i.e. preceding writes overwriting following writes.

Do we need to create our own per db ordered work queue?

"database is not open" on NFS

We're (well.. the wife) users of the proprietary software mentioned in #63.

They just say "must not use NFS", and the reason appears to be somewhere in this library, since leveldb itself works just fine on NFS.

When opening a db on NFS, we just get:

FoundryVTT | 2023-07-18 18:46:15 | [error] Database is not open
Error: Database is not open
    at LevelDatabase.keys (/home/foundry/resources/app/node_modules/abstract-level/abstract-level.js:697:13)
    at LevelDatabase.compactFull (file:///home/foundry/resources/app/dist/database/backend/level-database.mjs:1:1647)
    at LevelDatabase.close (file:///home/foundry/resources/app/dist/database/backend/level-database.mjs:1:1171)
    at Module.disconnect (file:///home/foundry/resources/app/dist/database/database.mjs:1:1658)
    at World.deactivate (file:///home/foundry/resources/app/dist/packages/world.mjs:1:11206)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

Unfortunately, the relevant code is obfuscated.

strace reveals no obvious reason why NFS would be a relevant factor; there is no call to flock.

/home/foundry # strace -f -p 87 2>&1 | grep messages
[pid    87] access("/data/Data/worlds/kmkm/data/messages", F_OK) = 0
[pid    94] mkdir("/data/Data/worlds/kmkm/data/messages", 0777 <unfinished ...>
[pid    96] statx(AT_FDCWD, "/data/Data/worlds/kmkm/data/messages", AT_STATX_SYNC_AS_STAT, STATX_ALL,  <unfinished ...>
[pid    95] mkdir("/data/Data/worlds/kmkm/data/messages", 0755 <unfinished ...>
[pid    95] rename("/data/Data/worlds/kmkm/data/messages/LOG", "/data/Data/worlds/kmkm/data/messages/LOG.old" <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/LOG", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666 <unfinished ...>
[pid    95] mkdir("/data/Data/worlds/kmkm/data/messages", 0755 <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/LOCK", O_RDWR|O_CREAT|O_LARGEFILE, 0644 <unfinished ...>
[pid    95] access("/data/Data/worlds/kmkm/data/messages/CURRENT", F_OK <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/CURRENT", O_RDONLY|O_LARGEFILE) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages/MANIFEST-000002", O_RDONLY|O_LARGEFILE) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages", O_RDONLY|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000003.log", O_RDONLY|O_LARGEFILE) = 39
[pid    95] readv(39, [{iov_base="|\360\244_ \r\1\1\0\0\0\0\0\0\0\2\0\0\0\1\32!messages!J"..., iov_len=32767}, {iov_base="", iov_len=1024}], 2) = 3367
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.ldb", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 39
[pid    95] writev(39, [{iov_base="", iov_len=0}, {iov_base="\242\32\200\0\"\253\f!messages!JMUZweuXaLnRGbw"..., iov_len=1950}], 2) = 1950
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.ldb", O_RDONLY|O_LARGEFILE) = 39
[pid    95] stat("/data/Data/worlds/kmkm/data/messages/000005.ldb", {st_mode=S_IFREG|0644, st_size=2107, ...}) = 0
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.sst", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
[pid    95] unlink("/data/Data/worlds/kmkm/data/messages/000005.ldb") = 0

felddy/foundryvtt-docker#735

`classic-level` rebuild failed when `electron-forge package`

problem

I am using level as my local db server. When packaging my Electron app, I got this error and could not fix it.
I tried using a different version of Electron, but got the same result.

 make: *** No rule to make target `Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o', needed by `Release/leveldb.a'.  Stop.
  Error: `make` failed with exit code: 2
  at ChildProcess.onExit (/Users/neptune/github/xxx/app/node_modules/node-gyp/lib/build.js:203:23)
  at ChildProcess.emit (node:events:513:28)
  at ChildProcess._handle.onexit (node:internal/child_process:291:12)

An unhandled rejection has occurred inside Forge:
Error: node-gyp failed to rebuild '/private/var/folders/v6/51277h6j2258m8yvl9rknkhr0000gp/T/electron-packager/darwin-arm64/nnk-24h-live-darwin-arm64-MeCAK9/Electron.app/Contents/Resources/app/node_modules/classic-level'
at ChildProcess.<anonymous> (/Users/neptune/github/xxx/app/node_modules/@electron/rebuild/lib/module-type/node-gyp/node-gyp.js:118:24)
    at ChildProcess.emit (node:events:513:28)
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)

env

node version v18.15.0
electron abi 116
electron version v25.8.1
classic-level 1.3.0

deps

├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

QOL API methods

While we're contributing, we were wondering if you had any interest in any of the following quality of life methods we've added to our wrapper for ClassicLevel:

  1. compactFull - Convenience function that discovers the first and last key and calls compactRange
  2. size - Convenience function that discovers the first and last key and calls approximateSize
  3. has - Definitely a feature that would be better if it existed upstream, but iterates keys until specified key is discovered (if at all)
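
For reference, these can be sketched on top of the documented API like so (hypothetical helper functions, not part of classic-level; note that approximateSize treats the range as [start..end), so the last entry may be excluded):

// Compact the whole keyspace by discovering the first and last key
async function compactFull (db) {
  const [first] = await db.keys({ limit: 1 }).all()
  const [last] = await db.keys({ limit: 1, reverse: true }).all()
  if (first !== undefined) await db.compactRange(first, last)
}

// Approximate byte size of the whole database
async function size (db) {
  const [first] = await db.keys({ limit: 1 }).all()
  const [last] = await db.keys({ limit: 1, reverse: true }).all()
  return first === undefined ? 0 : db.approximateSize(first, last)
}

// Whether a key exists, without fetching or decoding its value
async function has (db, key) {
  const keys = await db.keys({ gte: key, lte: key, limit: 1 }).all()
  return keys.length > 0
}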

No native build was found for platform=darwin arch=arm64

I am working on a Mac with the M1 chipset. After executing "prebuild-darwin-x64+arm64": "prebuildify -t 8.14.0 --napi --strip --arch x64+arm64",
I got this error:
No native build was found for platform=darwin arch=arm64 runtime=node abi=111 uv=1 armv=8 libc=glibc node=19.8.1

Possible memory leak when putting a value with the same key

When putting a value into the db with an existing key (and the same value), I see that the size of the db folder just grows. Expected behaviour is that it will remain the same size if I re-insert an entry, as the old value is to be overwritten.

I use

store.put(key, value, { valueEncoding: "view" })

I get the same results using the batch put method.
I get the same results using sync: true

Deleting the entry seems to clear up all the data that has accumulated for all insertions on the same key.

I am using version "1.3.0"

Avoid passing options objects to C++

If we have to pass, say, fillCache and asBuffer options from JS to C++, it's faster to pass those as boolean arguments rather than as a { fillCache, asBuffer } object.

In cases where we can't replace the use of options objects, we can still optimize by replacing napi_has_named_property() plus napi_get_named_property() with just one napi_get_named_property() call and checking if the return value is napi_ok.

Infinite compaction loop?

I'm trying to access an Ethereum DB which is about 1 TB in size.

Immediately on calling open(), the leveldb LOG file starts printing the following lines over and over again:

2023/01/25-19:00:25.972517 7f512de5e700 Compacting 1@2 + 8@3 files
2023/01/25-19:00:25.983790 7f512de5e700 Generated table #13798933@2: 1426 keys, 524408 bytes
2023/01/25-19:00:25.989548 7f512de5e700 Generated table #13798934@2: 1335 keys, 484178 bytes
2023/01/25-19:00:25.994915 7f512de5e700 Generated table #13798935@2: 1443 keys, 534255 bytes
2023/01/25-19:00:26.000565 7f512de5e700 Generated table #13798936@2: 1405 keys, 507921 bytes
2023/01/25-19:00:26.007436 7f512de5e700 Generated table #13798937@2: 1484 keys, 527112 bytes
2023/01/25-19:00:26.012568 7f512de5e700 Generated table #13798938@2: 1535 keys, 559004 bytes
2023/01/25-19:00:26.019553 7f512de5e700 Generated table #13798939@2: 2506 keys, 548458 bytes
2023/01/25-19:00:26.024701 7f512de5e700 Generated table #13798940@2: 2276 keys, 361713 bytes
2023/01/25-19:00:26.024714 7f512de5e700 Compacted 1@2 + 8@3 files => 4047049 bytes
2023/01/25-19:00:26.056418 7f512de5e700 compacted to: files[ 0 66 842 7135 65110 406111 105772 ]
2023/01/25-19:00:26.312935 7f512de5e700 Delete type=2 #13798056
2023/01/25-19:00:26.319244 7f512de5e700 Delete type=2 #13798932
2023/01/25-19:00:26.454692 7f512de5e700 Delete type=2 #13740731
2023/01/25-19:00:26.454883 7f512de5e700 Delete type=2 #13740732
2023/01/25-19:00:26.454990 7f512de5e700 Delete type=2 #13740733
2023/01/25-19:00:26.455085 7f512de5e700 Delete type=2 #13740734
2023/01/25-19:00:26.455182 7f512de5e700 Delete type=2 #13740735
2023/01/25-19:00:26.455278 7f512de5e700 Delete type=2 #13740736
2023/01/25-19:00:26.455373 7f512de5e700 Delete type=2 #13740737

Trying to read from the DB seems to take longer than usual, and the longer I wait, the slower the read speed becomes.

Any pointers on what to look at?
Can I somehow disable compaction when opening a DB?

Thanks!

Poll: should the location directory be created recursively?

Follow-up for #6.

In the following example, should the location directory be created recursively? Such that, if the foo directory does not exist, it will be created (on open) rather than yielding an error?

const db = new ClassicLevel('foo/bar')

Creating it recursively is the current behavior of classic-level (and new compared to leveldown), which may break expectations given typical filesystem behavior. Or it could be a convenient feature, if the database is considered to abstract away the filesystem.

React with thumbs up to create the directory recursively; react with thumbs down to yield an error. This question is about what the default behavior should be, so I'm purposefully not including a poll option to make either behavior opt-in.

Build error?

Trying to build fails with a missing snappy header.

classic-level$ npm i

> [email protected] install
> node-gyp-build

gyp info it worked if it ends with ok
gyp info using node-gyp@9.0.0
gyp info using node@17.8.0 | darwin | arm64
gyp info find Python using Python version 3.9.10 found at "/opt/homebrew/opt/python@3.9/bin/python3.9"
gyp info spawn /opt/homebrew/opt/python@3.9/bin/python3.9
gyp info spawn args [
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args   'binding.gyp',
gyp info spawn args   '-f',
gyp info spawn args   'make',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/build/config.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/addon.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/Library/Caches/node-gyp/17.8.0/include/node/common.gypi',
gyp info spawn args   '-Dlibrary=shared_library',
gyp info spawn args   '-Dvisibility=default',
gyp info spawn args   '-Dnode_root_dir=/Users/ronagy/Library/Caches/node-gyp/17.8.0',
gyp info spawn args   '-Dnode_gyp_dir=/Users/ronagy/GitHub/classic-level/node_modules/node-gyp',
gyp info spawn args   '-Dnode_lib_file=/Users/ronagy/Library/Caches/node-gyp/17.8.0/<(target_arch)/node.lib',
gyp info spawn args   '-Dmodule_root_dir=/Users/ronagy/GitHub/classic-level',
gyp info spawn args   '-Dnode_engine=v8',
gyp info spawn args   '--depth=.',
gyp info spawn args   '--no-parallel',
gyp info spawn args   '--generator-output',
gyp info spawn args   'build',
gyp info spawn args   '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
  CXX(target) Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o
In file included from ../deps/leveldb/leveldb-1.20/db/builder.cc:7:
In file included from ../deps/leveldb/leveldb-1.20/db/filename.h:14:
In file included from ../deps/leveldb/leveldb-1.20/port/port.h:16:
../deps/leveldb/leveldb-1.20/port/port_posix.h:43:10: fatal error: 'snappy.h' file not found
#include <snappy.h>
         ^~~~~~~~~~
1 error generated.
make: *** [Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o] Error 1
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (node:events:527:28)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
gyp ERR! System Darwin 21.3.0
gyp ERR! command "/Users/ronagy/.nvm/versions/node/v17.8.0/bin/node" "/Users/ronagy/GitHub/classic-level/node_modules/.bin/node-gyp" "rebuild"
gyp ERR! cwd /Users/ronagy/GitHub/classic-level
gyp ERR! node -v v17.8.0
gyp ERR! node-gyp -v v9.0.0
gyp ERR! not ok 
npm ERR! code 1
npm ERR! path /Users/ronagy/GitHub/classic-level
npm ERR! command failed
npm ERR! command sh -c node-gyp-build

Meaning of "classic"

Hi, thank you for the very nice library.
Sorry, this might be a stupid question: what does the "classic" mean?
Is it not modern? Is there a "new" one that should be used instead?
I am confused because my English is poor.

what is the difference between them

I'm trying to use LevelDB in my program. You have done a lot of brilliant work to bring LevelDB to Node.js, but I found the set of repositories a little confusing. Maybe there should be a graph that makes the relationships between them clear to users. It's just a small suggestion.

I am not sure, but I guess the relationships are something like this: abstract-level is the part written entirely in async/await-style JS, leveldown is a wrapper with a convenient interface on top of the C++ LevelDB binary, and levelup adds Node.js promise-like features to leveldown.

raw mode?

In order to get key + value we use entries, which are arrays of [key, value]. However, allocating an array for each key-value pair is slow. It would be nice if we could somehow return a flattened array, with key = xs[idx * 2 + 0] and val = xs[idx * 2 + 1], and avoid the O(n) array allocations. A sketch of the idea follows.
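A sketch of the proposed layout, shown against the existing nextv() for comparison and assuming an open db (nextvFlat is a hypothetical name, not an existing method):

const it = db.iterator()

// Today: nextv(size) allocates one [key, value] array per entry.
const entries = await it.nextv(100) // [[k0, v0], [k1, v1], ...]

// Proposed (hypothetical method): one flat array with keys at even
// indices and values at odd indices.
const xs = await it.nextvFlat(100) // [k0, v0, k1, v1, ...]
for (let i = 0; i < xs.length / 2; i++) {
  const key = xs[i * 2 + 0]
  const value = xs[i * 2 + 1]
}

await it.close()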

When is the snapshot created with `db.get()`?

Hi, thank you for the great library.

I have a question about the document for snapshot with db.get().
https://github.com/Level/classic-level#dbgetkey-options-callback

The documentation says that the snapshot is created at the time db.get() is called. I took this to mean that the snapshot holds the latest data as of that call. However, the documentation also says that the operation should not see the data of simultaneous write operations.

Can the data be changed after db.get() is called (and the snapshot is created) but before db.get() returns? That is, I understand that an iterator might return old data because it reads lazily, but db.get() does not read lazily, so I assumed it returns the latest data before something changes it. Or is there another way to get the latest data? The race I have in mind is sketched below.

Thank you.
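To make the question concrete, a sketch of the race (assuming an open db with an existing entry 'a'):

const promise = db.get('a')   // snapshot is created here, per the docs
await db.put('a', 'changed')  // write that overlaps with the read
const value = await promise   // the snapshotted value, or 'changed'?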

Seek while next is executing

I'm a little unsure, but it seems the behavior of seeking while next() is executing is somewhat undefined.

Seeking is done on the main thread and next() on a worker thread, so the ordering is currently undefined, i.e. a seek issued after next() can affect the result of that next(). A sketch of the interleaving follows.
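A minimal sketch of the interleaving in question, assuming an open db:

const it = db.iterator()
const promise = it.next()   // dispatched to the worker thread
it.seek('x')                // runs synchronously on the main thread
const entry = await promise // may or may not observe the seek
await it.close()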

Workers

I see that LevelDB does support access from multiple threads. I'm wondering what possibilities this has for improved performance, i.e. in theory, could we allow access to the same db from multiple workers?

Specify a custom prebuilds location

My application is bundled for production use without node_modules installed. Currently classic-level looks for the binaries either in node_modules or in ./prebuilds (as far as I know). It would be nice to be able to specify a custom path upon initialization, e.g. as a config option along the lines of the sketch below.
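A hypothetical shape for such an option (prebuildsPath does not exist in the current API; the name is made up for illustration):

const db = new ClassicLevel('./db', {
  prebuildsPath: '/opt/myapp/prebuilds' // hypothetical option
})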

A database modified in Linux fails to open in Windows?

We have been working with classic-level as the data backend for an application that supports Windows/Mac/Linux. The data stored by the application needs to be portable. We had not encountered problems with this, but today I ran into a scary problem where a database most recently updated in a Linux environment can no longer be opened on Windows:

Context

  • The database was loaded and modified in Amazon Linux 2 using Node v16.19.0
  • The database is then attempted to be opened in Windows 11 using Node v18.13.0
  • Both environments using classic-level 1.2.0

Error

When attempting to call await db.open():

ModuleError: Failed to connect to database "actors": Database is not open
    at maybeOpened (C:\Users\aaclayton\Documents\Foundry\foundryvtt\node_modules\abstract-level\abstract-level.js:133:18)
    at C:\Users\aaclayton\Documents\Foundry\foundryvtt\node_modules\abstract-level\abstract-level.js:160:13
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
  code: 'LEVEL_DATABASE_NOT_OPEN',
  [cause]: Error: The filename, directory name, or volume label syntax is incorrect. C:\Users\aaclayton\Documents\Foundry\FoundryData\Data\worlds\ember-dev\data\actors/MANIFEST-000021 {
    code: 'LEVEL_IO_ERROR'
  }
}

Contents of the database folder are as follows:

[screenshot: contents of the database folder]

The CURRENT file does reference MANIFEST-000021.

This error is very alarming, because we had been operating under the expectation that databases created and modified by LevelDB (and therefore classic-level) are portable. Is this expectation wrong? Are there any troubleshooting steps or further details I can provide to help diagnose the problem?

The absolute path (location) I'm providing to the ClassicLevel constructor is C:\Users\aaclayton\Documents\Foundry\FoundryData\Data\worlds\ember-dev\data\actors. Something internal to ClassicLevel (or LevelDB) appends the manifest file name to this path using a / separator rather than a Windows path separator. I don't know whether this is part of the issue or innocuous.

This error is not reproducible with every DB I create in a Linux environment and transfer to Windows; I have successfully moved databases around before, in either direction. I have not encountered this error until now, and it is highly troubling. Thank you very much for your guidance.

Error While Installing Hardhat

Hi all,
I'm facing an issue while installing Hardhat using yarn add --dev hardhat; it fails with this error:

error ./node_modules/classic-level: Command failed.
Exit code: 127
Command: node-gyp-build

I tried npm as well and got the same error. The npm version of the error:

npm ERR! code 127
npm ERR! path ./node_modules/classic-level
npm ERR! command failed
npm ERR! command sh -c node-gyp-build
npm ERR! sh: 1: node-gyp-build: not found

My Node version: v18.14.0

Compacting the database on close: advice or feature request

Context

Our application opens and closes databases during its lifecycle. When we are done working with a certain database, we would like to compact it so that it is left in the most disk-space-efficient form possible. It appears that the database is compacted when it is first opened, but not when it is closed.

Feature Request

Ideally, the db.close() method would support an option for doing this as part of the close workflow, something like:

await db.close({compact: true});

Other Solutions

We tried to perform compaction manually but encountered some unexpected outcomes and consequences. If the above feature request is not viable, some advice would be helpful. My thought was to create a function that compacts the entire key range of the DB, as follows:

  /**
   * Compact the entire database by compacting the range between its
   * first and last key.
   * See https://github.com/Level/classic-level#dbcompactrangestart-end-options-callback
   * @returns {Promise<void>}
   */
  async compactFull() {
    const i0 = this.keys({limit: 1, fillCache: false});
    const k0 = await i0.next();
    await i0.close();
    const i1 = this.keys({limit: 1, reverse: true, fillCache: false});
    const k1 = await i1.next();
    await i1.close();
    if (k0 === undefined || k1 === undefined) return; // empty database
    return this.compactRange(k0, k1, {keyEncoding: "utf8"});
  }
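For reference, this is how we invoke it when shutting down:

await db.compactFull();
await db.close();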

Unfortunately, calling this method produces the opposite effect of what I had anticipated: disk utilization nearly doubles. Here are the file sizes I measured:

// db.open()
1,088,289 bytes

// Modify some records
1,089,415 bytes

// Call compactFull()
2,245,000 bytes

// db.close()
2,245,000 bytes

// db.open()
1,156,988 bytes

// Modify some records
1,157,189 bytes

// Call db.compactFull()
2,244,613 bytes

// db.close()
2,244,613 bytes

// db.open()
1,088,522 bytes

Is there a flaw in the way I am using the compactRange method? Have I misunderstood what should happen when compacting a range? Is there some other way to solve our use case?

Thank you for your guidance!

missing try/catch for bad_alloc

LevelDB doesn't throw exceptions per se, but both the bindings and LevelDB itself use STL containers, whose default allocators throw std::bad_alloc. We should probably have a try/catch at each entry point, catching bad allocs and returning a corresponding leveldb::Status.
