
abstract-level's Introduction

level

Universal abstract-level database for Node.js and browsers. This is a convenience package that exports classic-level in Node.js and browser-level in browsers, making it an ideal entry point to start creating lexicographically sorted key-value databases.

📌 Which module should I use? What is abstract-level? Head over to the FAQ.



Usage

If you are upgrading: please see UPGRADING.md.

const { Level } = require('level')

// Create a database
const db = new Level('example', { valueEncoding: 'json' })

// Add an entry with key 'a' and value 1
await db.put('a', 1)

// Add multiple entries
await db.batch([{ type: 'put', key: 'b', value: 2 }])

// Get value of key 'a': 1
const value = await db.get('a')

// Iterate entries with keys that are greater than 'a'
for await (const [key, value] of db.iterator({ gt: 'a' })) {
  console.log(value) // 2
}

All asynchronous methods also support callbacks.

Callback example
db.put('a', { x: 123 }, function (err) {
  if (err) throw err

  db.get('a', function (err, value) {
    if (err) throw err
    console.log(value) // { x: 123 }
  })
})

TypeScript type declarations are included and cover the methods that are common between classic-level and browser-level. Usage from TypeScript requires generic type parameters.

TypeScript example
// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to Level<string, string>.
const db = new Level<string, any>('./db', { valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

// It works the same for sublevels
const abc = db.sublevel('abc')
const xyz = db.sublevel<string, any>('xyz', { valueEncoding: 'json' })

Install

With npm do:

npm install level

For use in browsers, this package is best used with browserify, webpack, rollup or similar bundlers. For a quick start, visit browserify-starter or webpack-starter.

Supported Platforms

At the time of writing, level works in Node.js 12+ and Electron 5+ on Linux, macOS, Windows and FreeBSD, including ARM platforms like Raspberry Pi and Android, and will continue to work in future Node.js and Electron releases thanks to Node-API. In browsers, it works in Chrome, Firefox, Edge, Safari, iOS Safari and Chrome for Android. For details, see Supported Platforms of classic-level and Browser Support of browser-level.

Binary keys and values are supported across the board.
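
For example, binary values can be stored and read with the built-in 'buffer' encoding (a minimal sketch):

const db = new Level('./db', { valueEncoding: 'buffer' })

await db.put('binary', Buffer.from([1, 2, 3]))
const value = await db.get('binary') // <Buffer 01 02 03>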

API

The API of level follows that of abstract-level. The documentation below covers it all except for Encodings, Events and Errors which are exclusively documented in abstract-level. For options and additional methods specific to classic-level and browser-level, please see their respective READMEs.

An abstract-level and thus level database is at its core a key-value database. A key-value pair is referred to as an entry here and typically returned as an array, comparable to Object.entries().

db = new Level(location[, options])

Create a new database or open an existing database. The location argument must be a directory path (relative or absolute) where LevelDB will store its files, or in browsers, the name of the IDBDatabase to be opened.

The optional options object may contain:

  • keyEncoding (string or object, default: 'utf8'): encoding to use for keys.
  • valueEncoding (string or object, default: 'utf8'): encoding to use for values.

See Encodings for a full description of these options. Other options (except passive) are forwarded to db.open() which is automatically called in a next tick after the constructor returns. Any read & write operations are queued internally until the database has finished opening. If opening fails, those queued operations will yield errors.
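
For example (a sketch, assuming a database that does not yet exist on disk):

// The createIfMissing option is forwarded to db.open()
const db = new Level('./db', { createIfMissing: false })

try {
  // Queued until opening fails, after which it yields an error
  await db.get('a')
} catch (err) {
  console.log(err.code) // e.g. 'LEVEL_DATABASE_NOT_OPEN'
}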

db.status

Read-only getter that returns a string reflecting the current state of the database:

  • 'opening' - waiting for the database to be opened
  • 'open' - successfully opened the database
  • 'closing' - waiting for the database to be closed
  • 'closed' - successfully closed the database.
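
For example (a minimal sketch; because open() is idempotent, awaiting it here merely waits for the automatic open to finish):

const db = new Level('./db')
console.log(db.status) // 'opening'

await db.open()
console.log(db.status) // 'open'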

db.open([callback])

Open the database. The callback function will be called with no arguments when successfully opened, or with a single error argument if opening failed. If no callback is provided, a promise is returned. Options passed to open() take precedence over options passed to the database constructor. The createIfMissing and errorIfExists options are not supported by browser-level.

The optional options object may contain:

  • createIfMissing (boolean, default: true): If true, create an empty database if one doesn't already exist. If false and the database doesn't exist, opening will fail.
  • errorIfExists (boolean, default: false): If true and the database already exists, opening will fail.
  • passive (boolean, default: false): Wait for, but do not initiate, opening of the database.

It's generally not necessary to call open() because it's automatically called by the database constructor. It may however be useful to capture an error from failure to open, that would otherwise not surface until another method like db.get() is called. It's also possible to reopen the database after it has been closed with close(). Once open() has then been called, any read & write operations will again be queued internally until opening has finished.

The open() and close() methods are idempotent. If the database is already open, the callback will be called in a next tick. If opening is already in progress, the callback will be called when that has finished. If closing is in progress, the database will be reopened once closing has finished. Likewise, if close() is called after open(), the database will be closed once opening has finished and the prior open() call will receive an error.
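
A sketch of capturing an open error early, rather than having it surface on a later operation:

const db = new Level('./example')

try {
  await db.open()
} catch (err) {
  console.error('Failed to open:', err)
}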

db.close([callback])

Close the database. The callback function will be called with no arguments if closing succeeded or with a single error argument if closing failed. If no callback is provided, a promise is returned.

A database may have associated resources like file handles and locks. When the database is no longer needed (for the remainder of a program) it's recommended to call db.close() to free up resources.

After db.close() has been called, no further read & write operations are allowed unless and until db.open() is called again. For example, db.get(key) will yield an error with code LEVEL_DATABASE_NOT_OPEN. Any unclosed iterators or chained batches will be closed by db.close() and can then no longer be used even when db.open() is called again.
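
For example (a minimal sketch):

await db.close()

try {
  await db.get('a')
} catch (err) {
  console.log(err.code) // 'LEVEL_DATABASE_NOT_OPEN'
}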

db.supports

A manifest describing the features supported by this database. Might be used like so:

if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}

db.get(key[, options][, callback])

Get a value from the database by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to decode the value.

The callback function will be called with an error if the operation failed. If the key was not found, the error will have code LEVEL_NOT_FOUND. If successful the first argument will be null and the second argument will be the value. If no callback is provided, a promise is returned.
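
A sketch of distinguishing a missing key from other errors:

let value

try {
  value = await db.get('does-not-exist')
} catch (err) {
  if (err.code === 'LEVEL_NOT_FOUND') {
    value = undefined
  } else {
    throw err
  }
}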

db.getMany(keys[, options][, callback])

Get multiple values from the database by an array of keys. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the keys.
  • valueEncoding: custom value encoding for this operation, used to decode values.

The callback function will be called with an error if the operation failed. If successful the first argument will be null and the second argument will be an array of values with the same order as keys. If a key was not found, the relevant value will be undefined. If no callback is provided, a promise is returned.
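
For example (a minimal sketch):

await db.put('a', '1')

const values = await db.getMany(['a', 'missing'])
console.log(values) // ['1', undefined]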

db.put(key, value[, options][, callback])

Add a new entry or overwrite an existing entry. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
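
For example, overriding the value encoding for a single operation (a sketch, assuming db was created with valueEncoding: 'json'):

// Stored as plain text rather than JSON
await db.put('raw', 'some text', { valueEncoding: 'utf8' })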

db.del(key[, options][, callback])

Delete an entry by key. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.

The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.

db.batch(operations[, options][, callback])

Perform multiple put and/or del operations in bulk. The operations argument must be an array of operations, which are executed sequentially but committed as a single atomic operation.

Each operation must be an object with at least a type property set to either 'put' or 'del'. If the type is 'put', the operation must have key and value properties. It may optionally have keyEncoding and / or valueEncoding properties to encode keys or values with a custom encoding for just that operation. If the type is 'del', the operation must have a key property and may optionally have a keyEncoding property.

An operation of either type may also have a sublevel property, to prefix the key of the operation with the prefix of that sublevel. This allows atomically committing data to multiple sublevels. Keys and values will be encoded by the sublevel, to the same effect as a sublevel.batch(..) call. In the following example, the first value will be encoded with 'json' rather than the default encoding of db:

const people = db.sublevel('people', { valueEncoding: 'json' })
const nameIndex = db.sublevel('names')

await db.batch([{
  type: 'put',
  sublevel: people,
  key: '123',
  value: {
    name: 'Alice'
  }
}, {
  type: 'put',
  sublevel: nameIndex,
  key: 'Alice',
  value: '123'
}])

The optional options object may contain:

  • keyEncoding: custom key encoding for this batch, used to encode keys.
  • valueEncoding: custom value encoding for this batch, used to encode values.

Encoding properties on individual operations take precedence. In the following example, the first value will be encoded with the 'utf8' encoding and the second with 'json'.

await db.batch([
  { type: 'put', key: 'a', value: 'foo' },
  { type: 'put', key: 'b', value: 123, valueEncoding: 'json' }
], { valueEncoding: 'utf8' })

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

chainedBatch = db.batch()

Create a chained batch by calling batch() with zero arguments. A chained batch can be used to build and eventually commit an atomic batch of operations. Depending on how it's used, it is possible to obtain greater performance with this form of batch(). On browser-level however, it is just sugar.

await db.batch()
  .del('bob')
  .put('alice', 361)
  .put('kim', 220)
  .write()

iterator = db.iterator([options])

Create an iterator. The optional options object may contain the following range options to control the range of entries to be iterated:

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be iterated. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be iterated. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries iterated will be the same.
  • reverse (boolean, default: false): iterate entries in reverse order. Beware that a reverse seek can be slower than a forward seek.
  • limit (number, default: Infinity): limit the number of entries yielded. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be returned instead of the lowest keys.

The gte and lte range options take precedence over gt and lt respectively. If no range options are provided, the iterator will visit all entries of the database, starting at the lowest key and ending at the highest key (unless reverse is true). In addition to range options, the options object may contain:

  • keys (boolean, default: true): whether to return the key of each entry. If set to false, the iterator will yield keys that are undefined. Prefer to use db.keys() instead.
  • values (boolean, default: true): whether to return the value of each entry. If set to false, the iterator will yield values that are undefined. Prefer to use db.values() instead.
  • keyEncoding: custom key encoding for this iterator, used to encode range options, to encode seek() targets and to decode keys.
  • valueEncoding: custom value encoding for this iterator, used to decode values.
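
For example, combining range options (a minimal sketch):

// Entries where 'a' < key <= 'x', at most 10 of them
for await (const [key, value] of db.iterator({ gt: 'a', lte: 'x', limit: 10 })) {
  console.log(key, value)
}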

📌 To instead consume data using streams, see level-read-stream and level-web-stream.

keyIterator = db.keys([options])

Create a key iterator, having the same interface as db.iterator() except that it yields keys instead of entries. If only keys are needed, using db.keys() may increase performance because values won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.keys() does not take keys, values and valueEncoding options.

// Iterate lazily
for await (const key of db.keys({ gt: 'a' })) {
  console.log(key)
}

// Get all at once. Setting a limit is recommended.
const keys = await db.keys({ gt: 'a', limit: 10 }).all()

valueIterator = db.values([options])

Create a value iterator, having the same interface as db.iterator() except that it yields values instead of entries. If only values are needed, using db.values() may increase performance because keys won't have to be fetched, copied or decoded. Options are the same as for db.iterator() except that db.values() does not take keys and values options. Note that it does take a keyEncoding option, relevant for the encoding of range options.

// Iterate lazily
for await (const value of db.values({ gt: 'a' })) {
  console.log(value)
}

// Get all at once. Setting a limit is recommended.
const values = await db.values({ gt: 'a', limit: 10 }).all()

db.clear([options][, callback])

Delete all entries or a range. Not guaranteed to be atomic. Accepts the following options (with the same rules as on iterators):

  • gt (greater than) or gte (greater than or equal): define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • lt (less than) or lte (less than or equal): define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse is true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to delete the last N entries.
  • limit (number, default: Infinity): limit the number of entries to be deleted. This number represents a maximum number of entries and will not be reached if the end of the range is reached first. A value of Infinity or -1 means there is no limit. When reverse is true the entries with the highest keys will be deleted instead of the lowest keys.
  • keyEncoding: custom key encoding for this operation, used to encode range options.

The gte and lte range options take precedence over gt and lt respectively. If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful or with an error if it failed. If no callback is provided, a promise is returned.
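
For example (a minimal sketch):

// Delete entries with keys less than 'b'
await db.clear({ lt: 'b' })

// Delete all entries
await db.clear()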

sublevel = db.sublevel(name[, options])

Create a sublevel that has the same interface as db (except for additional methods specific to classic-level or browser-level) and prefixes the keys of operations before passing them on to db. The name argument is required and must be a string.

const example = db.sublevel('example')

await example.put('hello', 'world')
await db.put('a', '1')

// Prints ['hello', 'world']
for await (const [key, value] of example.iterator()) {
  console.log([key, value])
}

Sublevels effectively separate a database into sections. Think SQL tables, but evented, ranged and real-time! Each sublevel is an AbstractLevel instance with its own keyspace, events and encodings. For example, it's possible to have one sublevel with 'buffer' keys and another with 'utf8' keys. The same goes for values. Like so:

db.sublevel('one', { valueEncoding: 'json' })
db.sublevel('two', { keyEncoding: 'buffer' })

An own keyspace means that sublevel.iterator() only includes entries of that sublevel, sublevel.clear() will only delete entries of that sublevel, and so forth. Range options get prefixed too.

Fully qualified keys (as seen from the parent database) take the form of prefix + key where prefix is separator + name + separator. If name is empty, the effective prefix is two separators. Sublevels can be nested: if db is itself a sublevel then the effective prefix is a combined prefix, e.g. '!one!!two!'. Note that a parent database will see its own keys as well as keys of any nested sublevels:

// Prints ['!example!hello', 'world'] and ['a', '1']
for await (const [key, value] of db.iterator()) {
  console.log([key, value])
}

📌 The key structure is equal to that of subleveldown, which offered sublevels before they were built into abstract-level. This means that an abstract-level sublevel can read sublevels previously created with (and populated by) subleveldown.

Internally, sublevels operate on keys that are either a string, Buffer or Uint8Array, depending on parent database and choice of encoding. Which is to say: binary keys are fully supported. The name must however always be a string and can only contain ASCII characters.

The optional options object may contain:

  • separator (string, default: '!'): Character for separating sublevel names from user keys and each other. Must sort before characters used in name. An error will be thrown if that's not the case.
  • keyEncoding (string or object, default: 'utf8'): encoding to use for keys.
  • valueEncoding (string or object, default: 'utf8'): encoding to use for values.

The keyEncoding and valueEncoding options are forwarded to the AbstractLevel constructor and work the same, as if a new, separate database was created. They default to 'utf8' regardless of the encodings configured on db. Other options are forwarded too but abstract-level (and therefore level) has no relevant options at the time of writing. For example, setting the createIfMissing option will have no effect. Why is that?

Like regular databases, sublevels open themselves but they do not affect the state of the parent database. This means a sublevel can be individually closed and (re)opened. If the sublevel is created while the parent database is opening, it will wait for that to finish. If the parent database is closed, then opening the sublevel will fail and subsequent operations on the sublevel will yield errors with code LEVEL_DATABASE_NOT_OPEN.

chainedBatch

chainedBatch.put(key, value[, options])

Queue a put operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY or LEVEL_INVALID_VALUE error if key or value is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • valueEncoding: custom value encoding for this operation, used to encode the value.
  • sublevel (sublevel instance): act as though the put operation is performed on the given sublevel, to similar effect as sublevel.batch().put(key, value). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key and value will be encoded by the sublevel (using the default encodings of the sublevel unless keyEncoding and / or valueEncoding are provided).
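
A sketch of atomically writing to a sublevel through a chained batch:

const people = db.sublevel('people', { valueEncoding: 'json' })

await db.batch()
  .put('123', { name: 'Alice' }, { sublevel: people })
  .write()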

chainedBatch.del(key[, options])

Queue a del operation on this batch, not committed until write() is called. This will throw a LEVEL_INVALID_KEY error if key is invalid. The optional options object may contain:

  • keyEncoding: custom key encoding for this operation, used to encode the key.
  • sublevel (sublevel instance): act as though the del operation is performed on the given sublevel, to similar effect as sublevel.batch().del(key). This allows atomically committing data to multiple sublevels. The key will be prefixed with the prefix of the sublevel, and the key will be encoded by the sublevel (using the default key encoding of the sublevel unless keyEncoding is provided).

chainedBatch.clear()

Clear all queued operations on this batch.

chainedBatch.write([options][, callback])

Commit the queued operations for this batch. All operations will be written atomically, that is, they will either all succeed or fail with no partial commits.

There are no options (that are common between classic-level and browser-level). Note that write() does not take encoding options. Those can only be set on put() and del().

The callback function will be called with no arguments if the batch was successful or with an error if it failed. If no callback is provided, a promise is returned.

After write() or close() has been called, no further operations are allowed.

chainedBatch.close([callback])

Free up underlying resources. This should be done even if the chained batch has zero queued operations. Automatically called by write() so normally not necessary to call, unless the intent is to discard a chained batch without committing it. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the batch is an idempotent operation, such that calling close() more than once is allowed and makes no difference.
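
For example, discarding a chained batch without committing it (a minimal sketch):

const batch = db.batch().put('a', '1')

// Changed our mind: free up resources without writing
await batch.close()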

chainedBatch.length

The number of queued operations on the current batch.

chainedBatch.db

A reference to the database that created this chained batch.

iterator

An iterator allows one to lazily read a range of entries stored in the database. The entries will be sorted by keys in lexicographic order (in other words: byte order) which in short means key 'a' comes before 'b' and key '10' comes before '2'.

A classic-level iterator reads from a snapshot of the database, created at the time db.iterator() was called. This means the iterator will not see the data of simultaneous write operations. A browser-level iterator does not offer such guarantees, as is indicated by db.supports.snapshots. That property will be true in Node.js and false in browsers.

Iterators can be consumed with for await...of and iterator.all(), or by manually calling iterator.next() or nextv() in succession. In the latter case, iterator.close() must always be called. In contrast, finishing, throwing, breaking or returning from a for await...of loop automatically calls iterator.close(), as does iterator.all().

An iterator reaches its natural end in the following situations:

  • The end of the database has been reached
  • The end of the range has been reached
  • The last iterator.seek() was out of range.

An iterator keeps track of calls that are in progress. It doesn't allow concurrent next(), nextv() or all() calls (including a combination thereof) and will throw an error with code LEVEL_ITERATOR_BUSY if that happens:

// Not awaited and no callback provided
iterator.next()

try {
  // Which means next() is still in progress here
  iterator.all()
} catch (err) {
  console.log(err.code) // 'LEVEL_ITERATOR_BUSY'
}

for await...of iterator

Yields entries, which are arrays containing a key and value. The type of key and value depends on the options passed to db.iterator().

try {
  for await (const [key, value] of db.iterator()) {
    console.log(key)
  }
} catch (err) {
  console.error(err)
}

iterator.next([callback])

Advance to the next entry and yield that entry. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null, a key and a value. The type of key and value depends on the options passed to db.iterator(). If the iterator has reached its natural end, both key and value will be undefined.

If no callback is provided, a promise is returned for either an entry array (containing a key and value) or undefined if the iterator reached its natural end.

Note: iterator.close() must always be called once there's no intention to call next() or nextv() again. Even if such calls yielded an error and even if the iterator reached its natural end. Not closing the iterator will result in memory leaks and may also affect performance of other operations if many iterators are unclosed and each is holding a snapshot of the database.
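
A sketch of manually consuming an iterator with next(), closing it in all cases:

const iterator = db.iterator()

try {
  let entry

  while ((entry = await iterator.next()) !== undefined) {
    const [key, value] = entry
    console.log(key, value)
  }
} finally {
  await iterator.close()
}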

iterator.nextv(size[, options][, callback])

Advance repeatedly and get at most size entries in a single call. Can be faster than repeated next() calls. The size argument must be an integer and has a soft minimum of 1. There are no options at the moment.

If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. The natural end of the iterator will be signaled by yielding an empty array. If no callback is provided, a promise is returned.

const iterator = db.iterator()

while (true) {
  const entries = await iterator.nextv(100)

  if (entries.length === 0) {
    break
  }

  for (const [key, value] of entries) {
    // ..
  }
}

await iterator.close()

iterator.all([options][, callback])

Advance repeatedly and get all (remaining) entries as an array, automatically closing the iterator. Assumes that those entries fit in memory. If that's not the case, instead use next(), nextv() or for await...of. There are no options at the moment. If an error occurs, the callback function will be called with an error. Otherwise, the callback receives null and an array of entries, where each entry is an array containing a key and value. If no callback is provided, a promise is returned.

const entries = await db.iterator({ limit: 100 }).all()

for (const [key, value] of entries) {
  // ..
}

iterator.seek(target[, options])

Seek to the key closest to target. Subsequent calls to iterator.next(), nextv() or all() (including implicit calls in a for await...of loop) will yield entries with keys equal to or larger than target, or equal to or smaller than target if the reverse option passed to db.iterator() was true.

The optional options object may contain:

  • keyEncoding: custom key encoding, used to encode the target. By default the keyEncoding option of the iterator is used or (if that wasn't set) the keyEncoding of the database.

If range options like gt were passed to db.iterator() and target does not fall within that range, the iterator will reach its natural end.
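
For example (a minimal sketch):

const iterator = db.iterator()

// Skip ahead: the next entry will have a key >= 'c', if any
iterator.seek('c')
const entry = await iterator.next()

await iterator.close()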

iterator.close([callback])

Free up underlying resources. The callback function will be called with no arguments. If no callback is provided, a promise is returned. Closing the iterator is an idempotent operation, such that calling close() more than once is allowed and makes no difference.

If a next(), nextv() or all() call is in progress, closing will wait for that to finish. After close() has been called, further calls to next(), nextv() or all() will yield an error with code LEVEL_ITERATOR_NOT_OPEN.

iterator.db

A reference to the database that created this iterator.

iterator.count

Read-only getter that indicates how many entries have been yielded so far (by any method) excluding calls that errored or yielded undefined.

iterator.limit

Read-only getter that reflects the limit that was set in options. Greater than or equal to zero. Equals Infinity if no limit, which allows for easy math:

const hasMore = iterator.count < iterator.limit
const remaining = iterator.limit - iterator.count

keyIterator

A key iterator has the same interface as iterator except that its methods yield keys instead of entries. For the keyIterator.next(callback) method, this means that the callback will receive two arguments (an error and key) instead of three. Usage is otherwise the same.

valueIterator

A value iterator has the same interface as iterator except that its methods yield values instead of entries. For the valueIterator.next(callback) method, this means that the callback will receive two arguments (an error and value) instead of three. Usage is otherwise the same.

sublevel

A sublevel is an instance of the AbstractSublevel class, which extends AbstractLevel and thus has the same API as documented above. Sublevels have a few additional properties.

sublevel.prefix

Prefix of the sublevel. A read-only string property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.prefix) // '!example!'
console.log(nested.prefix) // '!example!!nested!'

sublevel.db

Parent database. A read-only property.

const example = db.sublevel('example')
const nested = example.sublevel('nested')

console.log(example.db === db) // true
console.log(nested.db === db) // true

Contributing

Level/level is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT

abstract-level's People

Contributors

achingbrain, andrewrk, calvinmetcalf, deanlandolt, dependabot[bot], dominictarr, flatheadmill, greenkeeper[bot], greenkeeperio-bot, hden, huan, hugomrdias, imsingee, juliangruber, kesla, mafintosh, marcuslyons, max-mapper, mcollina, meirionhughes, nolanlawson, ralphtheninja, raynos, rvagg, sandersn, shama, tapppi, timoxley, vweevers, watson


abstract-level's Issues

clear without limit

rocksdb has a very nice DeleteRange API which is non-blocking and would be great for clear. However, there are two problems:

  • Open end is not supported. (Could be hackily emulated with '\fffffffffffffff')
  • Limit is not supported.

Any chance of making these optional in the level API?

Add test that `iterator.seek()` upholds snapshot guarantee

I found a bug in memdown / memory-level: its iterator._seek() reads from the current (aka latest) state of the database, rather than the state from when the iterator was created.

This should be covered by a test in the abstract test suite.

Keep track of iterator end in `_nextv()` fallback

Currently, nextv() can only signal the natural end of an iterator by yielding an empty array. In the fallback _nextv() implementation below, when _next() signals end by yielding an undefined key and/or value (line 147), _nextv() does not signal end (line 148, unless acc is empty). Consequently, a consumer will call nextv() again, which means we call _next() again (line 160) even though it has already signaled end.

Whether that's a problem depends on implementation. It is in many-level (the replacement for multileveldown that I'm working on atm, and can easily have its own fix) and could be in others too. It's not a problem for implementations written so far (memory-level, classic-level, browser-level) because those implement their own optimized _nextv().

Solution: set a new private kEnded property to true on line 148, and check that it's false before calling _next(). A seek() should reset it to false. Maybe use kEnded in more places, as a general protection.

_nextv (size, options, callback) {
  const acc = []

  const onnext = (err, key, value) => {
    if (err) {
      return callback(err)
    } else if (this[kLegacy] ? key === undefined && value === undefined : key === undefined) {
      return callback(null, acc)
    }

    acc.push(this[kLegacy] ? [key, value] : key)

    if (acc.length === size) {
      callback(null, acc)
    } else {
      this._next(onnext)
    }
  }

  this._next(onnext)
}

Setup SauceLabs

Same as abstract-leveldown. We have the necessary workflows and dependencies here, just need to check if I can reuse the account and then configure secrets.

Make iterator._seek async

I'm writing a module implementing abstract-level over cacache. One issue is that the api call in cacache to get index entries returns a promise, so it's currently not possible AFAICS to implement iterators given that seek is not async.

Is there a workaround using a DeferredIterator ?

relax test for simple iterator()

See https://github.com/Level/abstract-leveldown/blob/master/abstract/iterator-test.js#L125-L128 , also rewrite the test a bit, since it doesn't really make sense to do e.g.

if (key && value) {
  t.ok(key, 'key exists')
  t.ok(value, 'value exists')
}

Maybe instead use a counter: unless the counter is at the end, key and value should be set; once at the end, check for undefined on err, key and value.

Also, remove https://github.com/Level/abstract-leveldown/blob/master/abstract/ranges-test.js#L14-L48 since it's basically the same test and doesn't add any value

Fix Sauce Labs CI failure

See last run. Something broke in airtap (I think unrelated to changes here), causing a minipass stream to have a .buffer that is a string instead of an array:

/home/runner/work/abstract-level/abstract-level/node_modules/minipass/index.js:628
    this.buffer.length = 0
                       ^
TypeError: Cannot assign to read only property 'length' of string ''
    at Parser.destroy (/home/runner/work/abstract-level/abstract-level/node_modules/minipass/index.js:628:24)
    at Parser.oncomplete (/home/runner/work/abstract-level/abstract-level/node_modules/tap-completed/index.js:46:7)
    at Parser.emit (node:events:539:35)
    at Parser.emit (/home/runner/work/abstract-level/abstract-level/node_modules/minipass/index.js:483:23)
    at Parser.emitComplete (/home/runner/work/abstract-level/abstract-level/node_modules/tap-completed/node_modules/tap-parser/index.js:584:12)
    at Parser.end (/home/runner/work/abstract-level/abstract-level/node_modules/tap-completed/node_modules/tap-parser/index.js:555:10)
    at listOnTimeout (node:internal/timers:559:17)
    at processTimers (node:internal/timers:502:7)

Hopefully I can reproduce it on the tests of https://github.com/vweevers/tap-completed

Refactor `keys()` and `values()` internals

Follow-up for #12, to be tackled in a next major version. As written there:

Adds a lot of new code, with unfortunately some duplicate code because I wanted to avoid mixins and other forms of complexity, which means key and value iterators use classes that are separate from preexisting iterators. For example, a method like _seek() must now be implemented on three classes: AbstractIterator, AbstractKeyIterator and AbstractValueIterator. This (small?) problem extends to implementations and their subclasses, if they choose to override key and value iterators to improve performance.

To come up with a more DRY approach, it may help to first reduce the differences between the 3 iterators. Mainly: change the callback signature of AbstractIterator#next() from (err, key, value) to (err, entry). Perhaps (dare I say) remove callbacks altogether.

If we can then merge the 3 classes into one, or at least have a shared and reusable base class, then unit tests can probably be simplified too, not having to repeat like so:

for (const mode of ['iterator', 'keys', 'values']) {
  test(`for await...of ${mode}()`, async function (t) {
    t.plan(1)

    const it = db[mode]({ keyEncoding: 'utf8', valueEncoding: 'utf8' })
    const output = []

    for await (const item of it) {
      output.push(item)
    }

    t.same(output, input.map(({ key, value }) => {
      return mode === 'iterator' ? [key, value] : mode === 'keys' ? key : value
    }))
  })
}

Lastly (unrelated but I postponed it because of the next() callback signature and to avoid more breaking changes) perhaps rename iterator() to entries().

Confused about encoding

i'm resolving failing tests for cacache-level and have a question about encodings.

As I understand it, when I declare valueEncoding and keyEncoding in my constructor, abstract-level makes sure that the values passed to my implementation are properly converted in and out of storage.

According to test failures, it seems that this applies to set methods and not get. IOW, values sent to the database are encoded properly, but are not decoded for the consumer. Either that or the tests themselves do not account for encodings.

As a concrete example, test simple put() always fails, because I specify valueEncoding as buffer (which the underlying lib returns), but the test compares against a plain string value without decoding. In addition, the encoding options sent to _get are exactly as set in the constructor, so there's no work to be done as far as the implementer can tell.

Any thoughts ?

Release `abstract-level` and dependents

  • Level/community#109
  • abstract-level (replacement for abstract-leveldown and more)
    • 1.0.0
  • memory-level (replacement for memdown and level-mem)
    • Create repository
    • Replace links to level-mem and memdown in Level/community
    • 1.0.0
  • level-read-stream
  • classic-level (replacement for leveldown)
  • browser-level (replacement for level-js)
    • Create repository
    • Replace links to level-js in Level/community
    • 1.0.0
  • level
  • rocks-level (replacement for rocksdb and level-rocksdb)
    • Create repository
    • Diff against leveldown, copy classic-level, apply diff
    • 1.0.0
  • many-level (replacement for multileveldown)
  • rave-level (replacement for level-party, depends on level@8)

Side tasks:

clear with limit

I'm working on some bindings and run into some trouble with implementing clear with limit.

Currently I'm able to implement all write operations synchronously by just batching the mutations in memory and applying them during reads and then asynchronously flushing them to disk. This is great and allows me to guarantee synchronous and immediate read after write.

However, it's not possible to implement this for clear with limit as the limit parameter forces me to inspect the current state of the db, i.e. it's not possible to lazily apply the operation during reads.

My question here is: what is the use of db.clear with limit? Can we make it optional, so that bindings which do not implement it are still "valid" bindings? I guess this would mostly require some updates to the test suite so that bindings can opt out and still pass the tests?

Project Status?

This doesn't seem to be actively maintained anymore. Any plans to continue development?

Tracking issue: planned breaking changes in v2

In order of significance:

  • #50
    • Replace use of process.nextTick() with queueMicrotask()
  • #49
  • #45
    • Follow-up: #53
  • Remove batch, put and del events (postponed)
  • #51
  • #13
  • #54
  • Support hooks and events on db.clear(). I don't know how yet, but may require breaking changes. (postponed)
  • Remove _checkKey() and _checkValue(), an abstract-leveldown leftover that has questionable value because after these methods are called, we run keys and values through encodings. In level-transcoder we should throw if the json encoding returns undefined, as a replacement for undefined checks. Other encodings don't need such checks. (postponed)
  • #48
  • Throw error in default private API methods that must be overridden. (postponed; can be semver-minor)
  • #56 (not breaking per se)
  • #52

These won't necessarily land in a single major version. I wanted a place to list them and allow people to object.

Not possible to implement batch(ops) in terms of chained batch

I would expect to be able to implement batch(ops) in terms of a chained batch. However, that is not possible given the combination of abstract chained batch and the tests, i.e.

  _batch (operations, options, callback) {
    const batch = this.batch()
    for (const op of operations) {
      if (op.type === 'del') {
        if ('key' in op) {
          batch.del(op.key, op)
        }
      } else if (op.type === 'put') {
        if ('key' in op && 'value' in op) {
          batch.put(op.key, op.value, op)
        }
      }
    }
    batch.write(options, callback)
  }

Will fail two tests:

not ok 688 should be deeply equivalent
    ---
      operator: deepEqual
      expected: |-
        [ { type: 'put', key: 456, value: 99, custom: 123 } ]
      actual: |-
        [ { type: 'put', key: '456', value: '99', custom: 123, keyEncoding: 'utf8', valueEncoding: 'utf8' } ]
      at: RocksLevel.<anonymous> (/Users/ronagy/GitHub/rocksdb/node_modules/abstract-level/test/batch-test.js:299:9)
      stack: |-
        Error: should be deeply equivalent
            at Test.assert [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:314:54)
            at Test.bound [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at Test.tapeDeepEqual (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:555:10)
            at Test.bound [as same] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at RocksLevel.<anonymous> (/Users/ronagy/GitHub/rocksdb/node_modules/abstract-level/test/batch-test.js:299:9)
            at RocksLevel.emit (node:events:527:28)
            at /Users/ronagy/GitHub/rocksdb/node_modules/abstract-level/abstract-chained-batch.js:134:27
            at processTicksAndRejections (node:internal/process/task_queues:82:21)
    ...
  not ok 690 plan != count
    ---
      operator: fail
      expected: 2
      actual:   3
      at: RocksLevel.<anonymous> (/Users/ronagy/GitHub/rocksdb/node_modules/abstract-level/test/batch-test.js:299:9)
      stack: |-
        Error: plan != count
            at Test.assert [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:314:54)
            at Test.bound [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at Test.fail (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:408:10)
            at Test.bound [as fail] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at Test.assert [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:400:14)
            at Test.bound [as _assert] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at Test.tapeDeepEqual (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:555:10)
            at Test.bound [as same] (/Users/ronagy/GitHub/rocksdb/node_modules/tape/lib/test.js:99:32)
            at RocksLevel.<anonymous> (/Users/ronagy/GitHub/rocksdb/node_modules/abstract-level/test/batch-test.js:299:9)
            at RocksLevel.emit (node:events:527:28)
    ...

allow for non-ASCII names when creating an `AbstractSublevel`

would it be possible to add an option when creating an AbstractSublevel (or just remove the requirement altogether) to allow for names with non-ASCII characters?

it seems a bit odd that I could manually create keys for put/get/del/etc. that contain non-ASCII characters, but I can't use sublevel (which is a far nicer experience due to the behavior of keys/iterator/etc.)

for example,

level.get("!foo!bar")

vs

let sublevel = level.sublevel("foo")
sublevel.get("bar")

Canary-test v2

On:

  • memory-level
  • classic-level
  • browser-level (without writing code; just give it a quick look because IndexedDB doesn't support promises)

Events on root propagate pre-encoded value

The events propagated on sub-level instances do not have their value encoded, unlike events triggered by sublevel pushes.
Is this correct behavior?

const db = new Level('db')
await db.open()
db.on('put', (key, value) => {
  // { key: 'hello', value: { world: 'foo' } }
  // { key: '!sub!hello', value: '[object Object]' }
  console.log({ key, value })
})
const sub = db.sublevel('sub')
await db.put('hello', { world: 'foo' })
console.log(await db.get('hello'))
await sub.put('hello', { world: 'foo' })
console.log(await sub.get('hello'))

How to determine the prefix in a sublevel

I am working on a redis level implementation where I need to know which part of the key parameter passed in to _put() is the sublevel prefix and what is the key without the prefix. So for this example:

const db = Mylevel()
const sub1 = db.sublevel('sub1')
const sub2 = sub1.sublevel('sub2')

await sub2.put('foo', 'bar')

So with the default separator, my _put implementation would get a key like !sub1!!sub2!foo. How can my _put implementation know that foo is the key in the sublevel and the prefix is !sub1!!sub2! (and is my reasoning so far correct)?

I can think of two ways of dealing with this. Override AbstractSublevel and pass the separator in where needed and then use that to extract the prefix from the key. Alternatively, again overriding AbstractSublevel, keep track of the name of each sublevel and pass it down as part of the options.

It seems like neither of these is particularly clean so wondering if there is any better way to do this.

Cannot use class-based custom encoder

I'm not sure if this would be a bug as it's a new code base after all; but it's certainly a deviation from the leveldown way, where this worked.

class CustomEncoding {
  encode (arg) { return JSON.stringify(arg) }
  decode (arg) { return JSON.parse(arg) };
  constructor () {
    this.format = 'utf8'
    this.type = 'custom'
  }
}

const customEncoding = new CustomEncoding()

const db = testCommon.factory({
  keyEncoding: customEncoding,
  valueEncoding: customEncoding
})

TypeError: The 'encode' property must be a function
at new Encoding (D:\Code\abstract-level\node_modules\level-transcoder\lib\encoding.js:28:13)
at new UTF8Format (D:\Code\abstract-level\node_modules\level-transcoder\lib\formats.js:75:5)
at from (D:\Code\abstract-level\node_modules\level-transcoder\index.js:111:25)
at Transcoder.encoding (D:\Code\abstract-level\node_modules\level-transcoder\index.js:66:20)
at new AbstractLevel (D:\Code\abstract-level\abstract-level.js:4:432)
at new MinimalLevel (D:\Code\abstract-level\test\util.js:123:5)
at Object.factory (D:\Code\abstract-level\test\self.js:1078:12)
at run (D:\Code\abstract-level\test\encoding-custom-class-test.js:72:27)
at Test. (D:\Code\abstract-level\test\encoding-custom-class-test.js:7:7)

Add option to disable `prewrite` hook

Context

I'm working on a rewrite of level-ttl, not because I need it, but to make it use hooks and find out what gaps exist in that API.

Problem

If a plugin like level-ttl uses the prewrite hook but also has a background job that writes to the same db, it will trigger its own hook function (as well as other hook functions). E.g.:

module.exports = function ttl (db) {
  // Imagine a sublevel to which we write expiry metadata
  const sublevel = ...

  db.hooks.prewrite.add(function (op, batch) {
    if (op.type === 'put') {
      const exp = Date.now() + op.ttl
      batch.add({ type: 'put', sublevel, key: op.key, value: exp })
    } else {
      batch.add({ type: 'del', sublevel, key: op.key })
    }
  })

  // Background job
  setInterval(function sweep () {
    // Imagine we're deleting an expired key
    db.del('foo').catch(..)
  }, 60e3)
}

Solution

db.del('foo', { prewrite: false })

As well as:

db.put('foo', 'bar', { prewrite: false })
db.batch([], { prewrite: false })
db.batch().put('foo', 'bar', { prewrite: false })
db.batch().del('foo', { prewrite: false })

Dependency funding for Abstract-level

First off: Sorry for raising an issue! It's been hard to find a contact method

Abstract-Level has received dependency funding from Optimism, an Ethereum scaling solution. As part of Optimism's Retroactive Public Goods Funding program, dependencies of the Optimism Monorepo have been awarded OP tokens. 1889.9993 OP have been awarded to you.

To claim the OP you need to complete a KYC process with the Optimism Foundation.
Please provide an email address to me, so we can forward you the details. Please reply in this issue or send an email to [email protected]

Thank you ❤️

Support a db.has(key) method for testing whether a key exists in the database

For context, I am using https://github.com/Level/classic-level/ but I decided to create this issue here because it should have value for all implementations.

Currently the best available method to test whether a record exists within the database for a particular key is to read the full value of the record using await db.get(key). You can instead iterate over db.keys(), but this is much slower.

It would be nice to have a more lightweight (and potentially even faster) method to assert the existence of a specific key like await db.has(key) which would return a boolean for whether a record exists with that key.
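
A workaround that is possible today, sketched on the documented getMany() behavior of yielding undefined for missing keys. Note that it still fetches the value, which is exactly the cost a native has() could avoid:

async function has (db, key) {
  const [value] = await db.getMany([key])
  return value !== undefined
}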

Benchmark against level

Compare:

  • level@7 against classic-level. To test the native part (now using encoding options instead of *asBuffer). Not expecting a change here.
  • level-mem + subleveldown against memory-level + sublevels, with json encoding. To test the JS part. Expecting a slight improvement here, though it might only surface in real-world apps (with GC pressure) rather than a synthetic benchmark.
  • Batching on level-mem against memory-level, as it removes two (or three, with sublevels) iterations of the batch array.

A quick benchmark is enough (of reads and writes). It's just to check that performance is equal or better.

Allow batches to write to a nondescendant sublevel

This example from the v2 README doesn't work:

const charwise = require('charwise-compact')
const books = db.sublevel('books', { valueEncoding: 'json' })
const index = db.sublevel('authors', { keyEncoding: charwise })

books.hooks.prewrite.add(function (op, batch) {
  if (op.type === 'put') {
    batch.add({
      type: 'put',
      key: [op.value.author, op.key],
      value: '',
      sublevel: index
    })
  }
})

// Will atomically commit it to the author index as well
await books.put('12', { title: 'Siddhartha', author: 'Hesse' })

I added a constraint to db.batch() and to the prewrite hook that if a sublevel option is provided, that sublevel must be a descendant of the db. That simplified internals, but the above use case violates the constraint. It's a realistic use case, so I want to remove the constraint (which doesn't exist in v1).

Batch logic should then be, if sublevel is a descendant then prefix the key now, else pass the sublevel option "down" to the private API (aka "up" to the parent database) and skip events. E.g. in the above example, the 'write' event of books should not include the index op.

getMany with partial result?

I believe currently the getMany API expects an all or nothing result. However, I think it might make sense to allow partial results based on e.g. a high water mark or deadline?

Decide on naming of modules

With abstract-level, there is no "down" or "up" anymore. For example, memdown does not have to be wrapped with levelup. This means level-mem could just do module.exports = require('memdown') to export (largely) the same functionality as before. Which begs the question, should we:

  1. Keep memdown (as an abstract-level database); deprecate level-mem (as a levelup db)
  2. Deprecate memdown (as an abstract-leveldown database); move code to level-mem (as an abstract-level db)
  3. Deprecate both and create a new memory-level package (as well as leveldb-level, indexeddb-level, etc)

@Level/core thoughts?

Review TypeScript declaration files

Disclaimer: I'm not committing to maintaining typings here (rather than DT) and it will not be a blocker for release. I just found them useful when implementing level-transcoder and also wanted to learn something new.

Start here. The main export is AbstractLevel<TFormat, KDefault = string, VDefault = string> with three generic type parameters:

  1. TFormat: the type used by the implementation to store data; see level-transcoder for background
  2. KDefault: the type of keys, if not overridden on individual methods
  3. VDefault: the type of values, if not overridden.

Downstream, for example in memory-level (the replacement for memdown), the types will look like this (simplified):

import { AbstractLevel, AbstractDatabaseOptions } from 'abstract-level'

export class MemoryLevel<KDefault = string, VDefault = string>
  extends AbstractLevel<Buffer | Uint8Array | string, KDefault, VDefault> {
  constructor (
    options?: AbstractDatabaseOptions<KDefault, VDefault> | undefined
  )
}

For a consumer of such an implementation, usage in TypeScript then looks like this:

// Specify types of keys and values (any, in the case of json).
// The generic type parameters default to level<string, string>.
const db = new MemoryLevel<string, any>({ valueEncoding: 'json' })

// All relevant methods then use those types
await db.put('a', { x: 123 })

// Specify different types when overriding encoding per operation
await db.get<string, string>('a', { valueEncoding: 'utf8' })

// Though in some cases TypeScript can infer them
await db.get('a', { valueEncoding: db.valueEncoding('utf8') })

Which I believe to be a fair balance between DX and maintainable (simple) typings. I will push new code and typings to memory-level later; might make it easier to review if you have an actual database to play around with.

Benchmarks?

Would be nice if abstract-level could also include benchmarks so we can easily compare our different backend implementations.

range queries inclusive/exclusive

Currently the range options have some overlap and I'm wondering if we could remove the requirement for backends to implement both? e.g. rocksdb (and probably others) have support for inclusive lower bound (gte) and exclusive upper bound (lt).

I think we could e.g. emulate lte as lt + '\u0000' and gt as gt + '\uFFFF'. I'm not 100% sure. Thoughts?

flush() instead of sync=true

I believe an API with a flush() and flushSync() method would be more efficient than having a sync parameter to each call and probably provide the same functionality.

Expose name and/or local prefix of sublevel

Context

I'm working on a rewrite of level-ttl, to make it use hooks and find out what gaps exist in that API. Say a plugin like that wants to add methods to a db and also to (deeply) nested sublevels that the user might create. For optimal performance, the plugin wishes to bypass intermediate sublevels on its own operations. E.g.:

module.exports = function ttlPlugin (db) {
  // Skipping encodings etc for brevity
  const main = db.sublevel('main')
  const expiryIndex = db.sublevel('expiryIndex')

  db.hooks.prewrite.add(function (op, batch) {
    const expiry = Date.now() + op.ttl
    batch.add({ type: 'put', sublevel: expiryIndex, key: op.key, value: expiry })
  })

  const addMethods = (sub, path) => {
    // Bypass intermediate sublevels
    const proxy = db.sublevel(['expiryIndex', ...path])

    sub.getTTL = async (key) => {
      const expiry = await proxy.get(key)
      return expiry - Date.now()
    }

    // Offer the same API on nested sublevels
    sub.hooks.newsub.add((child) => {
      // Here we need the name or prefix of the child sublevel
      addMethods(child, [...path, child.xxx])
    })
  }

  addMethods(main, ['main'])
  return main
}

Usage:

const db = new Level()
const cache = ttlPlugin(db)
const html = cache.sublevel(['templates', 'html'])
await html.put('my-template', '<html></html>', { ttl: 100 })
const ttl = await html.getTTL('my-template') // 100 (or less)

Problem

There's no suitable value for child.xxx atm. We don't expose the name(s) of a sublevel (e.g. 'templates' and 'html') and even if we did, the sublevel might be using custom separators. We do expose the prefix of a sublevel (e.g. '!templates!!html!'). But passing this along to .sublevel() would only work one level deep, because the AbstractSublevel constructor trims separators from e.g. .sublevel('!templates!'), resulting in a valid name, but .sublevel('!templates!!html!') results in a name that's 'templates!!html', which is then rejected by:

if (!names.every(name => textEncoder.encode(name).every(x => x > reserved && x < 127))) {
  throw new ModuleError(`Sublevel name must use bytes > ${reserved} < ${127}`, {
    code: 'LEVEL_INVALID_PREFIX'
  })
}

Decide whether deferred open should (still) be patch-safe

// NOTE: adapted from levelup
// It should be safe to monkey-patch database methods.
// TODO: decide whether to fix this (see actual result below)
test.skip('deferred open is patch-safe', async function (t) {
  const db = testCommon.factory()
  t.is(db.status, 'opening')

  const calls = []
  for (const method of ['put', 'del', 'batch']) {
    const original = db[method]
    db[method] = function () {
      calls.push(method)
      return original.apply(this, arguments)
    }
  }

  await Promise.all([
    db.put('a', 'value'),
    db.del('b'),
    db.batch([{ type: 'del', key: 'c' }])
  ])

  t.is(db.status, 'open')

  // Actual: ['put', 'del', 'batch', 'put', 'del', 'batch']
  t.same(calls, ['put', 'del', 'batch'])

  return db.close()
})

chainedBatch._put/_del

As an implementer, how should I handle errors on _put/_del in the underlying batch implementation? Should I just throw, or save the error, wait until the _write call and propagate it through the callback?

Ditch EventEmitter

Hello, I imported package browser-level
and received the following error

✘ [ERROR] Could not resolve "events"

    node_modules/abstract-level/abstract-level.js:5:33:
      5 │ const { EventEmitter } = require('events')
        ╵                                  ~~~~~~~~

  The package "events" wasn't found on the file system but is built into node. Are you trying to bundle for node? You can use "--platform=node" to do that, which will remove this error.

I feel having dependencies for node primitives in a meta-package such as abstract-level is overkill.

Would it be possible to move away from EventEmitters in favour for async generators or at worst a single function subscription pattern?

const unsubscribe = subscribe(event => ...)

I still dream nightmares about maintaining references to unremoved listener functions.

Wrong Type declarations for .all() and .nextv() methods AbstractKeyIterator and AbstractValueIterator

According to the TypeScript declarations, the Promise overloads of .all() and .nextv() of AbstractKeyIterator return Promise<[K]>. However, they should return a promise of an array of type K (so Promise<K[]> or Promise<Array<K>>). Similarly, the callback versions are declared to take NodeCallback<[K]> where it should be NodeCallback<Array<K>>.
The same situation exists for AbstractValueIterator, with [V] where it should be Array<V>.
