
levelup

📌 This module will soon be deprecated, because it is superseded by abstract-level.



Introduction

Fast and simple storage. A Node.js wrapper for abstract-leveldown compliant stores, which follow the characteristics of LevelDB.

LevelDB is a simple key-value store built by Google. It's used in Google Chrome and many other products. LevelDB supports arbitrary byte arrays as both keys and values, singular get, put and delete operations, batched put and delete, bi-directional iterators and simple compression using the very fast Snappy algorithm.

LevelDB stores entries sorted lexicographically by keys. This makes the streaming interface of levelup - which exposes LevelDB iterators as Readable Streams - a very powerful query mechanism.
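Lexicographic ordering can be surprising for numeric-looking keys, so it is worth seeing it in action. A quick plain-JavaScript illustration (not levelup code; JavaScript's default string sort is also lexicographic):

```javascript
// Lexicographic (byte-wise) ordering, as LevelDB applies to keys.
// Note this differs from numeric order: 'user:10' sorts before 'user:2'.
const keys = ['user:2', 'user:10', 'user:1', 'config']
const sorted = keys.slice().sort() // default string sort is lexicographic
console.log(sorted) // ['config', 'user:1', 'user:10', 'user:2']
```

This is why keys are often zero-padded (e.g. 'user:02') when numeric range queries are needed.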

The most common store is leveldown which provides a pure C++ binding to LevelDB. Many alternative stores are available such as level.js in the browser or memdown for an in-memory store. They typically support strings and Buffers for both keys and values. For a richer set of data types you can wrap the store with encoding-down.

The level package is the recommended way to get started. It conveniently bundles levelup, leveldown and encoding-down. Its main export is levelup - i.e. you can do var db = require('level').

Supported Platforms

We aim to support Active LTS and Current Node.js releases as well as browsers. For support of the underlying store, please see the respective documentation.


Usage

If you are upgrading: please see UPGRADING.md.

First you need to install levelup! No stores are included so you must also install leveldown (for example).

$ npm install levelup leveldown

All operations are asynchronous. If you do not provide a callback, a Promise is returned.

var levelup = require('levelup')
var leveldown = require('leveldown')

// 1) Create our store
var db = levelup(leveldown('./mydb'))

// 2) Put a key & value
db.put('name', 'levelup', function (err) {
  if (err) return console.log('Ooops!', err) // some kind of I/O error

  // 3) Fetch by key
  db.get('name', function (err, value) {
    if (err) return console.log('Ooops!', err) // likely the key was not found

    // Ta da!
    console.log('name=' + value)
  })
})

API

levelup(db[, options[, callback]])

The main entry point for creating a new levelup instance.

  • db must be an abstract-leveldown compliant store.
  • options is passed on to the underlying store when opened and is specific to the type of store being used.

Calling levelup(db) will also open the underlying store. This is an asynchronous operation which will trigger your callback if you provide one. The callback should take the form function (err, db) {} where db is the levelup instance. If you don't provide a callback, any read & write operations are simply queued internally until the store is fully opened, unless it fails to open, in which case an error event will be emitted.

This leads to two alternative ways of managing a levelup instance:

levelup(leveldown(location), options, function (err, db) {
  if (err) throw err

  db.get('foo', function (err, value) {
    if (err) return console.log('foo does not exist')
    console.log('got foo =', value)
  })
})

Versus the equivalent:

// Will throw if an error occurs
var db = levelup(leveldown(location), options)

db.get('foo', function (err, value) {
  if (err) return console.log('foo does not exist')
  console.log('got foo =', value)
})

db.supports

A read-only manifest. Might be used like so:

if (!db.supports.permanence) {
  throw new Error('Persistent storage is required')
}

if (db.supports.bufferKeys && db.supports.promises) {
  await db.put(Buffer.from('key'), 'value')
}

db.open([options][, callback])

Opens the underlying store. In general you shouldn't need to call this method directly as it's automatically called by levelup(). However, it is possible to reopen the store after it has been closed with close().

If no callback is passed, a promise is returned.

db.close([callback])

close() closes the underlying store. The callback will receive any error encountered during closing as the first argument.

You should always clean up your levelup instance by calling close() when you no longer need it to free up resources. A store cannot be opened by multiple instances of levelup simultaneously.

If no callback is passed, a promise is returned.

db.put(key, value[, options][, callback])

put() is the primary method for inserting data into the store. Both key and value can be of any type as far as levelup is concerned.

options is passed on to the underlying store.

If no callback is passed, a promise is returned.

db.get(key[, options][, callback])

Get a value from the store by key. The key can be of any type. If it doesn't exist in the store then the callback or promise will receive an error. A not-found err object will be of type 'NotFoundError', so you can check err.type === 'NotFoundError', or you can perform a truthy test on the property err.notFound.

db.get('foo', function (err, value) {
  if (err) {
    if (err.notFound) {
      // handle a 'NotFoundError' here
      return
    }
    // I/O or other error, pass it up the callback chain
    return callback(err)
  }

  // .. handle `value` here
})

The optional options object is passed on to the underlying store.

If no callback is passed, a promise is returned.

db.getMany(keys[, options][, callback])

Get multiple values from the store by an array of keys. The optional options object is passed on to the underlying store.

The callback function will be called with an Error if the operation failed for any reason. If successful the first argument will be null and the second argument will be an array of values with the same order as keys. If a key was not found, the relevant value will be undefined.

If no callback is provided, a promise is returned.
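As a rough illustration of the ordering and not-found semantics described above, here is a plain-JavaScript sketch using a Map as a stand-in store; this getMany is a hypothetical mock, not the real implementation:

```javascript
// Minimal sketch of getMany semantics over a Map stand-in store.
// Missing keys yield undefined, and the result order matches the input.
const store = new Map([['a', 1], ['b', 2]])

async function getMany (keys) {
  // Values come back in the same order as the requested keys.
  return keys.map((key) => store.get(key)) // undefined for absent keys
}

getMany(['a', 'missing', 'b']).then((values) => {
  console.log(values) // [1, undefined, 2]
})
```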

db.del(key[, options][, callback])

del() is the primary method for removing data from the store.

db.del('foo', function (err) {
  if (err) {
    // handle I/O or other error
  }
})

options is passed on to the underlying store.

If no callback is passed, a promise is returned.

db.batch(array[, options][, callback]) (array form)

batch() can be used for very fast bulk-write operations (both put and delete). The array argument should contain a list of operations to be executed sequentially, although as a whole they are performed as an atomic operation inside the underlying store.

Each operation is contained in an object having the following properties: type, key, value, where the type is either 'put' or 'del'. In the case of 'del' the value property is ignored. Any entry with a key of null or undefined will cause an error to be returned on the callback, and any type: 'put' entry with a value of null or undefined will return an error.

const ops = [
  { type: 'del', key: 'father' },
  { type: 'put', key: 'name', value: 'Yuri Irsenovich Kim' },
  { type: 'put', key: 'dob', value: '16 February 1941' },
  { type: 'put', key: 'spouse', value: 'Kim Young-sook' },
  { type: 'put', key: 'occupation', value: 'Clown' }
]

db.batch(ops, function (err) {
  if (err) return console.log('Ooops!', err)
  console.log('Great success dear leader!')
})

options is passed on to the underlying store.

If no callback is passed, a promise is returned.

db.batch() (chained form)

batch(), when called with no arguments, will return a Batch object which can be used to build, and eventually commit, an atomic batch operation. Depending on how it's used, it is possible to obtain greater performance with the chained form of batch() than with the array form.

db.batch()
  .del('father')
  .put('name', 'Yuri Irsenovich Kim')
  .put('dob', '16 February 1941')
  .put('spouse', 'Kim Young-sook')
  .put('occupation', 'Clown')
  .write(function (err) {
    if (err) throw err
    console.log('Done!')
  })

batch.put(key, value[, options])

Queue a put operation on the current batch, not committed until a write() is called on the batch. The options argument, if provided, must be an object and is passed on to the underlying store.

This method may throw a WriteError if there is a problem with your put (such as the value being null or undefined).
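A sketch of that validation rule, with a plain Error standing in for levelup's WriteError (illustrative only, not levelup's internal code):

```javascript
// Sketch: a put with a null or undefined key or value is rejected
// before the batch is committed. levelup throws a WriteError; a plain
// Error stands in for it here.
function checkPut (key, value) {
  if (key === null || key === undefined) {
    throw new Error('WriteError: key cannot be null or undefined')
  }
  if (value === null || value === undefined) {
    throw new Error('WriteError: value cannot be null or undefined')
  }
}

checkPut('name', 'levelup') // ok, no throw
try {
  checkPut('name', undefined)
} catch (err) {
  console.log(err.message) // WriteError: value cannot be null or undefined
}
```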

batch.del(key[, options])

Queue a del operation on the current batch, not committed until a write() is called on the batch. The options argument, if provided, must be an object and is passed on to the underlying store.

This method may throw a WriteError if there is a problem with your delete.

batch.clear()

Clear all queued operations on the current batch; any previously queued operations will be discarded.

batch.length

The number of queued operations on the current batch.

batch.write([options][, callback])

Commit the queued operations for this batch. All operations not cleared will be written to the underlying store atomically, that is, they will either all succeed or fail with no partial commits.

The optional options object is passed to the .write() operation of the underlying batch object.

If no callback is passed, a promise is returned.

db.status

A read-only string that is one of:

  • new - newly created, not opened or closed
  • opening - waiting for the underlying store to be opened
  • open - successfully opened the store, available for use
  • closing - waiting for the store to be closed
  • closed - store has been successfully closed

db.isOperational()

Returns true if the store accepts operations, which in the case of levelup means that status is either opening or open, because it opens itself and queues up operations until opened.
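The rule can be sketched in plain JavaScript (the object literals below stand in for a levelup instance; this is the described behaviour, not levelup's source):

```javascript
// isOperational() as described: true while opening (operations are
// queued) and while open, false otherwise.
function isOperational (db) {
  return db.status === 'opening' || db.status === 'open'
}

console.log(isOperational({ status: 'opening' })) // true
console.log(isOperational({ status: 'open' }))    // true
console.log(isOperational({ status: 'closed' }))  // false
```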

db.createReadStream([options])

Returns a Readable Stream of key-value pairs. A pair is an object with key and value properties. By default it will stream all entries in the underlying store from start to end. Use the options described below to control the range, direction and results.

db.createReadStream()
  .on('data', function (data) {
    console.log(data.key, '=', data.value)
  })
  .on('error', function (err) {
    console.log('Oh my!', err)
  })
  .on('close', function () {
    console.log('Stream closed')
  })
  .on('end', function () {
    console.log('Stream ended')
  })

You can supply an options object as the first parameter to createReadStream() with the following properties:

  • gt (greater than), gte (greater than or equal) define the lower bound of the range to be streamed. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries streamed will be the same.

  • lt (less than), lte (less than or equal) define the higher bound of the range to be streamed. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries streamed will be the same.

  • reverse (boolean, default: false): stream entries in reverse order. Beware that due to the way that stores like LevelDB work, a reverse seek can be slower than a forward seek.

  • limit (number, default: -1): limit the number of entries collected by this stream. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be returned instead of the lowest keys.

  • keys (boolean, default: true): whether the results should contain keys. If set to true and values set to false then results will simply be keys, rather than objects with a key property. Used internally by the createKeyStream() method.

  • values (boolean, default: true): whether the results should contain values. If set to true and keys set to false then results will simply be values, rather than objects with a value property. Used internally by the createValueStream() method.
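To make the range semantics concrete, here is an illustrative plain-JavaScript sketch of how gt/gte/lt/lte, reverse and limit select keys from a sorted list. This is a model of the documented behaviour, not how levelup implements it internally:

```javascript
// Model of range selection over lexicographically sorted keys.
// reverse flips the order of the same selection; limit caps the count
// after any reversal, so reverse + limit yields the highest keys.
function range (sortedKeys, { gt, gte, lt, lte, reverse = false, limit = -1 } = {}) {
  let keys = sortedKeys.filter((k) =>
    (gt === undefined || k > gt) &&
    (gte === undefined || k >= gte) &&
    (lt === undefined || k < lt) &&
    (lte === undefined || k <= lte)
  )
  if (reverse) keys = keys.reverse()
  return limit === -1 ? keys : keys.slice(0, limit)
}

const keys = ['a', 'b', 'c', 'd', 'e']
console.log(range(keys, { gte: 'b', lt: 'e' }))       // ['b', 'c', 'd']
console.log(range(keys, { reverse: true, limit: 2 })) // ['e', 'd']
```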

db.createKeyStream([options])

Returns a Readable Stream of keys rather than key-value pairs. Use the same options as described for createReadStream() to control the range and direction.

You can also obtain this stream by passing an options object to createReadStream() with keys set to true and values set to false. The result is equivalent; both streams operate in object mode.

db.createKeyStream()
  .on('data', function (data) {
    console.log('key=', data)
  })

// same as:
db.createReadStream({ keys: true, values: false })
  .on('data', function (data) {
    console.log('key=', data)
  })

db.createValueStream([options])

Returns a Readable Stream of values rather than key-value pairs. Use the same options as described for createReadStream() to control the range and direction.

You can also obtain this stream by passing an options object to createReadStream() with values set to true and keys set to false. The result is equivalent; both streams operate in object mode.

db.createValueStream()
  .on('data', function (data) {
    console.log('value=', data)
  })

// same as:
db.createReadStream({ keys: false, values: true })
  .on('data', function (data) {
    console.log('value=', data)
  })

db.iterator([options])

Returns an abstract-leveldown iterator, which is what powers the readable streams above. Options are the same as the range options of createReadStream() and are passed to the underlying store.

These iterators support for await...of:

for await (const [key, value] of db.iterator()) {
  console.log(value)
}

db.clear([options][, callback])

Delete all entries or a range. Not guaranteed to be atomic. Accepts the following range options (with the same rules as on iterators):

  • gt (greater than), gte (greater than or equal) define the lower bound of the range to be deleted. Only entries where the key is greater than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
  • lt (less than), lte (less than or equal) define the higher bound of the range to be deleted. Only entries where the key is less than (or equal to) this option will be included in the range. When reverse=true the order will be reversed, but the entries deleted will be the same.
  • reverse (boolean, default: false): delete entries in reverse order. Only effective in combination with limit, to remove the last N records.
  • limit (number, default: -1): limit the number of entries to be deleted. This number represents a maximum number of entries and may not be reached if you get to the end of the range first. A value of -1 means there is no limit. When reverse=true the entries with the highest keys will be deleted instead of the lowest keys.

If no options are provided, all entries will be deleted. The callback function will be called with no arguments if the operation was successful, or with a WriteError if it failed for any reason.

If no callback is passed, a promise is returned.

What happened to db.createWriteStream?

db.createWriteStream() has been removed in order to provide a smaller and more maintainable core. It primarily existed to create symmetry with db.createReadStream(), but after much discussion removing it was deemed the best course of action.

The main driver for this was performance. While db.createReadStream() performs well under most use cases, the performance of db.createWriteStream() was highly dependent on the application's keys and values. Thus we can't provide a standard implementation, and instead encourage write-stream implementations to be created in userland to solve the broad spectrum of use cases.

Check out the implementations that the community has produced here.

Promise Support

Each function accepting a callback returns a promise if the callback is omitted. The only exception is the levelup constructor itself, which if no callback is passed will lazily open the underlying store in the background.

Example:

const db = levelup(leveldown('./my-db'))
await db.put('foo', 'bar')
console.log(await db.get('foo'))

Events

levelup is an EventEmitter and emits the following events.

  • put - Key has been updated. Listener arguments: key, value (any)
  • del - Key has been deleted. Listener arguments: key (any)
  • batch - Batch has executed. Listener arguments: operations (array)
  • clear - Entries were deleted. Listener arguments: options (object)
  • opening - Underlying store is opening
  • open - Store has opened
  • ready - Alias of open
  • closing - Store is closing
  • closed - Store has closed
  • error - An error occurred. Listener arguments: error (Error)

For example you can do:

db.on('put', function (key, value) {
  console.log('inserted', { key, value })
})

Multi-process Access

Stores like LevelDB are thread-safe, but they are not suitable for access from multiple processes. You should only ever have a store open from a single Node.js process. Node.js clusters are made up of multiple processes, so a levelup instance cannot be shared between them either.

See Level/awesome for modules like multileveldown that may help if you require a single store to be shared across processes.

Contributing

Level/levelup is an OPEN Open Source Project. This means that:

Individuals making significant and valuable contributions are given commit-access to the project to contribute as they see fit. This project is more like an open wiki than a standard guarded open source project.

See the Contribution Guide for more details.

Big Thanks

Cross-browser Testing Platform and Open Source ♥ Provided by Sauce Labs.


Donate

Support us with a monthly donation on Open Collective and help us continue our work.

License

MIT

levelup's People

Contributors

achingbrain, adityapurwa, bewest, dependabot-preview[bot], dependabot[bot], dominictarr, farskipper, greenkeeper[bot], heapwolf, huan, jcrugzz, juliangruber, kemitchell, kesla, mcavage, mcollina, meirionhughes, morolt, nolanlawson, pascaltemel, pgte, prayagverma, raboof, ralphtheninja, raynos, richardlitt, rvagg, sandfox, sorribas, vweevers


levelup's Issues

hooks

Hooks on put and del that allow you to turn put and del into a batch.

If your usage of levelup is complicated you may want to intercept del and put and also store a marker in the database saying "some kind of job needs to run"

@dominictarr needs this for map-reduce.

The main reason for this is that you can't use before:put or after:put, since if the process crashes then either you have the data and no marker, or the marker and no data, in the database.

You want to convert the put or del into an atomic batch operation.
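The idea can be sketched as follows; putWithMarker and the '~job~' key prefix are hypothetical names for this illustration, not part of levelup:

```javascript
// Instead of a bare put, write the value and a "job needs to run"
// marker in one atomic batch, so a crash can never leave the data
// without the marker or the marker without the data.
function putWithMarker (db, key, value, callback) {
  db.batch([
    { type: 'put', key: key, value: value },
    { type: 'put', key: '~job~' + key, value: 'pending' } // marker entry
  ], callback)
}

// Demonstration against a mock object exposing a levelup-style batch():
const mock = {
  ops: [],
  batch (ops, cb) { this.ops.push(...ops); cb(null) }
}
putWithMarker(mock, 'name', 'levelup', function (err) {
  if (err) throw err
  console.log(mock.ops.length) // 2
})
```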

API assessment

I wouldn't mind thoughts from other people on the API and if it can be improved. The goal of LevelUP is not to expose the exact LevelDB API but to expose something that's Node.js friendly, whether that's the LevelDB API or something else.

All operations must be async, I don't want any sync I/O at all, or even options for it. But this creates a bit of possible awkwardness around create/open:

var db = require('levelup').createDatabase('./mydb', { /* opts */ })
db.open(function (err) {
  // start doing stuff on `db`
})

The alternative might be something like this:

require('levelup').open('./mydb', { /* opts */ }, function (db, err) {
  // start doing stuff on `db`
})

But I don't particularly like not being able to have a reference to the database until it's opened, that feels a little too leaky.

Thoughts from outside my head would be appreciated!

/cc @maxogden

readStream: certain values of start/end cause an error

Specifically 0 causes this error. Not sure what other values do.

var lu = require('levelup')
lu('test', {createIfMissing:true}, function(err, db) {
    var s = db.readStream({start: 0}).on('data', console.log);
});

node crashes with this error:

terminate called throwing an exception
Abort trap: 6

Using {start: '0'} gives expected results.

ApproximateSize scoping problems

The following code should give a helpful error about missing a name like this

events.js:68
        throw arguments[1]; // Unhandled 'error' event
                       ^
OpenError: #name cannot be `null` or `undefined`

Code

var levelup = require('levelup');
levelup('./testdb', function(err, db){
        db.approximateSize(function(err, size){
                if(err)
                        console.log(err);
                console.log(size);
        });
});

However we get this instead

/Users/sandfox/code/testme/node_modules/levelup/lib/levelup.js:323
      this.emit('error', err)
           ^
TypeError: Object #<Object> has no method 'emit'
    at LevelUP.approximateSize (/Users/sandfox/code/testme/node_modules/levelup/lib/levelup.js:323:12)
    at LevelUP.approximateSize (/Users/sandfox/code/testme/node_modules/levelup/lib/levelup.js:318:12)
    at /Users/sandfox/code/testme/test-error.js:5:5
    at LevelUP.open (/Users/sandfox/code/testme/node_modules/levelup/lib/levelup.js:86:11)

I've narrowed it down to scope issue and have hacked up a fix. My javascript skills aren't that great so feel free to bat down or suggest improvements.
Also I haven't made any proper test cases apart from the above

live / tailed range queries

Imagine being able to do a range query

var stream = db.query({ start: start, end: end })

// get a never ending stream of all data in between that range

What this means is it will listen on put events and check whether its in the range and emit that on the stream.

You lose any guarantees of the result of the query being in order, but it is a live / tailable query.

remove bufferstream as a dependency

This one is just a personal annoyance, not a great priority. Bufferstream is used solely for fstream compatibility, which is not likely to be a much-used feature. Bufferstream in turn requires buffertools, which requires compiling. All it's used for is to turn an existing, immutable Buffer into a Stream, nothing fancy! There's enough native stuff to worry about without pulling in an additional compilable module, so I'd prefer it if we could remove this dependency.

0.5 release

See complete diff from 0.4.4: 0.4.4...master

There's a persistent C++ problem that keeps on showing up but I haven't been able to make it occur since my last round of tweaks but I'm guessing that's just coincidental. A bit of work has backed up because I wanted to try and track it down before a release but I've yet to find anything!

I've published this current one to npm with the dev tag so you can install it as npm install levelup@dev and also with npm install levelup@0.5.0 (I put out a dev of 0.5.0 yesterday, I should have made it 0.5.0-a or something but now an actual stable 0.5 will probably have to be 0.5.1).

Let me know if you have any reason to object to an 0.5 release, otherwise I'll publish it tomorrow.

Compression not working?

I tried using both put and the WriteStream, but looking at the data (*.sst files) with less it doesn't appear to be compressed. The file is binary because of how keys and values are separated, but that's all. I tried both bigger HTML files and custom text.

undefined symbol: _ZTV7BatchOp

npm install levelup
node
> require('levelup')

Error: /home/dominic/c/experiments/node_modules/levelup/build/Debug/levelup.node: undefined symbol: _ZTV7BatchOp
    at Object.Module._extensions..node (module.js:485:11)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.require (module.js:362:17)
    at require (module.js:378:17)
    at bindings (/home/dominic/c/experiments/node_modules/levelup/node_modules/bindings/bindings.js:74:15)
    at Object.<anonymous> (/home/dominic/c/experiments/node_modules/levelup/lib/levelup.js:3:39)
    at Module._compile (module.js:449:26)
    at Object.Module._extensions..js (module.js:467:10)
    at Module.load (module.js:356:32)

maxListeners

Your eventemitter seems to have the standard limit of 10 listeners...

default options

Current defaults are: { createIfMissing : false, errorIfExists : false } but I'm always changing this to { createIfMissing : true, errorIfExists : false }.

Any objections to changing to this for 0.5.0?

reduce api surface

Candidates to go

  • encoding: "json"
  • db.keyStream()
  • db.valueStream()
  • db.writeStream()

Hopefully writeStream() can just be a module.

Other possibilities:

  • db.readStream() to be replaced with a very lightweight cursor implementation
  • all encoding logic
  • db.isOpen() and db.isClosed(); if you care, read db.state
  • operation specific encoding. What is the use-case for this?

shorthand for batch ops

It'd be nice if the api supported shorthand. 🙏

{ 'put': 'key', value: output }

I could do a pull request for this if you like the idea.
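One possible shape for this, purely as a sketch (normalize is a hypothetical helper, not part of levelup), converting the proposed shorthand into the { type, key, value } form that batch() already accepts:

```javascript
// Normalize a shorthand batch op into the existing long form.
function normalize (op) {
  if (op.type) return op // already in long form
  if ('put' in op) return { type: 'put', key: op.put, value: op.value }
  if ('del' in op) return { type: 'del', key: op.del }
  throw new Error('unrecognised batch operation')
}

console.log(normalize({ put: 'key', value: 'output' }))
// { type: 'put', key: 'key', value: 'output' }
```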

writeStream() should accept type

stream.write({ type: "del", key: "foo" })

should work.

Also maybe stream.write([{ type: "del", key: "foo" }, { type: "put", key: "foo2", value: "hello!" }])

readStream options are sticky

When I get a readStream({reverse: true}), followed by a plain readStream(), I get the documents out in reverse order again, unless I specify reverse: false. start and end behave similarly, the last value for each option acting as the default value for the next call.

Example:

> db.readStream().on('data', console.log)
{ key: '0', value: "blah" }
{ key: '1', value: "foo" }
{ key: '2', value: "bar" }
{ key: '3', value: "..." }

# ok, reverse the stream
> db.readStream({reverse: true}).on('data', console.log)
{ key: '3', value: "..." }
{ key: '2', value: "bar" }
{ key: '1', value: "foo" }
{ key: '0', value: "blah" }

# stream is still reversed
> db.readStream().on('data', console.log)
{ key: '3', value: "..." }
{ key: '2', value: "bar" }
{ key: '1', value: "foo" }
{ key: '0', value: "blah" }

# back to normal with reverse: false
> db.readStream({reverse: false}).on('data', console.log)
{ key: '0', value: "blah" }
{ key: '1', value: "foo" }
{ key: '2', value: "bar" }
{ key: '3', value: "..." }

Support cache option

options.cache = leveldb::NewLRUCache(100 * 1048576);  // 100MB cache

Will turn on an uncompressed cache of frequently used blocks.

options.fill_cache = false;

During a read operation this will prevent the read items from displacing items in the cache. It's probably worth turning this on by default for readStream()s without start & end options, but also offering it as an option on any read operation, stream and get().

support arbitrary encoding

Let's say I want encoding like

var db = levelup("uri", {
    toEncoding: function (raw) {
        return new Type(JSON.parse(raw))
    }
    , toBuffer: function (type) {
        delete type._id
        return JSON.stringify(type)
    }
})

function Type(data) { ... }

I basically want to wrap all objects coming out of the db in a certain encoding and clean up all objects I put into the db with a certain cleanup logic.

put, get, del should only callback error or nothing.

Currently, put and del call back with the keys they were called with. This is problematic for hooking into them, because a put could become a batch (and batch only calls back cb(err) or cb()).

I can't see much use in calling back the args, especially since the callback is likely to already have access to them via closure scope.

I suggest using the callback signature of batch for put and del.

The gain is to make hooks simpler.

your thoughts everyone?

opts.silent

levelup prints errors on the console, like:

NotFoundError: Key not found in database [foo]
    at /Users/julian/git/multilevel/node_modules/levelup/lib/levelup.js:160:15

We should either remove that or provide opts.silent to turn it off.

A quick look around the source didn't tell me where the error is outputted.

More events

Emit a before put and after put event (same for del and batch)

Use case is to have a UI side effect when someone puts data into the database, without waiting for file IO. It reduces observed latency in your application and makes it snappy.

levelup fails to compile on OSX 10.8

Trying to install levelup via npm (actually was trying to install pouchdb but this dependency always fails).

NPM log https://gist.github.com/4132175

This is on - OSX 10.8.2, node v0.8.14

(as an FYI, I downloaded it via git and 'npm install' in the directory - compiles fine so looks like a broken version on NPM rather than the repo)

document plugins and abstractions on wiki

It would be nice to document a list of abstractions people have already built.

For example I'm looking for the simplest way to have multiple indexes for levelup.

But I don't think it's been written yet.

It would also be nice to have a place where we document high level patterns or have a general community wishlist for things that would be cool.

memory leak in readStream

Hi!

I've experienced increasing memory usage when using the readStream extensively, to the point where I suspect that levelup have a memory leak.

Running the code below gives me eventually a FATAL ERROR: JS Allocation failed - process out of memory.

var levelup = require('levelup');

var db = levelup('levelup-leak-db');

db.put('foo', 'bar');

function read() {
    db.readStream().once('end', read);
}
read();

setInterval(function() {
    console.log(process.memoryUsage());
}, 1000);

calling hooks twice on opening database

I want to make a small patch that is causing problems when using level-hooks as the database is opening.

The trouble is that each operation (get, put, del, batch) defers if the database is currently opening. However, if I have patched that operation, it will get called twice.

https://github.com/rvagg/node-levelup/blob/master/lib/levelup.js#L158

Instead, if the database is opening, it should not call the public method again, instead it should defer the call to the leveldown binding

https://github.com/rvagg/node-levelup/blob/master/lib/levelup.js#L174

will put in a pull request shortly!

userland vs core

Things like:

  • query
  • delete range

can all be modules that are a single function and take a levelup db instance and do some stuff.

@rvagg would you prefer a smaller core with an emphasis on userland modules that do more complicated things with levelup or would you prefer a larger core?

Unable to install on Mac OS X version 10.7.5

npm http GET https://registry.npmjs.org/levelup
npm http 200 https://registry.npmjs.org/levelup
npm http GET https://registry.npmjs.org/levelup/-/levelup-0.5.1.tgz
npm http 200 https://registry.npmjs.org/levelup/-/levelup-0.5.1.tgz
npm http GET https://registry.npmjs.org/bufferstream/0.5.1
npm http GET https://registry.npmjs.org/errno/0.0.3
npm http GET https://registry.npmjs.org/concat-stream/0.0.9
npm http GET https://registry.npmjs.org/bindings/1.0.0
npm http 200 https://registry.npmjs.org/concat-stream/0.0.9
npm http GET https://registry.npmjs.org/concat-stream/-/concat-stream-0.0.9.tgz
npm http 200 https://registry.npmjs.org/bufferstream/0.5.1
npm http GET https://registry.npmjs.org/bufferstream/-/bufferstream-0.5.1.tgz
npm http 200 https://registry.npmjs.org/bindings/1.0.0
npm http GET https://registry.npmjs.org/bindings/-/bindings-1.0.0.tgz
npm http 200 https://registry.npmjs.org/errno/0.0.3
npm http GET https://registry.npmjs.org/errno/-/errno-0.0.3.tgz
npm http 200 https://registry.npmjs.org/bufferstream/-/bufferstream-0.5.1.tgz
npm http 200 https://registry.npmjs.org/bindings/-/bindings-1.0.0.tgz
npm http 200 https://registry.npmjs.org/errno/-/errno-0.0.3.tgz
npm http 200 https://registry.npmjs.org/concat-stream/-/concat-stream-0.0.9.tgz
npm http GET https://registry.npmjs.org/buffertools/1.1.0
npm http 200 https://registry.npmjs.org/buffertools/1.1.0
npm http GET https://registry.npmjs.org/buffertools/-/buffertools-1.1.0.tgz
npm http 200 https://registry.npmjs.org/buffertools/-/buffertools-1.1.0.tgz

[email protected] install /Users/local/ZOHOCORP/koteswara-0347/dbtest/node_modules/levelup/node_modules/bufferstream/node_modules/buffertools
node-gyp rebuild

gyp http GET http://nodejs.org/dist/v0.8.16/node-v0.8.16.tar.gz
gyp WARN install got an error, rolling back install
gyp ERR! configure error
gyp ERR! stack Error: connect ETIMEDOUT
gyp ERR! stack at errnoException (net.js:770:11)
gyp ERR! stack at Object.afterConnect as oncomplete
gyp ERR! System Darwin 11.4.2
gyp ERR! command "node" "/usr/local/bin/node-gyp" "rebuild"
gyp ERR! cwd /Users/local/ZOHOCORP/koteswara-0347/dbtest/node_modules/levelup/node_modules/bufferstream/node_modules/buffertools
gyp ERR! node -v v0.8.16
gyp ERR! node-gyp -v v0.6.1
gyp ERR! not ok
npm ERR! [email protected] install: node-gyp rebuild
npm ERR! sh "-c" "node-gyp rebuild" failed with 1
npm ERR!
npm ERR! Failed at the [email protected] install script.
npm ERR! This is most likely a problem with the buffertools package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR! node-gyp rebuild
npm ERR! You can get their info via:
npm ERR! npm owner ls buffertools
npm ERR! There is likely additional logging output above.

npm ERR! System Darwin 11.4.2
npm ERR! command "node" "/usr/local/bin/npm" "install" "levelup" "--save"
npm ERR! cwd /Users/local/ZOHOCORP/koteswara-0347/dbtest
npm ERR! node -v v0.8.16
npm ERR! npm -v 1.1.69
npm ERR! code ELIFECYCLE
npm ERR!
npm ERR! Additional logging details can be found in:
npm ERR! /Users/local/ZOHOCORP/koteswara-0347/dbtest/npm-debug.log
npm ERR! not ok code 0

Support no-compression option

options.compression = leveldb::kNoCompression;

Worth also documenting that this is not recommended, as Snappy is very fast and it's not run over incompressible data anyway.

Upgrade to LevelDB 1.7.0

Before upgrading, we need to build a functional test against an actual database created with 1.5.0 (compressed and uncompressed would be nice), just to make sure that LevelDB upgrades don't break databases created with older versions.

Events for changes

Emit events for changes (put, batch and del) as a building block for replication.
Could probably do with open and close events too.

Put and batch emitter leak (probably applies to other methods as well)

put causes an emitter leak. You could say, just use batch, but it has the same problem. In many programs, put or batch will be used over time and intermittently, most likely exceeding the max listeners limit.

var levelup = require('levelup');
var level = levelup('./test', { createIfMissing: true, errorIfExists: false });

for (var i = 0; i < 11; i++) {

  var key = 'i' + Math.random()*99;

  level.put(key, key, function (err) {
    console.log(key);
  });
}
(node) warning: possible EventEmitter memory leak detected. 11 listeners added. Use emitter.setMaxListeners() to increase limit.
Trace
    at EventEmitter.addListener (events.js:178:15)
    at EventEmitter.once (events.js:199:8)
    at EventEmitter.LevelUP.put (/Users/paolo/workroot/git/wayla/Wayla-Redux/node_modules/levelup/lib/levelup.js:155:21)
    at Object.<anonymous> (/Users/paolo/workroot/git/wayla/Wayla-Redux/server/eeleak.js:9:9)
    at Module._compile (module.js:454:26)
    at Object.Module._extensions..js (module.js:472:10)
    at Module.load (module.js:356:32)
    at Function.Module._load (module.js:312:12)
    at Module.runMain (module.js:497:10)
    at process._tickCallback (node.js:325:13)
The same warning appears with batch:

for (var i = 0; i < 11; i++) {

  var key = 'i' + Math.random()*99;

  level.batch([{ type: 'put', key: key, value: key }], function (err) {
    console.log(key);
  });
}

Build fail on Ubuntu

Node version 0.8.14

No problem installing on OSX, but when I tried to put it on the server I got the following error:

gyp ERR! configure error
gyp ERR! stack Error: gyp failed with exit code: 1
gyp ERR! stack at ChildProcess.onCpExit (/usr/local/lib/node_modules/npm/node_modules/node-gyp/lib/configure.js:350:16)
gyp ERR! stack at ChildProcess.EventEmitter.emit (events.js:99:17)
gyp ERR! stack at Process._handle.onexit (child_process.js:678:10)
gyp ERR! System Linux 2.6.39.1-x86_64-linode19
gyp ERR! command "node" "/usr/local/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /root/wotf/node_modules/levelup
gyp ERR! node -v v0.8.14
gyp ERR! node-gyp -v v0.7.1
gyp ERR! not ok
npm ERR! [email protected] install: node-gyp rebuild
npm ERR! sh "-c" "node-gyp rebuild" failed with 1
npm ERR!
npm ERR! Failed at the [email protected] install script.

Support checksum options

  • ReadOptions::verify_checksums for read operations, which by default don't verify checksums.
  • Options::paranoid_checks when opening a db, which enables aggressive checksum verification.

subDbs & plugins

Thinking about ways to create subdbs and cleanly partition dbs.

So, the idea is that you could create subdbs with namespaces like this:

var hiDb = db.createSub('hello')
// and then
hiDb.get('hello', function () { ... })
// etc

To be really useful, I want this to work with hooks.
Here is what I think it will look like:

Say I want to increment a sequence number for each key that is changed.

seqDb = db.createSub('sequence')
hiDb.pre(function (ch, add) {
  add({ key: timestamp(), value: ch.key, type: 'put' }, seqDb)
})

Here, passing seqDb to the add call tells add to write the entry with the prefix for that db.

note: this is a breaking change for the hooks api.
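A minimal sketch of what a prefix-based createSub could look like. The `memdb` stub is a hypothetical stand-in for a levelup instance, and a real implementation would also need to scope ranged reads and hooks to the prefix.

```javascript
// Hypothetical in-memory stand-in for a levelup instance.
function memdb () {
  var store = {}
  return {
    put: function (key, value, cb) { store[key] = value; process.nextTick(cb) },
    get: function (key, cb) {
      process.nextTick(function () { cb(null, store[key]) })
    }
  }
}

// A sub-db is just the parent with every key prefixed by its namespace.
// '\xff' separates the namespace from the key so ranges stay contiguous.
function createSub (db, name) {
  var prefix = name + '\xff'
  return {
    put: function (key, value, cb) { db.put(prefix + key, value, cb) },
    get: function (key, cb) { db.get(prefix + key, cb) }
  }
}

var db = memdb()
var hiDb = createSub(db, 'hello')

hiDb.put('hello', 'world', function () {
  hiDb.get('hello', function (err, value) {
    console.log(value) // prints "world"
  })
})
```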

Support filter policies

options.filter_policy = NewBloomFilterPolicy(10);

Should be helpful for performance but it'd be nice to have some benchmarks in place so it can be demonstrated. From LevelDB:

We recommend that applications whose working set does not fit in memory and that do a lot of random reads set a filter policy.

leveldb editor

It would be cool to have a browser editor for leveldb.
It could also run on an IndexedDB shim, so you could make a hello-world site
where you try all the stuff without installing anything!

It could also use events and live streams to keep stuff updated in realtime.

discussion, please!

delete a range

Having a function for

db.del({
    start: "a"
    , end: "b"
})

Would be useful.

The use case here is mainly to have a mechanism to purge old data:

db.del({
    start: "name:"
    , end: "name:someTs"
})
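Until something like this lands in core, it works as a small userland function: collect the keys in the range, then delete them in a single batch. The sketch below runs against a hypothetical in-memory stand-in (the `keys` helper is invented); a real version would gather keys from a key stream over the same range instead.

```javascript
// Hypothetical in-memory stand-in for a levelup instance with sorted keys.
function memdb () {
  var store = {}
  return {
    put: function (key, value, cb) { store[key] = value; process.nextTick(cb) },
    batch: function (ops, cb) {
      ops.forEach(function (op) {
        if (op.type === 'del') delete store[op.key]
        else store[op.key] = op.value
      })
      process.nextTick(cb)
    },
    keys: function () { return Object.keys(store).sort() }
  }
}

// Delete every key in [start, end] with a single batch.
function delRange (db, opts, cb) {
  var ops = db.keys()
    .filter(function (k) { return k >= opts.start && k <= opts.end })
    .map(function (k) { return { type: 'del', key: k } })
  db.batch(ops, cb)
}

var db = memdb()
db.put('name:1', 'a', function () {})
db.put('name:2', 'b', function () {})
db.put('other:1', 'c', function () {})

process.nextTick(function () {
  // '\xff' as the end sentinel covers every key under the "name:" prefix.
  delRange(db, { start: 'name:', end: 'name:\xff' }, function () {
    console.log(db.keys()) // only "other:1" remains
  })
})
```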
