persvr / perstore
CommonJS persistence/object storage based on W3C's object store API.
Home Page: http://www.persvr.org/
When trying to install perstore (v0.3.0 and v0.3.1) there is a failure caused by the json-schema dependency. Perstore requests >=0.2.1, but only 0.2.0 exists in npm. Could you publish 0.2.1 to npm?
When storing an instance with links, the links are simply filtered out before the store and re-added afterwards. They should instead be re-queried every time.
couchdb defines the attribute uri for its http-client request, while the request itself expects the attribute url
When an invalid date passes through json-ext, it is returned as new Date(NaN), which isn't nice for clients
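A minimal sketch of the behavior, plus one possible guard (the reviveDate helper is hypothetical, not part of json-ext):

```javascript
// An invalid date string revived naively yields an Invalid Date,
// i.e. new Date(NaN), rather than an error or null.
var d = new Date("not-a-date");
console.log(isNaN(d.getTime())); // true

// A guard a reviver could apply instead (hypothetical helper):
function reviveDate(value) {
    var parsed = new Date(value);
    // return null for unparseable input so clients see an explicit signal
    return isNaN(parsed.getTime()) ? null : parsed;
}
console.log(reviveDate("not-a-date")); // null
console.log(reviveDate("2012-06-27T20:43:50Z") instanceof Date); // true
```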
line 26
if (fs.statSync(filename).isFile()){
should be
if (fs.statSync(fp).isFile()){
because filename is undefined
Persistent does not respect the query options as implied in the documentation when it comes to setting options.start and options.end. This likely affects Memory and ReadOnly as well. The following code does not respect the query options as expected: it returns all items instead of only the second item:
var store = require("perstore/stores").DefaultStore({
path: "data",
filename: "example.json"
});
var results = store.query("", {
start: 1,
end: 1
});
console.log(results.length);
The equivalent rql does appear to function:
var results = store.query("limit(1,1)", {});
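For illustration, here is a sketch of the range handling the documentation implies, assuming inclusive HTTP-Range-style end indices (applyRange is a hypothetical helper, not perstore code):

```javascript
// Map directives.start/end onto an already-fetched result array.
// Assumption: end is inclusive, mirroring HTTP Range semantics.
function applyRange(results, options) {
    options = options || {};
    if (options.start !== undefined || options.end !== undefined) {
        var start = options.start || 0;
        // +1 because Array.prototype.slice takes an exclusive end
        var end = options.end !== undefined ? options.end + 1 : results.length;
        var sliced = results.slice(start, end);
        sliced.totalCount = results.length; // preserve the pre-slice count
        return sliced;
    }
    return results;
}

var out = applyRange(["a", "b", "c"], { start: 1, end: 1 });
console.log(out.length); // 1
console.log(out[0]); // "b"
```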
Many of the other packages that make up Persevere are published to npm; perstore and pintura seem to be the only exceptions.
It appears that some modules attempt to detect AMD, but when they do, they fail to return properly when require()d as plain CommonJS modules.
Using Dojo's dojo/node module to require in perstore like the following:
define([
"dojo/node!perstore/stores",
"dojo/node!perstore/model"
], function(stores, model){
// ...
});
Will return the following output:
/node_modules/perstore/errors.js:2
var AccessError = exports.AccessError = ErrorConstructor("AccessError");
^
TypeError: undefined is not a function
at Object.<anonymous> (/Users/kitsonk/github/dote/node_modules/perstore/errors.js:2:41)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:362:17)
at require (module.js:378:17)
at Object.<anonymous> (/Users/kitsonk/github/dote/node_modules/perstore/facet.js:7:21)
at Module._compile (module.js:449:26)
at Object.Module._extensions..js (module.js:467:10)
Because there is a globally defined define(), util/extend-error.js fails to export properly for the calling code, which is not AMD aware.
While likely related to #40, this is a separate issue in that Persistent (and I assume Memory and ReadOnly) does not set results.totalCount; it always returns undefined. For example, in the following code you would expect totalCount to be set:
var store = require("perstore/stores").DefaultStore({
path: "data",
filename: "example.json"
});
var results = store.query("limit(1,0)", {});
console.log(results.totalCount);
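A sketch of the expected behavior, with totalCount attached regardless of paging (queryWithCount is illustrative, not perstore internals):

```javascript
// Apply a limit(count, start) to an in-memory result set while always
// reporting the total number of matching items.
function queryWithCount(allItems, limitCount, limitStart) {
    var page = allItems.slice(limitStart, limitStart + limitCount);
    page.totalCount = allItems.length; // set regardless of paging
    return page;
}

var results = queryWithCount(["a", "b", "c"], 1, 0);
console.log(results.totalCount); // 3
```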
This seems to throw the error:
return responseToArray(server.GET(db.name + "/_someview" + query, directives)).map(function(object) {
function(object) {
return objectToDocument(response);
}
});
Does this do the trick?:
return responseToArray(server.GET(db.name + "/_someview" + query, directives)).map(function(object) {
return objectToDocument(response);
});
In FacetedStore there is a for loop to copy properties from facetSchema to constructor. If facetSchema contains certain properties (name, caller, length, etc.) an error will be thrown in strict mode.
for(i in facetSchema){
constructor[i] = facetSchema[i];
}
Most of these properties are unlikely to be in a schema, except that I came across this because I was using name in my schemas. Should we just wrap that assignment in a try/catch to silence the errors?
for(i in facetSchema){
try {
constructor[i] = facetSchema[i];
}
catch (e) {}
}
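As an alternative to silencing everything with try/catch, the copy could skip properties that are non-writable on the target; a sketch (copySafe is hypothetical, not a proposed patch):

```javascript
// Copy enumerable properties from source onto target, skipping any
// that the target defines as non-writable (e.g. a function's "name"
// in older engines), so strict mode does not throw.
function copySafe(target, source) {
    for (var i in source) {
        var desc = Object.getOwnPropertyDescriptor(target, i);
        if (!desc || desc.writable || desc.set) {
            target[i] = source[i];
        }
    }
    return target;
}

function constructor() {}
var out = copySafe(constructor, { name: "User", foo: 1 });
console.log(out.foo); // 1
```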
Error thrown when executing a simple test script exclaims that mysql_bindings is missing:
nodules
node.js:50
throw e;
^
Error: Cannot find module 'node-mysql-libmysqlclient/mysql_bindings'
at loadModule (node.js:234:13)
at require (node.js:272:14)
at /usr/local/lib/node/.npm/nodules/0.2.1/package/lib/nodules.js:266:32
at Object.<anonymous> (file://../node-mysql-libmysqlclient/mysql-libmysqlclient.js:13:15)
at /usr/local/lib/node/.npm/nodules/0.2.1/package/lib/nodules.js:236:23
at jar:http://github.com/kriszyp/perstore/zipball/v0.2.1!/engines/node/lib/store-engine/sql.js:19:9
at Object.defaultDatabase (jar:http://github.com/kriszyp/perstore/zipball/v0.2.1!/lib/store/sql.js:262:20)
at Object.SQLStore (jar:http://github.com/kriszyp/perstore/zipball/v0.2.1!/lib/store/sql.js:30:44)
at Object.<anonymous> (file:///build/nodeTest/lib/index.js:5:43)
at /usr/local/lib/node/.npm/nodules/0.2.1/package/lib/nodules.js:236:23
Example is mostly copied and pasted from existing example.
var store = require("perstore/store/sql").SQLStore({
type: "mysql",
table: "user",
idColumn: "id"
});
// now we can setup a model that wraps the data store
var MyModel = require("perstore/model").Model("User", store, {
properties: {
// we can optionally define type constraints on properties
id: Integer,
name: String
}
});
var someObject = MyModel.new(); // create a new object
someObject.id = 1; // make a change
someObject.name = "Trakkasure"; // make a change
someObject.save(); // and save it
Apparently links are not (always) resolved for model.query.
Please, consider http://gist.github.com/383784
(10:24:30 PM) Vladimir: deanlandolt pointed out the difference between start()/end() which is used to instruct the store backend to return partial data, and slice(), which just narrows the result set already fetched from the backend
(10:26:28 PM) Vladimir: we should always fetch totalCount, regardless of directives.start/end being specified, or the whole if(totalCountPromise){... branch is missed, leading to incorrect results
(10:27:29 PM) Vladimir: pintura should wait until responseValue promise fulfilled, or we never get totalCount and never report correct Content-Range:
(10:30:06 PM) Vladimir: pintura should not guess how much data store.query() provides; it should just rely on metadata.start/end which, in turn, can be mangled by the store. That way consistency is kept: when we set Range: items=1-2 for GET /Obj/?start(3)&end(6), the store reports obj[3:6] and not obj[1:2]
(10:30:25 PM) Vladimir: All these fixed in the gist
(10:30:26 PM) Vladimir: TIA
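A sketch of deriving the Content-Range value from the store-reported metadata rather than from the request, as suggested above (the field names are illustrative):

```javascript
// Build a Content-Range value in the "items start-end/total" form,
// trusting the metadata the store reports rather than the request's
// Range header. "*" is used when totalCount is not yet known.
function contentRange(metadata, totalCount) {
    return "items " + metadata.start + "-" + metadata.end + "/" +
        (totalCount !== undefined ? totalCount : "*");
}

console.log(contentRange({ start: 3, end: 6 }, 10)); // "items 3-6/10"
```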
It appears that passing in path upon construction of a store does not properly invoke setPath(), which then causes the store/notifying wrapper to not initialise its message hub properly. For example:
var store = require("perstore/store").DefaultStore({
path: "store",
filename: "bar.json"
});
store.add({ foo: "bar" });
Will produce something like:
/node_modules/perstore/store/notifying.js:66
localHub.publish({
^
TypeError: Cannot call method 'publish' of undefined
at exports.Notifying.store.add (/node_modules/perstore/store/notifying.js:66:14)
at exports.when (/node_modules/perstore/node_modules/promised-io/promise.js:360:29)
at Object.exports.Notifying.store.add (/node_modules/perstore/store/notifying.js:64:11)
at Object.defineProperties.save.value (/node_modules/perstore/facet.js:432:27)
at Object.constructor.add (/node_modules/perstore/facet.js:203:39)
Whereas the following avoids the issue:
var store = require("perstore/store").DefaultStore({
path: "store",
filename: "bar.json"
});
store.setPath("store");
store.add({ foo: "bar" });
The Indexed Database API, or Indexed DB (formerly WebSimpleDB).
You might want to update your docs to reflect this now.
Looks like all the browsers are going to implement the Indexed Database API, even Microsoft.
When using ringojs and deploying the war to a servlet container like Tomcat, the following error appears:
JavaException: java.io.FileNotFoundException: /Applications/springsource/sts-3.1.0.RELEASE/STS.app/Contents/MacOS/local.json (No such file or directory) trying to load local.json, make sure local.json is in your current working directory (perstore/util/settings.js#19)
at perstore/util/settings.js:19
at perstore/store/filesystem.js:13 (anonymous)
at pintura/media/html.js:5
It's because perstore/util/settings.js assumes that local.json is in the current working directory. However, when using an application server, the working directory is the app server's directory, not the war's directory.
It takes a lot of mental energy to follow some of this code (particularly facet.js). A few more comments might help reduce the effort it takes to follow the logic. I've also been finding (and trying to remember to report) little logic errors or dead pieces of code while trying to understand it, so taking the time to add some comments may uncover more of this dead or wrong code.
They changed "file" module to "fs", hence update needed.
Please, take a look at http://pastebin.com/ZY9yBL3F
In tracing a bug, I discovered that JSONExt.stringify() in memory.js was improperly serializing my data objects. An Object like (from console.log):
{ state: 'processing',
keywords: [],
people: [ { type: 'person', val: '2016252856702' } ],
exported: false,
held: false,
startDate: Wed, 27 Jun 2012 20:43:50 GMT,
endDate: Wed, 27 Jun 2012 20:43:50 GMT,
documentCount: 0,
processingStatus: 0,
createdBy: [ 'system' ],
createdOn: Wed, 27 Jun 2012 20:43:59 GMT,
id: 6726200953126 }
was serialized (with JSONExt.stringify()) as:
{"state":"processing","keywords":[],"people":[undefined],.......
I modified the code to use JSON.stringify (node's), and it gets:
{"state":"processing","keywords":[],"people":[{"type":"person","val":"2016252856702"}],.....
Is there any explanation of this behavior other than a bug in the serializer, and is there any reason not to defer to the environment (e.g. jsonext handles circular references, though that is not an issue for me)?
It references tunguska instead of perstore
In facet.js' construction.post method, if (!directives.id), the call to this.add doesn't pass the directives object (even if there are other directives). Unless there is some reason to specifically exclude this, it should be 'return this.add(props, directives)' instead of 'return this.add(props)'.
I see there is new code for CouchDB. Yeah!
Since Couch does replication, your middle tier replication will do something similar.
It would be cool if behaviours could be fed into your replication layer. This would allow db sharding and other patterns, for example.
Which of course brings up map/reduce and running many queries against the many db instances and then reducing them back down.
Are you planning to do this in your code? I think it would be a good idea.
I believe there are already a few JS libraries floating around that do this.
From what I understand, the return value for add should be the id. However, the implementation for construct.add returns the result of save, which returns the object. The notifying store expects an id to be returned from add, and so the channel value is '[object Object]' when it tries to assign the return value to the channel.
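A sketch of the mismatch described above (the function names are illustrative, not perstore's actual code):

```javascript
// construct.add returns the result of save, which is the object…
function save(object) {
    return object;
}

// …but notification subscribers want an id, not "[object Object]".
// One fix: extract the id before handing the value to the channel.
function add(object) {
    var result = save(object);
    return result.id !== undefined ? result.id : result;
}

console.log(add({ id: 42, name: "x" })); // 42
```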
Hello all.
Is there a plan to add support for postgres in node/store-engine/sql.js?
I know there is support for Rhino, but I'm trying to avoid Java.
The facet/model used for the permissive tests is created via the following code, which uses an existing (Permissive) model as the appliesTo:
var permissiveFacet = Permissive(model, {
extraStaticMethod: function(){
return 4;
}
});
If permissiveFacet.get(1) is called, the object returned does not include the properties from the prototype of model.
I'm not sure what the correct behavior should be. If the schema for permissiveFacet has no prototype, is it supposed to use the prototype from the model facet? Would the behavior be different for the Restrictive facet?
I'm thinking that for Permissive, if no prototype was provided by the new schema, then use the existing prototype; for Restrictive, if no prototype was provided by the new schema, use an empty prototype. Is that right?
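The proposed defaulting rule can be sketched as follows (this encodes the suggestion above, not current perstore behavior):

```javascript
// Pick the prototype for a facet: an explicit facet-schema prototype
// always wins; otherwise Permissive inherits the model's prototype and
// Restrictive defaults to an empty one.
function facetPrototype(kind, facetSchema, modelSchema) {
    if (facetSchema.prototype) {
        return facetSchema.prototype;
    }
    return kind === "permissive" ? modelSchema.prototype : {};
}

var model = { prototype: { greet: function () { return "hi"; } } };
console.log(typeof facetPrototype("permissive", {}, model).greet); // "function"
console.log(typeof facetPrototype("restrictive", {}, model).greet); // "undefined"
```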
It looks like the patching of json-schema's coerce is a little overzealous: it will coerce required values that don't actually have keys, which causes inserts that should fail to succeed with, for instance, an empty string.
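A sketch of the distinction: coercion should only apply when the key is actually present, so a missing required property still fails validation (coerceValue is illustrative, not json-schema's actual code):

```javascript
// Coerce a property's value to the schema type only when the key
// exists; an absent key is left absent so a required-property check
// can still reject the instance.
function coerceValue(instance, key, type) {
    if (!(key in instance)) {
        return; // do not invent "" for a missing required value
    }
    if (type === "string") {
        instance[key] = String(instance[key]);
    }
}

var obj = {};
coerceValue(obj, "title", "string");
console.log("title" in obj); // false — the insert should fail validation
```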
I'm getting really bad performance with MongoDB. If I use standard file based persistence, I get about ~700 writes per second. When I put MongoDB in, I get ~150 writes per second. With Redis it's about ~650 writes per second. These are saves done via HTTP connection so that will slow things down, but I did expect that MongoDB would perform as well as or even better than static files.
Any ideas why this is happening? Is it driver issue or something else? I'm using MongoDB 1.6.5 64bit on OSX with standard configuration.