uhop / stream-json


The micro-library of Node.js stream components for creating custom JSON processing pipelines with a minimal memory footprint. It can parse JSON files far exceeding available memory, streaming individual primitives using a SAX-inspired API.

License: Other

JavaScript 100.00%
streaming-json javascript-objects parse-json-files parser stream-components stream-processing

stream-json's Introduction

stream-json

stream-json is a micro-library of node.js stream components with minimal dependencies for creating custom data processors aimed at huge JSON files while requiring a minimal memory footprint. It can parse JSON files far exceeding available memory. Even individual primitive data items (keys, strings, and numbers) can be streamed piece-wise. A streaming SAX-inspired event-based API is included as well.

Available components:

  • Streaming JSON Parser.
    • It produces a SAX-like token stream.
    • Optionally it can pack keys, strings, and numbers (controlled separately).
    • The main module provides helpers to create a parser.
  • Filters to edit a token stream:
    • Pick selects desired objects.
      • It can produce multiple top-level objects, just like in the JSON Streaming protocol.
      • Don't forget to use StreamValues when picking several subobjects!
    • Replace substitutes objects with a replacement.
    • Ignore removes objects.
    • Filter filters tokens, maintaining the stream's validity.
  • Streamers to produce a stream of JavaScript objects.
    • StreamValues can handle a stream of JSON objects.
      • Useful to stream objects selected by Pick, or generated by other means.
      • It supports the JSON Streaming protocol, where individual values are separated semantically (as in "{}[]") or with white space (as in "true 1 null").
    • StreamArray takes an array of objects and produces a stream of its components.
      • It streams array components individually, taking care of assembling them automatically.
      • Created initially to deal with JSON files similar to Django-produced database dumps.
      • Only one top-level array per stream is valid!
    • StreamObject takes an object and produces a stream of its top-level properties.
      • Only one top-level object per stream is valid!
  • Essentials:
    • Assembler interprets a token stream creating JavaScript objects.
    • Disassembler produces a token stream from JavaScript objects.
    • Stringer converts a token stream back into a JSON text stream.
    • Emitter reads a token stream and emits each token as an event.
      • It can greatly simplify data processing.
  • Utilities:
    • emit() makes any stream component emit tokens as events.
    • withParser() helps to create stream components with a parser.
    • Batch batches items into arrays to simplify their processing.
    • Verifier reads a stream and verifies that it is valid JSON.
    • Utf8Stream sanitizes multibyte utf8 text input.
  • Special helpers:
    • JSONL AKA JSON Lines AKA NDJSON:
      • jsonl/Parser parses a JSONL file producing objects similar to StreamValues.
        • Useful when we know that individual items can fit in memory.
        • Generally it is faster than the equivalent combination of Parser({jsonStreaming: true}) + StreamValues.
      • jsonl/Stringer produces a JSONL file from a stream of JavaScript objects.
        • Generally it is faster than the equivalent combination of Disassembler + Stringer.

All components are meant to be building blocks to create flexible custom data processing pipelines. They can be extended and/or combined with custom code. They can be used together with stream-chain to simplify data processing.

This toolkit is distributed under the New BSD license.

Introduction

const {chain}  = require('stream-chain');

const {parser} = require('stream-json');
const {pick}   = require('stream-json/filters/Pick');
const {ignore} = require('stream-json/filters/Ignore');
const {streamValues} = require('stream-json/streamers/StreamValues');

const fs   = require('fs');
const zlib = require('zlib');

const pipeline = chain([
  fs.createReadStream('sample.json.gz'),
  zlib.createGunzip(),
  parser(),
  pick({filter: 'data'}),
  ignore({filter: /\b_meta\b/i}),
  streamValues(),
  data => {
    const value = data.value;
    // keep data only for the accounting department
    return value && value.department === 'accounting' ? data : null;
  }
]);

let counter = 0;
pipeline.on('data', () => ++counter);
pipeline.on('end', () =>
  console.log(`The accounting department has ${counter} employees.`));

See the full documentation in the Wiki.

Companion projects:

  • stream-csv-as-json streams huge CSV files in a format compatible with stream-json: rows as arrays of string values. If a header row is used, it can stream rows as objects with named fields.

Installation

npm install --save stream-json
# or: yarn add stream-json

Use

The whole library is organized as a set of small components, which can be combined to produce the most effective pipeline. All components are based on node.js streams and events. They implement all required standard APIs. It is easy to add your own components to solve your unique tasks.

The code of all components is compact and simple. Please take a look at their source code to see how things are implemented, so you can produce your own components in no time.

If a bug is found, a way to simplify existing components is spotted, or new generic components are created that can be reused in a variety of projects, don't hesitate to open a ticket and/or create a pull request.

Release History

  • 1.8.0 added an option to indicate/ignore JSONL errors. Thx, AK.
  • 1.7.5 fixed a stringer bug with ASCII control symbols. Thx, Kraicheck.
  • 1.7.4 updated dependency (stream-chain), bugfix: inconsistent object/array braces. Thx Xiao Li.
  • 1.7.3 added an assembler option to treat numbers as strings.
  • 1.7.2 added an error check for JSONL parsing. Thx Marc-Andre Boily.
  • 1.7.1 minor bugfix and improved error reporting.
  • 1.7.0 added utils/Utf8Stream to sanitize utf8 input, all parsers support it automatically. Thx john30 for the suggestion.
  • 1.6.1 a technical release, no need to upgrade.
  • 1.6.0 added jsonl/Parser and jsonl/Stringer.

The rest can be found in the project's wiki under Release History.

stream-json's People

Contributors

bitpup, d1no, delta62, dependabot[bot], greenkeeper[bot], gregolsky, maboily, pbardov, robcumulio, uhop


stream-json's Issues

(Question) Reassembling parts of the output stream

So is there a utility or writer (that I am not finding) that would reassemble the parts of the JSON input stream that were parsed and evented, and write the JSON back to another output string or stream?

Kind of the opposite of what is done in Parser (but easier since you had to do all the heavy lifting there) as there doesn't need to be any object creation at all for a pure Filter. One could just output the appropriate JSON symbols to represent the name/value pairs, nested objects and arrays and not waste time and space by building objects in memory - correct?

Again my questions are related to filtering out specific elements by key and not processing the JSON input any more than that.
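A minimal sketch of that idea, assuming the current factory API described above (file names are placeholders): the Stringer component turns the token stream back into JSON text without assembling any objects, and any token-level filter can sit between the parser and it.

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {stringer} = require('stream-json/Stringer');

const pipeline = chain([
  fs.createReadStream('input.json'),  // placeholder input
  parser(),                           // JSON text -> token stream
  // ... Pick/Ignore/Filter steps could go here ...
  stringer()                          // token stream -> JSON text
]);

pipeline.pipe(fs.createWriteStream('output.json')); // placeholder output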

Fails to parse JSON when first values are invalid

I'm trying to use this with a TCP server to parse out JSON objects as they come in the input stream. For testing I'm connecting to the server via telnet which has some header information sent across initially. This information is obviously not valid JSON. What seems to occur is that this header information gets stuck in the _buffer thus causing the first character to never register as a valid initial character.

This can be seen on line 85 of Combo.js:

match = value1.exec(this._buffer);
if (!match) {
    if (this._buffer) {
        if (this._done) {
            return callback(new Error("Parser cannot parse input: expected a value"));
        }
    }
    if (this._done) {
        return callback(new Error("Parser has expected a value"));
    }
    // wait for more input
    break main;
}

It seems to just keep waiting for more input which only increases the buffer but never gets past the initial 'value' state.

Stream cannot be closed programmatically

While streaming a JSON file, I would like to have the ability to close the stream before it reaches the end of the file. I've tried this code:

let pickReadStream = fs.createReadStream('dataset.json');
let source = createSource();
pickReadStream.pipe(source.input);

source.output.on('data', (chunk) => {
    // do some processing
    checkIfNeedToBeClosed();
});

function checkIfNeedToBeClosed() {
    // here I've tried all these lines
    pickReadStream.close();
    pickReadStream.destroy();
    source.output.unpipe(source.input);
    pickReadStream.removeAllEventListeners();
    source.input.destroy(); // this stops stream, but throws Error: stream.push() after EOF
}

Nothing helps; every time the file is read to the end. Any ideas?

Disassembler - add support for `.toJSON()`

An option to call Date.prototype.toJSON (and friends) would be appreciated.

const Disassembler = require('stream-json/Disassembler');
const {Readable} = require('stream');

const src = new Readable({objectMode: true, highWaterMark: 16});

src.pipe(new Disassembler())
   .on('data', data => console.dir(data));

src.push(new Date());
src.push(null);

Will output {name: 'startObject'} followed by {name: 'endObject'}.

An example use case might be stringify'ing a stream of fs.Stats objects. Such might be the case in a gulp pipeline.
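A possible workaround, not a library feature: normalize each value with a plain mapping step before Disassembler, so toJSON() is applied the same way JSON.stringify would apply it. A sketch assuming stream-chain is used:

const {Readable} = require('stream');
const {chain} = require('stream-chain');

const {disassembler} = require('stream-json/Disassembler');
const {stringer} = require('stream-json/Stringer');

const src = new Readable({objectMode: true, read() {}});

const pipeline = chain([
  src,
  value => JSON.parse(JSON.stringify(value)), // honors toJSON() (e.g. Date.prototype.toJSON) like JSON.stringify does
  disassembler(),
  stringer()
]);

pipeline.pipe(process.stdout); // prints a quoted ISO date string instead of {}

src.push(new Date());
src.push(null);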

Events still emitted from Filter despite returning false from filter function

I'm trying to use a custom filter to only output a given array from an input and send it into StreamArray, but having great difficulty doing so. To get some kind of control test I used the following filter function, which will return true for events inside and including the first array of my test data.

var filter = new Filter({ filter: function (path, event) {
  if (event.name === "startArray") arrayDepth++;
  else if (event.name === "endArray") arrayDepth--;
  console.log(path, event, arrayDepth > 0);
  return arrayDepth > 0;
}});

var next = fs.createReadStream('input.json')
  .pipe(parser).pipe(streamer).pipe(filter).pipe(packer).pipe(streamArray);

Where test data is {"test":[]}

I expect the first event emitted from the filter to be a startArray, but this is not the case.

[] { name: 'startObject' } false
[ 'test' ] { name: 'startArray' } true
events.js:141
      throw er; // Unhandled 'error' event
      ^
Error: Top-level object should be an array.

Parser just gives up on invalid JSON

I'm trying to use this module in a real world scenario.
It looks like if invalid JSON is parsed, the Parser just gives up and throws an Error.
Wouldn't it be more appropriate to emit an 'error' event and only throw the Error if there is no error handler attached?
Looks like it's fairly simple to add this feature. Shall I do it and make a pull request?

Unhandled error event during parsing

Version: 0.5.2

When trying to parse an invalid JSON document the following error occurs:

events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: Parser cannot parse input: expected an object key
    at Parser._processInput (d:\someproject\node_modules\stream-json\Combo.js:232:23)
    at Parser.flush [as _flush] (d:\someproject\node_modules\stream-json\Combo.js:37:7)
    at Parser.<anonymous> (_stream_transform.js:118:12)
    at Parser.g (events.js:292:16)
    at emitNone (events.js:86:13)
    at Parser.emit (events.js:185:7)
    at prefinish (_stream_writable.js:502:12)

The problem is that I can not catch this error with my code. I have tried the following:

[...]
jsonStreamer.on('end', () => {
    Logger.debug(`Streaming finished`);
    resolve(data);
});

jsonStreamer.on('error', (error) => {
    console.log(error); // <= this is not called
});

try {
    fs.createReadStream(filePath).pipe(jsonStreamer.input);
}
catch (error) {
    console.log(error); // <= this is not called either
}

The only way I was able to catch this error was with a global error listener:

process.on('uncaughtException', function (err) {
    reject(err);
});

The invalid JSON file I am trying to parse with stream-json just contains a single curly brace:

{
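As a general Node pattern (not specific to stream-json), errors do not propagate across .pipe(): either attach an 'error' handler to every stream in the chain, or use stream.pipeline, which funnels errors from any stage into a single callback. A sketch with the current factory API:

const fs = require('fs');
const {pipeline} = require('stream');
const {parser} = require('stream-json');

const tokens = pipeline(
  fs.createReadStream('invalid.json'), // placeholder path, e.g. a file containing just "{"
  parser(),
  err => {
    if (err) console.error('Parsing failed:', err.message); // caught here, no uncaughtException handler needed
  }
);

tokens.resume(); // drain the token stream so parsing actually proceeds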

newlines, tabs and other control characters dropped from strings

Hi, I'm running into an issue where newlines, tabs, etc seem to be stripped out of the strings in the JSON... or maybe I'm doing something wrong? (I don't think so...)

Here's a small test program to demonstrate what I'm seeing.

var Parser = require("stream-json/Parser");
var Streamer = require("stream-json/Streamer");
var Packer = require("stream-json/Packer");
var Emitter = require("stream-json/Emitter");

var Readable = require('stream').Readable;
var util = require('util');

util.inherits(ReadableStringStream, Readable);

function ReadableStringStream(str)
{
    this.hasData = true;
    this.str = str;
    Readable.call(this, {});
}
ReadableStringStream.prototype._read = function() {
    if (this.hasData)
    {
        this.push(new Buffer(this.str, 'ascii'));
        this.hasData = false;
    }
    else
    {
        this.push(null);
    }
};

var parser = new Parser();
var streamer = new Streamer();
var packer = new Packer({
    packKeys: true,
    packNumbers: true,
    packStrings: true
});
var emitter = new Emitter();

var stack = []; // stack to keep track of ancestors of the current tip
var basin = {}  // an outer object to hold whatever it is we are building
var tip = basin; // the current object or array that we are working on
var curKey = 'root'; // the last key value processed

function store(x, psh)
{
    if (Array.isArray(tip))
    {
        tip.push(x);
    }
    else
    {
        tip[curKey] = x;
    }
    if (psh)
    {
        stack.push(tip);
        tip=x;
    }
    return x;
}

emitter.on("startObject", function() {
    store({}, true);
});
emitter.on("endObject", function() {
    tip = stack.pop();
});
emitter.on("startArray", function() {
    store([], true);
});
emitter.on("endArray", function() {
    tip = stack.pop();
});
emitter.on("keyValue", function(key) {
    curKey = key;
});
emitter.on("numberValue", function(str) {
    store(+str);
});
emitter.on("nullValue", function() {
    store(null);
});
emitter.on("trueValue", function() {
    store(true);
});
emitter.on("falseValue", function() {
    store(false);
});
emitter.on("stringValue", function(str) {
    store(str);
})

var originalJson = JSON.stringify({
    "stringWithTabsAndNewlines": "Did it work?\nNo...\t\tI don't think so...",
    "anArray": [1,2,true,"tabs?\t\t\t",false]
});

console.log("original JSON:");
console.log(originalJson);
console.log();

emitter.on('finish', function() {
    console.log('filtered JSON:');
    console.log(JSON.stringify(basin.root));
});

new ReadableStringStream(originalJson).
    pipe(parser).
    pipe(streamer).
    pipe(packer).
    pipe(emitter);

Here's the output:

original JSON:
{"stringWithTabsAndNewlines":"Did it work?\nNo...\t\tI don't think so...","anArray":[1,2,true,"tabs?\t\t\t",false]}

filtered JSON:
{"stringWithTabsAndNewlines":"Did it work?No...I don't think so...","anArray":[1,2,true,"tabs?",false]}

Any ideas? Thanks.

How to parse a stream of undelimited objects?

Thanks for developing this library! The design of the many standalone modular pieces is great.

I have a JSON stream which looks something like this:

["name1","stringvalue"]["name2",{"key":"value"}]["name3":"stringvalue"]

What I want to do is to set up a JSON streamer that emits each individual JSON entity whenever it reaches the end of the entity, and then "resets" and starts reading the next entity. For instance, in the above example, I would like to do something such as,

emitter.on("wholeObject", (value) => {
    console.log(value[0], " => ", value[1]);
});

and then see in the console,

name1 => stringvalue
name2 => {key: "value"}
name3 => stringvalue

In my code, I then pass the value to a specialized handler function that depends on the "name" field.

When I pass a string such as this to one of the endpoints, I get a parse error. Any tips?
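A sketch of one way to do this, assuming each concatenated entity is itself valid JSON: the parser's jsonStreaming option keeps it parsing after each top-level value, and StreamValues assembles every value as it completes.

const {Readable} = require('stream');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {streamValues} = require('stream-json/streamers/StreamValues');

const pipeline = chain([
  parser({jsonStreaming: true}), // keep going after each top-level value ends
  streamValues()                 // emits {key: index, value: <assembled value>}
]);

pipeline.on('data', ({value}) => console.log(value[0], '=>', value[1]));

// stand-in for the real source stream (Readable.from needs a reasonably recent Node)
Readable.from(['["name1","stringvalue"]["name2",{"key":"value"}]']).pipe(pipeline);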

[Question] How to fully read certain properties of a JSON object?

Hi. Thank you for this library.

After poking around the docs, I can't seem to figure out how to accomplish fully reading

  • id
  • name
  • extensions

from this schema:

{
    "id": 2,
    "name": "Some Name",
    "extensions": ["Ext1", "Ext2"],
    ...(other_properties_with_lots_of_data)
}

with the current code I'm using:

new Promise<T>((r, rj) => {
    const result = {} as any;
    chain([
        fs.createReadStream(filePath, { encoding: "utf8" }),
        parser(),
        //@ts-ignore
        filter({ filter: keyFilter }),
        //@ts-ignore
        streamObject(),
    ]).on("data", (data: any) => {
        result[data.key] = data.value;
    }).on("end", () => {
        r(result as T);
    });
});

For extensions I get an empty array (by design).

I could create a separate pipeline just to stream that one array property, but that feels wrong.
It looks like I may need to create a pipeline that filters on those properties, then use an emitter and manually process the values (startArray, etc)?
Is that the correct approach?

Add type annotations to data-event

Hi,

From what I see, the data event on the stream accepts "any" as an argument. However in case of StreamValues the value is known, so I believe that having an annotation to point to {key: number, value: any} would be a good start. What do you think? Not sure how easy to plug that in though given that StreamValues.withParser() returns a Chain.

Better error reporting.

Transferred from #9: we have to report errors better for dirty data.

We should provide column/row and byte offsets for a found error, explaining it if we can, so users can easily find and verify problems with their data. Only two parsers (the classic one and the alternative one) track such data, which is fine, but they should report it in a more user-friendly way.

How to pause stream properly when streaming array pieces?

Hi,
I'm trying to stream data from an array json file. The file is constructed pretty much this way:
[ [timestamp, value], [timestamp, value], ... ]
I'm using StreamArray.withParser() to parse the readstream on the json object.
I'm trying to implement an api where a call to /api/data (using express and express router) would send me e.g. 50 elements from the top level array. This could be done so that the front-end would specify which 50 elements to send or alternatively the backend just always send the next 50 elements. How can I achieve this? I have tried to use pipeline.pause() inside data event but I'm unable to pipeline.resume() properly on a subsequent api call. Should I be able to specify the start and end values of the elements? I.e. StreamArray.withParser({ start: 150, end: 200 }).

Noobish Question.... Wondering if you can help

Is it possible for source.on("endObject") to show the object it just read?

Example you have some complex json in a file like so:

[{object1},{object2},{object3}]

If I am reading this right, endObject would emit after the stream has piped one object through right?

but if I do something like

source.on("endObject", function(data){ 
    console.log(data);
});

data is undefined. Anyway to have the object be accessible during the stream?

Consider using post-0.x node.js stream API.

New API is more flexible, it legalizes a lot of former hacks, and introduces simple ways to create streamable components. Assuming people are in a post-0.x world, the code can be somewhat simplified.

Single quote char for field names gives Error: Parser cannot parse input: expected an object key

Hi,
Great lib.
Trying to process a file with an array of
[
{
'type': 'Feature',
'properties': {
'POSTCODE': '91210',
'NAME': 'Glendale',
'STATE': 'CA',
'ST_FIPS': '06'
},
'geometry': {
'type': 'Polygon',
'coordinates': [
[
[-118.259716, 34.144855],
[-118.259711, 34.144868],
[-118.259694, 34.144867],
[-118.2597, 34.144849],
[-118.259716, 34.144855]
]
]
}
},.......

but get
Error: Parser cannot parse input: expected an object key

when I replace the single quote char with double quotes all is well.
We have no control over the file format as it comes from a 3rd party.
Is there a config setting or the like I need to set to be able to process the file as is?
Thanks for your time.
John

Fails to verify a valid JSON

This fails:

import * as Verifier from "stream-json/utils/Verifier"
import * as fs from "fs"

const algo = JSON.parse(fs.readFileSync("./algo.json", "utf8"))
console.log("Parsed algo.json. Sample:", algo.kind)

fs.createReadStream("./algo.json", { encoding: "utf8" } )
  .pipe(new Verifier())

JSON file: algo.zip

Output:

Parsed algo.json. Sample: data.bulkData

/home/guille/repos/livestories/node_modules/stream-json/utils/Verifier.js:76
    const error = new Error('ERROR at ' + this._offset + ' (' + this._line + ', ' + this._pos + '): ' + msg);
                  ^
Error: ERROR at 393213 (15933, 13): Verifier cannot parse input: expected a value
    at Verifier._makeError (/home/guille/repos/livestories/node_modules/stream-json/utils/Verifier.js:76:19)
    at Verifier._processInput (/home/guille/repos/livestories/node_modules/stream-json/utils/Verifier.js:106:81)
    at Verifier._write (/home/guille/repos/livestories/node_modules/stream-json/utils/Verifier.js:67:10)
    at doWrite (_stream_writable.js:415:12)
    at writeOrBuffer (_stream_writable.js:399:5)
    at Verifier.Writable.write (_stream_writable.js:299:11)
    at ReadStream.ondata (_stream_readable.js:693:20)
    at ReadStream.emit (events.js:198:13)
    at ReadStream.EventEmitter.emit (domain.js:448:20)
    at addChunk (_stream_readable.js:288:12)

My Large json files fails to parse with an error

I have a DB dump file which has data from all database tables (240) in JSON format. The file has a JSON object with the database name as key and an array of the table's rows as value. Some of these tables have almost 10,000 or more rows. When I try to parse this JSON file using the StreamObject utility, it fails with the following exception after parsing the first table's data.

 events.js:141
      throw er; // Unhandled 'error' event

Error: Parser cannot parse input: unexpected characters
    at Parser._processInput (/Users/projects/node_modules/stream-json/Combo.js:472:23)
    at Parser.transform [as _transform] (/Users/projects/node_modules/stream-json/Combo.js:31:7)
    at Parser.Transform._read (_stream_transform.js:167:10)
    at Parser.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:300:12)
    at writeOrBuffer (_stream_writable.js:286:5)
    at Parser.Writable.write (_stream_writable.js:214:11)
    at ReadStream.ondata (_stream_readable.js:536:20)
    at emitOne (events.js:77:13)
    at ReadStream.emit (events.js:169:7)
    at readableAddChunk (_stream_readable.js:153:18)
    at ReadStream.Readable.push (_stream_readable.js:111:10)
    at onread (fs.js:1739:12)
    at FSReqWrap.wrapper [as oncomplete] (fs.js:576:17)

Here is how I am trying to parse the file. Please let me know if I am missing something. It would be good to continue parsing even if there is a problem parsing one table.

const js = require('stream-json/utils/StreamObject').make();

const fs = require('fs');

var stream = fs.createReadStream(filePath, {encoding: 'utf8'});

    js.output.on("data", function(data){
        console.log(data.index, "--", data.value);
    });

    js.output.on("end", function(){
       console.log("Finish called");
    });

    js.output.on("error", function(err){
        console.log("error !!!");
    });

    return stream.pipe(js.input);

Is Chunking Supported?

Thanks for the awesome util! When using StreamArray (or some other method), is there a way to set a number of objects to parse at a time, instead of just one at a time?

As things are:
data => //data is only one object, i.e. { value: ... key: ... }

What I want:
data => //data is an array of objects of max length X where X is my specified chunk size.

This would make database imports from JSON far more efficient. I can't find a way to do it from the docs though. Any ideas?
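The Batch utility listed near the top of this page looks like a fit. A sketch, assuming the module lives under utils/Batch and its option is called batchSize (worth double-checking against the wiki):

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');
const {batch} = require('stream-json/utils/Batch');

const pipeline = chain([
  fs.createReadStream('records.json'), // placeholder file with a top-level array
  parser(),
  streamArray(),
  batch({batchSize: 100})              // assumed option name: emit arrays of up to 100 {key, value} items
]);

pipeline.on('data', items => {
  // items.length <= 100; e.g. bulk-insert items.map(item => item.value) into a database
});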

Question: can StreamJsonObjects accept an empty stream?

I am using StreamJsonObjects to read a stream of json objects, and usually it works as expected. But StreamJsonObjects seems to want at least one object to be in the stream. It emits an error event if the stream happens to be empty, meaning that it simply closes without generating any data.

Is there a good way to have StreamJsonObjects accept an empty stream and end gracefully, simply not emitting any data objects?

Thanks!

Filter the parent object based on an attribute

Hi,
You have no idea how much more enjoyable it is to work with big json (dump from mongo) with your library!

It covers all I need so far, except being able to skip a record based on an attribute. My format is

[{
active: true,
...tons of attributes
},
{
active: false,
...tons of attributes
}]

I would want to filter to keep only the active records, but if I use Ignore, it will keep all the records and simply remove the active attribute.

Am I missing something obvious?

As a workaround, I'd create a separate stream and write to it only the active records, but it seems a bit weird.
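One way to read this with the same pattern as the introduction example above, assuming each record fits in memory once assembled: let StreamArray build every record and drop the inactive ones with a plain function in the chain.

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');

const pipeline = chain([
  fs.createReadStream('dump.json'),  // placeholder for the mongo dump
  parser(),
  streamArray(),                     // assembles each record of the top-level array
  data => (data.value && data.value.active ? data.value : null) // returning null drops a record
]);

pipeline.on('data', record => {
  // only active records arrive here
});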



Can you Stream Array on a sub property - where the top level is not an Array?

How would you use Stream-array on the following so we stream just the data property?

{
    'data':[ /* huge array */],
    'other_data':1,
    'more_data':2
}

currently I'm doing this...

var fs = require('fs');
var makeSource = require("stream-json");
var StreamArray = require("stream-json/utils/StreamArray");

var source = makeSource();
var stream = StreamArray.make();

stream.output.on("data", function(object){
  console.log(object.index, object.value);
});

stream.output.on("end", function(){
  console.log("done");
});

fs.createReadStream("./big_file.json", {encoding: 'utf8'}).pipe(stream.input);

but I need to filter the data first somehow...

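With the newer filters listed at the top of this page, Pick can select the data property before StreamArray. A sketch using the file name from the question:

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {pick} = require('stream-json/filters/Pick');
const {streamArray} = require('stream-json/streamers/StreamArray');

const pipeline = chain([
  fs.createReadStream('./big_file.json', {encoding: 'utf8'}),
  parser(),
  pick({filter: 'data'}),  // keep only the value under the top-level "data" key
  streamArray()            // emits {key: index, value: element} for each array item
]);

pipeline.on('data', ({key, value}) => console.log(key, value));
pipeline.on('end', () => console.log('done'));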

Issue with multi-byte representations of unicode code points.

By looking at the source of Parser.js, I suspect an issue with characters that are represented in UTF8 by multiple bytes.

Example: č may be encoded as [0x63, 0xcc, 0x8c], but there is no guarantee that these three bytes are passed to transform within the same buffer. And if, for example, it is passed within separate buffers [0x63, 0xcc] followed by [0x8c], then it will be decoded as the JavaScript string c�� before it is passed through to the actual parsing code.

Reading from program output multiple objects

Hi,

This library sounds great. However, I have no clue how to handle my current situation: I have a program outputting multiple JSON objects with only a newline as a separator, and I want to have the objects parsed without waiting for the end of the program, as it is a long-running process. Is this doable with any of the utilities provided in this package?

Example output:

{"device":"a", "place":"home"}
{"device":"b", "place":"out"}
{"device":"a", "place":"out"}
...

Thanks,
Nicolas
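The jsonl/Parser helper listed near the top of this page targets exactly this newline-delimited format. A sketch, assuming it exports a parser factory like the other modules do (otherwise parser({jsonStreaming: true}) piped into streamValues() achieves the same thing):

const {spawn} = require('child_process');
const {parser} = require('stream-json/jsonl/Parser'); // export name assumed from the library's conventions

const child = spawn('my-program');  // placeholder for the long-running program
const pipeline = child.stdout.pipe(parser());

pipeline.on('data', ({value}) => {
  console.log(value.device, value.place); // each line becomes an object as soon as it is complete
});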

Getting the full path

I'm trying to chunk a 12GB JSON file by streaming all 3rd level children. I've almost got it working but at the boundaries between chunks I get the wrong path.

I'm probably going about it wrong but this is a minimal example of how I'm trying to do it (mostly Frankensteined together from the examples and from some code @uhop has posted in response to other people's issues).

    var fs = require('fs');
    var Parser = require('stream-json/Parser');
    const {streamObject} = require('stream-json/streamers/StreamObject');
    const {pick} = require('stream-json/filters/Pick');

    const {Transform, Writable} = require('stream');

    var currentPath;
    const filter = pick({
        filter: function(path) {
            currentPath = path;
            return path.length >= 3;
        }
    });

    const inspector = new Writable({
        objectMode: true,
        write (object, encoding, callback) {
            DO_BUSINESS_LOGIC_WITH_MY_DATA({
                path: currentPath.join("/"),
                key: object.key,
                data: object.value
            }, callback);
        }
    });

    var file = fs.createReadStream(filename, {
        highWaterMark: 64 * 1024
    });
    var parser = new Parser();
    var pipeline = file
                        .pipe(parser)
                        .pipe(filter)
                        .pipe(streamObject())
                        .pipe(inspector);

This is mostly working but the problem is that currentPath is not always right once my inspector.write function is called. I think it's because filter parses the whole chunk before it gets to inspector.write and therefore path corresponds to the last path in a chunk, not the path of the object passed to inspector.write (which means it's wrong if a chunk starts on one path and ends on another).

Is there a better way to associate the full path with the object?

Parser emits numberChunks even if streamX options are all turned off

I have an issue with Parser packing values and Stringer using values. I'm packing everything since I need to do some operations on whole values. However I cannot get this to work. Please see below code:

import * as stream from "readable-stream";
import * as assert from "assert";
import * as Parser from "stream-json/Parser";
import { stringer } from "stream-json/Stringer";

const content = `{ "test": -1 }`;
const readable = new stream.Readable();
readable.push(content);
readable.push(null);

const parser = new Parser({
        packKeys: true,
        packStrings: true,
        packValues: true,
        packNumbers: true,
        streamNumbers: false,
        streamValues: false,
        streamKeys: false,
        streamStrings: false
});

let hasNumberChunk = false;
parser.on("data", x => hasNumberChunk = hasNumberChunk || x.name === "numberChunk");
parser.on("data", x => console.log("PARSER OUTPUT", x));

const stringerInstance = stringer({ useValues: true });
let output = "";
stringerInstance.on("data", data => output += data.toString());

stream.pipeline(
            readable,
            parser,
            stringerInstance, 
            (err) => {
                err ? done(err) : done();
                console.log("STRINGER OUTPUT", output);
                assert.ok(!hasNumberChunk);
                assert.strictEqual(output, content);
            });

This outputs the following:

PARSER OUTPUT { name: 'startObject' }
PARSER OUTPUT { name: 'keyValue', value: 'test' }
PARSER OUTPUT { name: 'startNumber' }
PARSER OUTPUT { name: 'numberChunk', value: '-' }
PARSER OUTPUT { name: 'numberValue', value: '-1' }
PARSER OUTPUT { name: 'endObject' }
STRINGER OUTPUT {"test":                                         

I think either I am doing something wrong or there are 2 issues:

  1. Parser configured this way should not emit numberChunk tokens.
  2. Stringer using values should skip tokens it's not told to stringify and continue if possible.

streaming large strings is slow

I have a need to stream json that includes very long strings (file content encoded in base64). When I stream it through this module, it's a lot slower than I expected. Digging into it a bit I found that the grammar is breaking strings down into 256 byte chunks (causing needless object allocations.) I've found that increasing the chunk size to 4096 (in the plainChunk grammar rule) boosts the speed significantly (I'm seeing about a 300% increase in speed with a minuscule memory footprint increase -- which puts the streamer almost back on par with a through stream that just copies content). I think it would be useful to add a configuration option to control this setting and have it well documented. For now I'll just modify the setting in my fork to 4096 (which I think is a much better default value anyway...)

Single string character in object value corrupted

We have a >200 MB .json file. Entry: {houses: [...]}
Found that strings are sometimes corrupted, with a single character replaced by hex [fd, fd].

{ guid: 'b74f9557-2cbf-4824-a697-956a8b73c8e6',
  address: 
   { region: 
      { guid: 'e76abf09-3148-42f6-85db-51edb09e72b7',
        actual: true,
        aoGuid: '92b30014-4d52-4e2e-892d-928142b924bf',
        aoLevel: 1,
        formalName: 'Свердловская',
        offName: 'Свердловская',
        shortName: 'обл',
        regionCode: '66',
        subjectCity: false },
     settlement: 
      { guid: 'b5817b1a-0785-4db8-98eb-af80b17a6b91',
        actual: true,
        aoGuid: '328c93c6-e76f-4250-a877-ce06805929f7',
        aoLevel: 6,
        formalName: 'Свободный',
        offName: 'Свободный',
        shortName: 'пгт',
        parentGuid: '92b30014-4d52-4e2e-892d-928142b924bf',
        regionCode: '66',
        subjectCity: false },
     street: 
      { guid: '59970a86-f025-4e9e-abe3-eb2562fbc361',
        actual: true,
        aoGuid: '0b7b25ac-2aad-447e-98cc-b79130cf2164',
        aoLevel: 7,
        formalName: 'Космонавтов',
        offName: 'Космонавтов',
        shortName: 'ул',
        parentGuid: '328c93c6-e76f-4250-a877-ce06805929f7',
        regionCode: '66',
        subjectCity: false },
     house: 
      { guid: '88a1a433-f4a2-4c03-87ea-42d3b14f7ee7',
        actual: true,
        houseGuid: 'b7b63d3d-78ae-4750-b414-3e2c6b1ae46a',
        aoGuid: '0b7b25ac-2aad-447e-98cc-b79130cf2164',
        postalCode: '624790',
        houseNumber: '21',
        isAddedManually: false,
        estStatus: 'DOM',
        strStatus: 'NE_OPREDELENO',
        aggregated: false,
        houseTextAddress: '21' },
     formattedAddress: '624790, обл Свердловская, пгт Свободный, ул Космонавтов, д. 21',
     deprecated: false }

So

Свердловская

breaks with

Свер��ловская

No bugs in .json
Code used to extract values:

    var pipeline = chain([
      fs.createReadStream('./json/extracted/'+ForceRegionParse+'/houses.json'),
      parser(),
      pick({filter: 'houses'}),
      streamArray(),
      async data => {
        await CheckHouseAwait(data.value, connection, logger);
      }
    ]);

Feat Request: Batch on input stream boundary

Thanks for adding the Batch feature re #47

However, what would really be helpful is a similar mechanism that batches objects by the boundaries of the input stream's internal buffer. This is typically the best time to dispatch messages to workers since the main thread's event loop is about to be busy reading/writing data between buffers in the streams above.

For example, every time 64kb worth of process.stdin's buffer data has been pushed thru the json parsing pipeline (i.e., after each 'data' event emitted by process.stdin has been parsed), I want to simply receive a single event on the other end containing the batch array of objects that completed parsing during that chunk. Since the main thread is about to be busy refilling the input stream's internal buffer (waiting for IO), this is the best time (performance-wise) for an application to decide if it wants to push buffered object data out to workers.

StreamArray doesn't play nice with async & await

I'm trying to process a huge JSON doc with 70k+ records. I'm testing this on a small data set of three objects to get the process function right.

Whilst StreamArray works nicely to go through them all, acting on them with further asynchronous functionality using await seems to break sequentiality.

The process function goes like so:

  1. look up mongo record
  2. if one exists, continue
  3. if one doesn't exist, create one.

The following code;

mongoose.connect('mongodb://localhost/vyda', { useMongoClient: true });
mongoose.Promise = global.Promise;

var StreamArray = require('stream-json/utils/StreamArray');
var stream = StreamArray.make();

stream.output.on('data', processObject);

fs.createReadStream(fname).pipe(stream.input);

async function processObject(obj){

  let practice, practitioner;

  let practiceName = obj.value.practice || obj.value.name;
  console.log('-- 1 - processing object #' + obj.index);
  console.log('-- 2 - searching for practice: ' + practiceName);

  // WHY ISN'T THIS SYNCHRONOUS!?
  let existingPractice = await models.Practice.find({name: practiceName}).exec();
  console.log('-- 3 - found existing practice: ', existingPractice);
  practice = await createPracticeObject(obj.value);
  console.log('-- 4 - practice created: ' + practice.name);
}

gives me the output;

-- 1 - processing object #0
-- 2 - searching for practice: BODYFOCUS PHYSIO P/L
-- 1 - processing object #1
-- 2 - searching for practice: BODYFOCUS PHYSIO P/L
-- 1 - processing object #2
-- 2 - searching for practice: CWC QLD PTY LTD
-- 3 - found existing practice:  []
-- 3 - found existing practice:  []
-- 3 - found existing practice:  []
-- 4 - practice created: BODYFOCUS PHYSIO P/L
-- 4 - practice created: BODYFOCUS PHYSIO P/L
-- 4 - practice created: CWC QLD PTY LTD

I would have thought that the sequence should run 1-2-3-4, 1-2-3-4, 1-2-3-4.

It gives me the feeling that this stream isn't synchronous. The docs say that it gets processed sequentially. What am I doing wrong?
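One way to keep things sequential, assuming stream-chain is used: make the async work a step of the chain itself, so the pipeline awaits the returned promise before taking the next item. A 'data' listener, by contrast, is never awaited by node streams, which is why steps 1-2 race ahead of 3-4.

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');

const pipeline = chain([
  fs.createReadStream(fname),  // fname as in the question
  parser(),
  streamArray(),
  async ({key, value}) => {
    // your existing mongo logic, awaited one record at a time
    await processObject({index: key, value});
  }
]);

pipeline.on('data', () => {});  // keep the chain flowing
pipeline.on('end', () => console.log('all records processed in order'));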

Filter doesn't work as expected with subObjects

If parsing through a JSON object like the one below using new Filter({ filter: /hugeObj1\.child1/ }), the filter never pushes hugeObject1 onto its own _stack, thereby preventing all attempts to process data after this filter. The only way I can get it to work is to use new Filter({ filter: /child1/ }) as the filter, but then it allows processing both hugeObj1.child1 and hugeObj2.child1. Is this the expected result of Filter or am I trying to use it in a way that's not intended?

{
  "smallObj1": {},
  "smallObj2": {},
  "smallObj3": {},
  "hugeObj1": {
    "child1": { /* data */ },
    "child2": {}
  },
  "hugeObj2": {
    "child1": { /* data */ },
    "child2": {}
  },
  "smallObj4": {}
}

Howto: Reconstruct Objects from JSON Object array Stream

I'm streaming a very large array of objects and cannot seem to figure out how to pluck out each object as a whole.

Example Stream Payload:

[
{foo: "aaa1", bar: "bbb1", baz: "ccc1" },  
{foo: "aaa2", bar: "bbb2", baz: "ccc2" },  
{foo: "aaa3", bar: "bbb3", baz: "ccc3" },  
{foo: "aaa4", bar: "bbb4", baz: "ccc4" },  
{foo: "aaa5", bar: "bbb5", baz: "ccc5" }  
]

I have a stream pulling in the JSON (array) payload above and I want to be able to capture each object as a whole, as they are streamed in. I haven't been able to reconstruct the above objects without having to go through each keyValue event, for example.

inboundStream
 .pipe(stream.input...) <-- Some combination of stream-json tooling
 .on('data', function(data) {
   // how do I reconstruct each object as they are streamed in--> {foo: "aaa1", bar: "bbb1", baz: "ccc1" }
});

Appreciate any input you can provide!
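A sketch of the usual approach, assuming the current streamers API: StreamArray assembles each element of the top-level array, and its withParser() helper bundles the JSON parser so the raw stream can be piped straight in.

const StreamArray = require('stream-json/streamers/StreamArray');

const pipeline = inboundStream.pipe(StreamArray.withParser()); // inboundStream as in the question

pipeline.on('data', ({key, value}) => {
  // value is a whole assembled object, e.g. {foo: "aaa1", bar: "bbb1", baz: "ccc1"}
  console.log(key, value);
});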

typescript support

is anyone thinking about adding typescript support for this library? :)

how to create a write stream?

Hi, I am trying to learn more about Node's stream library, and didn't see any examples about writing to a JSON file using stream-json, especially one record at a time. Would you be able to point me in the right direction on how to use stream-json to write a large JSON file in an efficient way? Thanks.

const fs = require( "fs" );

// Set timer
console.time("Timer");

var outputStream = fs.createWriteStream( __dirname + "/nostream.json" );

var members = [];
for(var i=0; i<(25000 * 18); i++){
    members.push({
        child: Math.random().toString(36).substring(7),
        parent: Math.random().toString(36).substring(7),
        propertyName: Math.random().toString(36).substring(7),
        propertyValue: Math.random().toString(36).substring(7)
    });
}

outputStream.write(JSON.stringify(members));
outputStream.end();

outputStream.on(
    "finish",
    function handleFinish() {
        console.timeEnd("Timer");
        // Timer: 1267.696ms
    }
);
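For the writing direction, the Disassembler + Stringer pair listed near the top of this page goes the opposite way from the parser. A sketch that writes one record at a time; the makeArray option is an assumption (check the Stringer wiki page), and without it the output would be concatenated values rather than a single array:

const fs = require('fs');
const {chain} = require('stream-chain');

const {disassembler} = require('stream-json/Disassembler');
const {stringer} = require('stream-json/Stringer');

const pipeline = chain([
  disassembler(),              // JavaScript objects -> token stream
  stringer({makeArray: true})  // token stream -> JSON text; makeArray (assumed) wraps the values in [...]
]);
pipeline.pipe(fs.createWriteStream(__dirname + '/stream.json'));

for (let i = 0; i < 1000; ++i) {
  // a real loop should respect the return value of write() for backpressure
  pipeline.write({child: Math.random().toString(36).substring(7)});
}
pipeline.end();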

Allow streaming of binary property data

Given a JSON document such as

{
  "foo": "bar",
  ...,
  "binary-checksum": <checksum>,
  "binary-length": <uint>,
  "binary-data": <binary-data-stream>
}

The parser should return a readable stream when accessing the "binary-data" instead of first loading it into memory before handing it out to the consumer.

Would that be possible?

Looking at the existing documentation


{name: "startKey"};
{name: "stringChunk", value: "actual string value"};
{name: "endKey"};

{name: "startString"};
{name: "stringChunk", value: "actual string value"};
{name: "endString"};

it seems to be a bit misleading. Does stringChunk mean that it will parse bits of the incoming data as binary string data? And how large would such a chunk be? Can this be controlled?

Inconsistent filter behavior

Consider the following simple JSON file

[
  {"data": {"a": 1,"b": 2}},
  {"data": {"a": 2,"b": 2}}
]

And the following code

const parser = require('stream-json');
const {filter} = require('stream-json/filters/Filter');
const {streamArray} = require('stream-json/streamers/StreamArray');

fs.createReadStream('test.json')
  .pipe(parser())
  .pipe(filter({filter: /data/}))
  .pipe(streamArray())
;

The data piped from this is

{ data: { a: 1, b: 2 } }
{ data: {} }
{ data: { a: 2 } }
{ data: { b: 2 } }

It's not clear to me whether the expected output should be as in the first line, or as the 3 subsequent lines, but this doesn't seem right. Am I doing something wrong?

Streaming/buffering based on object's property filled with massive array

Hey thanks for writing this library, it seems really flexible! I'm just starting to learn streams with Node so conceptually I don't understand the full power of the tool yet. I have a use case similar to the closed issue here: #6 regarding an object property that is a large array.

{
    'name':'test name',
    'dataset':[ { 'subobj1' : 'subval1', 'subobjchild1' : { ...} }, { 'subobj2' : 'subval2', 'subobjchild1' : { ...} } ... /* huge array of objects */],
    'description':'dataset from 5/16/18',
    'version':1
}

In my example the 'dataset' property is a massive array of objects that I need to process individually. I don't need to recurse down these objects, I just need to iterate through each object in obj.dataset. This file can get quite large so loading the entirety of the 'dataset' property in memory is out of the question. I found a question/answer on stack overflow that led me to this library but it doesn't represent my exact situation: (https://stackoverflow.com/questions/42896447/parse-large-json-file-in-nodejs-and-handle-each-object-independently)

My processing can be asynchronous, but the end result is each subobj in dataset should be run through something like JSON parse and then have its schema verified, go through some additional validation, and be inserted into a new database via mongoose.

I think there's a way to do this with JSONstream (https://stackoverflow.com/a/35043868/3316802) but I'm trying to learn and understand streams more and I'm curious if there's an easy solution to this using this library.

How pipe from StreamArray to Stringer

I have this:

const pipeline = chain([
  fs.createReadStream('./test.json'),
  parser(),
  streamArray(),
  stringer(),
]);

pipeline.on('data', d => {
  console.log('data:', d);
});

Here's what I get:

data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>
data: <Buffer 20>

My test.json file is an array containing valid JSON objects:

[
  {
    "a": 1,
    "b": [3, 4]
  },
  {
    "a": 2,
    "b": [5, 6]
  },
  {
    "a": 2,
    "b": []
  }
]
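A likely explanation: streamArray() emits assembled {key, value} objects, while stringer() expects the parser's token stream, hence the stray space buffers. One sketch that re-tokenizes the assembled values before stringifying, assuming stream-chain's plain-function steps:

const fs = require('fs');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {streamArray} = require('stream-json/streamers/StreamArray');
const {disassembler} = require('stream-json/Disassembler');
const {stringer} = require('stream-json/Stringer');

const pipeline = chain([
  fs.createReadStream('./test.json'),
  parser(),
  streamArray(),        // {key, value} objects, not tokens
  data => data.value,   // keep just the assembled value
  disassembler(),       // value -> tokens
  stringer()            // tokens -> JSON text (concatenated values, not a single array)
]);

pipeline.on('data', d => console.log('data:', d.toString()));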

Stringer does not encode newlines in strings

const {disassembler} = require('stream-json/Disassembler');
const {stringer} = require('stream-json/Stringer')

function src () {
  const stream = new Readable({
    objectMode: true,
    read() {}
  })
  stream.push({ message: 'hello\nworld' });
  return stream
}


const pipeline = chain([
   src(),
   disassembler(),
   stringer()
]);

pipeline.pipe(process.stdout);

... outputs:

{"message":"hello
world"}

This is invalid JSON - it should output {"message":"hello\nworld"}

Parser failing on unexpected character

Context

I was originally trying to parse my package.json for this new project I'm setting up but always ended with some unexpected character error. I validated my JSON is valid on jsonlint.com. But I decided to try to duplicate your test scenario for the parser. I tried this on both node '4.5' and '6.9'.

'use strict';

const Parser = require('stream-json/Parser');
const parser = new Parser();
const Reader = require('stream').Readable;

class StringReader extends Reader {

    constructor(string, options) {
        super(options);
        this._string = string;
    }

    _read(size) {
        this.push(this._string, 'utf8');
    }
}

const stringReader = new StringReader('{"a": 1, "b": true, "c": ["d"]}')
const pipeline = stringReader.pipe(parser);
pipeline.on('data', (chunk) => {
    console.log(chunk);
});

I built a StringReader to feed in your mock JSON. While I get the signaling for the data event and the chunks match up to the assertions in your test, I also get this uncaught exception error down below.

{ id: '{', value: '{' }
{ id: '"', value: '"' }
{ id: 'plainChunk', value: 'a' }
{ id: '"', value: '"' }
{ id: ':', value: ':' }
{ id: 'nonZero', value: '1' }
{ id: ',', value: ',' }
{ id: '"', value: '"' }
{ id: 'plainChunk', value: 'b' }
{ id: '"', value: '"' }
{ id: ':', value: ':' }
{ id: 'true', value: 'true' }
{ id: ',', value: ',' }
{ id: '"', value: '"' }
{ id: 'plainChunk', value: 'c' }
{ id: '"', value: '"' }
{ id: ':', value: ':' }
{ id: '[', value: '[' }
{ id: '"', value: '"' }
{ id: 'plainChunk', value: 'd' }
{ id: '"', value: '"' }
{ id: ']', value: ']' }
{ id: '}', value: '}' }
events.js:160
      throw er; // Unhandled 'error' event
      ^

Error: Parser cannot parse input: unexpected characters
    at Parser._processInput (/Users/lchan/Projects/bunnybus-cli/node_modules/stream-json/Parser.js:419:23)
    at Parser.transform [as _transform] (/Users/lchan/Projects/bunnybus-cli/node_modules/stream-json/Parser.js:23:7)
    at Parser.Transform._read (_stream_transform.js:167:10)
    at Parser.Transform._write (_stream_transform.js:155:12)
    at doWrite (_stream_writable.js:334:12)
    at writeOrBuffer (_stream_writable.js:320:5)
    at Parser.Writable.write (_stream_writable.js:247:11)
    at StringReader.ondata (_stream_readable.js:555:20)
    at emitOne (events.js:96:13)
    at StringReader.emit (events.js:188:7)

Usage Question: Simple and Efficient JSON Filtering

I am trying to do a simple filter on a JSON document as efficiently as possible.
So given a JSON input document:

{
  "data": [{
    "type": "articles",
    "id": "1",
    "attributes": {
      "title": "JSON API paints my bikeshed!",
      "body": "The shortest article. Ever.",
      "created": "2015-05-22T14:56:29.000Z",
      "updated": "2015-05-22T14:56:28.000Z"
    },
    "relationships": {
      "author": {
        "data": {"id": "42", "type": "people"}
      }
    }
  }],
  "included": [
    {
      "type": "people",
      "id": "42",
      "attributes": {
        "name": "John",
        "age": 80,
        "gender": "male"
      }
    }
  ]
}

I need to be able to read from an input stream, filter out all "id" elements, but keep the exact structure/order of the document on the output stream.

The filtered output should look like:

{
  "data": [{
    "type": "articles",
    "attributes": {
      "title": "JSON API paints my bikeshed!",
      "body": "The shortest article. Ever.",
      "created": "2015-05-22T14:56:29.000Z",
      "updated": "2015-05-22T14:56:28.000Z"
    },
    "relationships": {
      "author": {
        "data": {"type": "people"}
      }
    }
  }],
  "included": [
    {
      "type": "people",
      "attributes": {
        "name": "John",
        "age": 80,
        "gender": "male"
      }
    }
  ]
}
  • "id" is arbitrary and occurs anywhere within the document so i need to recurse into the document elements and filter any "id" elements that are found no matter where in the document.

I am reading messages (documents) off a kafka topic so I am having to create a readable stream from a string object and then assembling the pipeline:


var Parser = require("stream-json/Parser");
var Streamer = require("stream-json/Streamer");
var Filter = require("stream-json/Filter");
var es = require('event-stream')

...

  function onMessage (message) {
    console.log("message received!");
    // console.log('%s read msg Topic="%s" Partition=%s Offset=%d', this.client.clientId, message.topic, message.partition, message.offset);
    var source = new Readable()
    source.push(message.value);
    source.push(null);
    var parser = new Parser();
    var streamer = new Streamer();
    var filter_any = '*';
    var f = new Filter({filter: filter_any});
    source.pipe(parser)
    .pipe(streamer)
    .pipe(f)
    .pipe(es.mapSync(function (data) {
      console.log(data);
    }));
  }

And I do see the "id" element parts being emitted when encountered using a '*' regex filter:
...

{ name: 'keyValue', value: 'id' }
{ name: 'startString' }
{ name: 'stringChunk', value: '1' }
{ name: 'endString' }
...

So what is the best way to filter out the "id" elements and reconstruct the document on the output stream?

  1. Write some kind of function in the pipeline to detect {name: 'keyValue', value: 'id'} and then remove those parts? How would I remove the "parts" from the pipeline, from the 'keyValue' token to the next "start*" part? Then the filter is likely not necessary.
  2. Use the filter function with some type of regex that would exclude "id" parts in the pipeline?

Thanks
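With the newer API listed at the top of this page, Ignore plus Stringer covers this without assembling the document: drop every "id" key at the token level and re-serialize the rest. A sketch (the message handling mirrors the question; Readable.from needs a reasonably recent Node):

const {Readable} = require('stream');
const {chain} = require('stream-chain');

const {parser} = require('stream-json');
const {ignore} = require('stream-json/filters/Ignore');
const {stringer} = require('stream-json/Stringer');

function onMessage(message) {
  const pipeline = chain([
    Readable.from([message.value]),  // the Kafka message payload as a readable stream
    parser(),
    ignore({filter: /\bid\b/}),      // drop any path segment named "id", wherever it occurs
    stringer()
  ]);
  pipeline.on('data', chunk => process.stdout.write(chunk)); // the filtered JSON text comes out here
}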

Question: filter object and return new json stream

I'm having a hard time parsing the documentation for a way to parse a JSON stream, filter out a certain key by entire object, and return a new JSON stream. I've managed to compose Parser, Streamer and Filter together, but Filter does not return a JSON stream; it returns a sort of event stream. How can I turn the Filter output back into JSON?

example:

createServer()
  .on('request', (req, res) => {
    const parser = new Parser();
    const streamer = new Streamer();
    const filter = new Filter({ filter: () => true });

    req
      .pipe(parser)
      .pipe(streamer)
      .pipe(filter)
   // .pipe(res)
  })
  .listen(process.env.PORT || 4000);

StreamObject with jsonStreaming: true, startObject never triggered

I'm trying to stream a json file with this format

// start of file
{key: 1, key2: "value"}
{key: 4, key2: "value4"}
// end of file

Where each line is supposed to be a separate object.

    const parser = streamObject.withParser({jsonStreaming: true});
    parser.on('startObject', () => console.log('starting a new object'));
    parser.on('data', (data) => console.log('data', data));
    parser.on('end', () => resolve);
    s3.getObject(params).createReadStream().pipe(parser);

the "startObject" event is never triggered however.

Implement JSON Streaming

Implement JSON streaming as mentioned in #17.

The change will touch a parser.

Possible complications to consider:

  • Right now the project can use 3 different parsers, and all of them should be updated, or it should be verified that they do not try to consume symbols as soon as a valid object is finished.
    • Should we use just one in future versions?
  • It may require rethinking the API (events sent at different stages of parsing).
  • It may require changing error processing and how errors are produced.
