gron's People

Contributors

akavel, alblue, bruceblore, csabahenk, cwarden, gummiboll, haraldnordgren, iamthemovie, jagsta, kseistrup, mattn, nwidger, orivej, saka1, simepel, tomnomnom


gron's Issues

Can't ungron strings containing both escapes and semi-colons

I was playing around with your tool on JSON output from Gmail. gron works fine, but I get a "cannot parse input" error if I try to pipe it back into ungron.

Narrowing it down, the problem seems to be reproducible with this example:

$ echo 'json.payload.headers[4].value = "from o1.email.codeship.io (o1.email.codeship.io. [192.254.119.116])        by mx.google.com with ESMTPS id d79si3761733ioj.86.2015.10.09.12.02.54        for \[email protected]\u003e        (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);        Fri, 09 Oct 2015 12:02:54 -0700 (PDT)";' | gron -u
Fatal (failed to parse input statements)

A little bit of digging reveals a minimal test case:

echo 'json.value = "\u003c ;"' | ./gron -u

It seems to be a combination of the character escape and the semi-colon in the string.

Let me know if I can be more helpful.
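
For what it's worth, Go's strconv.Unquote handles \uXXXX escapes, so one robust shape for the value parser is to locate the statement separator first and only then unquote, rather than scanning for delimiters that may also occur inside the string. A minimal sketch, assuming the simple path = value; form (gron's real grammar is richer):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseValue extracts and unquotes the value part of a gron statement.
// Splitting on the first " = " and trimming only the final ";" avoids
// being confused by semicolons or escapes inside the quoted string.
func parseValue(stmt string) (string, error) {
	i := strings.Index(stmt, " = ")
	if i < 0 {
		return "", fmt.Errorf("invalid statement: %q", stmt)
	}
	raw := strings.TrimSuffix(strings.TrimSpace(stmt[i+3:]), ";")
	if strings.HasPrefix(raw, `"`) {
		return strconv.Unquote(raw) // handles \u003c and friends
	}
	return raw, nil
}

func main() {
	v, err := parseValue(`json.value = "\u003c ;";`)
	fmt.Printf("%q %v\n", v, err) // "< ;" <nil>
}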

Better handling of big json files

Currently, running gron on large JSON files is very slow. For example, a 40MB file takes over a minute:

> time gron big.json > foo

real    1m28.850s
user    1m37.038s
sys 0m2.333s

My guess is it's in the sorting phase. Would it be possible to avoid sorting altogether? Maybe doing a streaming decode of the JSON would be helpful too.

At the very least, it should be possible to disable sorting via a command-line option.
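
A sketch of the streaming idea using encoding/json's token API (illustrative only, not gron's actual code; note that keys are appended naively here, whereas real gron must bracket-quote non-identifier keys):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// walk emits one statement per value as tokens arrive, so the whole
// document is never held in memory and nothing is sorted; statements
// come out in document order.
func walk(dec *json.Decoder, path string) error {
	t, err := dec.Token()
	if err != nil {
		return err
	}
	switch v := t.(type) {
	case json.Delim:
		switch v {
		case '{':
			fmt.Printf("%s = {};\n", path)
			for dec.More() {
				key, err := dec.Token()
				if err != nil {
					return err
				}
				if err := walk(dec, fmt.Sprintf("%s.%v", path, key)); err != nil {
					return err
				}
			}
			_, err = dec.Token() // consume the closing '}'
			return err
		case '[':
			fmt.Printf("%s = [];\n", path)
			for i := 0; dec.More(); i++ {
				if err := walk(dec, fmt.Sprintf("%s[%d]", path, i)); err != nil {
					return err
				}
			}
			_, err = dec.Token() // consume the closing ']'
			return err
		}
	case string:
		fmt.Printf("%s = %q;\n", path, v)
	case nil:
		fmt.Printf("%s = null;\n", path)
	default:
		fmt.Printf("%s = %v;\n", path, v)
	}
	return nil
}

func main() {
	if err := walk(json.NewDecoder(os.Stdin), "json"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}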

Automatically switch to stream mode for JSONL?

I recently got a multi-gigabyte JSONL file, and before committing to a downstream pipeline that would run for many minutes before producing output, I ran:

$ head -20 entries.jsonl | gron

and discovered that gron only interpreted the first line. So I ran:

$ head -20 entries.jsonl | jq -s '.'  > 20entries.json

and used this temporary file as my test data.

It was only after this that I noticed the -s option, which I had probably read in --help when I first tried gron but had since forgotten. This works equally well:

$ head -20 entries.jsonl | gron -s

I wonder if JSONL couldn’t be autodetected? At a minimum, if a filename is given and it ends with the suffix .jsonl, it should be possible to turn on stream mode. Perhaps if there’s a pattern matching /[\/=&?]jsonl\b/ in a URL, too.

Besides suffix matching, I don’t know whether it’s realistic to switch to stream mode if the first JSON object terminates, and there’s still a newline and another JSON object remaining in the input.

If this suggestion were implemented, a --no-stream option would need to be added in case somebody decided to name their files xxx.jsona, xxx.jsonb, xxx.jsonc, ... xxx.jsonl for some pathological reason, or if autodetection of streams is done by analyzing the input text.
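
A sketch of the suffix half of the heuristic (the function name and the .ndjson alias are my own; content-based probing would be a separate, more invasive check):

package main

import (
	"fmt"
	"strings"
)

// looksLikeJSONL enables stream mode from the file name alone.
func looksLikeJSONL(name string) bool {
	n := strings.ToLower(name)
	return strings.HasSuffix(n, ".jsonl") || strings.HasSuffix(n, ".ndjson")
}

func main() {
	fmt.Println(looksLikeJSONL("entries.jsonl"))  // true
	fmt.Println(looksLikeJSONL("20entries.json")) // false
}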

Please do not change output strings

Please consider the following JSON:

{
  "dependencies": {
    "ssb-client": "http://localhost:8989/blobs/get/&EAaUpI+wrJM5/ly1RqZW0GAEF4PmCAmABBj7e6UIrL0=.sha256",
    "ssb-mentions": "http://localhost:8989/blobs/get/&GjuxknqKwJqHznKueFNCyIh52v1woz5PB41vqmoHfyM=.sha256"
  }
}

Seemingly, I can gron and ungron this to achieve an identical output:

$ gron < test.json
json = {};
json.dependencies = {};
json.dependencies["ssb-client"] = "http://localhost:8989/blobs/get/&EAaUpI+wrJM5/ly1RqZW0GAEF4PmCAmABBj7e6UIrL0=.sha256";
json.dependencies["ssb-mentions"] = "http://localhost:8989/blobs/get/&GjuxknqKwJqHznKueFNCyIh52v1woz5PB41vqmoHfyM=.sha256";
$ gron < test.json | ungron
{
  "dependencies": {
    "ssb-client": "http://localhost:8989/blobs/get/&EAaUpI+wrJM5/ly1RqZW0GAEF4PmCAmABBj7e6UIrL0=.sha256",
    "ssb-mentions": "http://localhost:8989/blobs/get/&GjuxknqKwJqHznKueFNCyIh52v1woz5PB41vqmoHfyM=.sha256"
  }
}

However, if stdout is not a TTY, what you get is something entirely different:

$ gron < test.json | ungron | cat   # same thing happens if you redirect to a file
{
  "dependencies": {
    "ssb-client": "http://localhost:8989/blobs/get/\u0026EAaUpI+wrJM5/ly1RqZW0GAEF4PmCAmABBj7e6UIrL0=.sha256",
    "ssb-mentions": "http://localhost:8989/blobs/get/\u0026GjuxknqKwJqHznKueFNCyIh52v1woz5PB41vqmoHfyM=.sha256"
  }
}

IMHO, gron should produce identical output no matter what. Ideally, gron shouldn't try to outsmart the user by interpreting string values or displaying them differently (even if the meaning is the same). At least provide a command-line switch so that the original strings are preserved.

Cheers.
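
The \u0026 is the signature of Go's encoding/json, which HTML-escapes &, <, and > by default; presumably only the TTY (colorized) code path avoids it. The standard library makes this switchable — a demonstration of the stdlib knob, not of gron's internals (the URL value is shortened for illustration):

package main

import (
	"encoding/json"
	"os"
)

func main() {
	v := map[string]string{"ssb-client": "http://localhost:8989/blobs/get/&abc=.sha256"}

	// Default encoder: & is emitted as \u0026.
	json.NewEncoder(os.Stdout).Encode(v)

	// With HTML escaping off, the string round-trips byte for byte.
	enc := json.NewEncoder(os.Stdout)
	enc.SetEscapeHTML(false)
	enc.Encode(v)
}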

gron won't connect to insecure https endpoints

I'm unable to connect to https endpoints with self signed certificates due to the requirement to validate the server certificate with the default transport for net/http clients.

The ability to connect to insecure endpoints would be helpful, even if arguably not a good idea.
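
For reference, this is a one-line change on Go's http.Client. An opt-in flag (say -k, mirroring curl — the flag name is my suggestion, not an existing option) could build the client like this (the URL is a placeholder):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	// A client that skips certificate verification — the kind of
	// opt-in behaviour an insecure flag could enable.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://self-signed.example.com/data.json")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}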

The statement formatter is set globally

Currently the statement formatter is set globally.

It would be better to be explicit about it, but there's no obvious place for it to live at the moment. Perhaps the statements type should be converted to a struct with a formatter field.
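
A sketch of that shape (illustrative only, not the actual codebase):

package main

import "fmt"

// formatter renders a single statement; storing one per statements
// value removes the need for a global.
type formatter func(s string) string

type statements struct {
	stmts  []string
	format formatter
}

func (ss statements) Print() {
	for _, s := range ss.stmts {
		fmt.Println(ss.format(s))
	}
}

func main() {
	ss := statements{
		stmts:  []string{`json = {};`, `json.foo = "bar";`},
		format: func(s string) string { return s }, // a colour formatter would go here
	}
	ss.Print()
}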

gron should ignore ANSI colors in its input

I was forcing colorize mode so that I would still see the color when I piped the output to grep:

$ gron -c "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | grep name
json[0].commit.author.name = "Tom Hudson";
json[0].commit.committer.name = "GitHub";

I then tried to pipe the output to ungron, and got a really unhelpful error:

$ gron -c "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | grep name | ungron
ungron failed for ``: invalid statement

gron / ungron should ignore any ANSI formatting codes that it or another tool might have generated, as long as they don't occur inside JSON strings (in which case they should probably be preserved and converted to \u00xx format for readability).

It would also be good if the error message could be improved, by giving a line number and the error character in \u00xx format if it is non printable.
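
Stripping the colour codes before parsing is straightforward, though the subtlety the issue mentions is real: a blanket regexp like the one below would also eat escape sequences that legitimately occur inside JSON strings. A minimal sketch:

package main

import (
	"fmt"
	"regexp"
)

// ansi matches CSI colour/formatting sequences like \x1b[34;1m.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
	in := "\x1b[34;1mjson\x1b[0m.\x1b[34;1mfoo\x1b[0m = \x1b[33m\"bar\"\x1b[0m;"
	fmt.Println(ansi.ReplaceAllString(in, ""))
	// Output: json.foo = "bar";
}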

Scanner error: token too long

When using gron -s, I am seeing the following error:

bufio.Scanner: token too long: %!s(MISSING)

I'm happy to submit a PR increasing the default buffer size to something you'd approve. Alternatively, I was thinking about how the buffer size could be provided as an option - I noticed there's only a single option passed in to the function now.
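
Two notes on this. First, the %!s(MISSING) suffix is fmt's marker for a verb with no matching argument, so the error-formatting call itself has a small bug. Second, the limit is bufio.Scanner's default of bufio.MaxScanTokenSize (64 KiB), and Scanner.Buffer raises it — a sketch of the fix, with an arbitrary 10 MiB cap:

package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	// Default limit is bufio.MaxScanTokenSize (64 KiB); raise it for
	// long single-line JSON values. 10 MiB here is an arbitrary example.
	scanner.Buffer(make([]byte, 64*1024), 10*1024*1024)
	for scanner.Scan() {
		fmt.Println(len(scanner.Text()))
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}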

add line number to error messages

Very cool tool. I haven't done anything serious with it yet. I've only been playing around so far. But I did notice that error messages don't include the line number they occurred on. For large files, that could be a bit irksome.

gron

I tried a few things just to see what kind of errors I'd get. All of them made sense, but I wouldn't want to have to go hunting through a huge JSON file if I didn't have to. Line numbers in the errors would be appreciated.

--ungron

In most situations, the error message should give you enough context to figure out where it came from despite the lack of a line number. However, I did discover while fooling around that trying to ungron a line that looks like "test"; will throw ungron failed for ``: invalid statement. That doesn't give me anything to go on to figure out where the problem is.
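
A sketch of what line-numbered errors could look like on the ungron side (parseStatement is a stand-in for the real parser):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseStatement stands in for the real statement parser.
func parseStatement(s string) error {
	if !strings.Contains(s, "=") {
		return fmt.Errorf("invalid statement")
	}
	return nil
}

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for n := 1; scanner.Scan(); n++ {
		line := scanner.Text()
		if err := parseStatement(line); err != nil {
			// Including the line number and offending text makes the
			// failure findable in large inputs.
			fmt.Fprintf(os.Stderr, "line %d: %v: %q\n", n, err, line)
		}
	}
}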

Root object other than "json"

Hello. I have a minor feature suggestion.

I'd like to have a -v/--var option that sets the name of the root output variable:

echo '[1,2]' | gron -v 'a'
a = [];
a[0] = 1;
a[1] = 2;

My use case is replacing a nested leaf with content from another json file,
so in this case I'd have a complicated var like 'json[0].key1' and replace the input line with multiple output lines.

(I can imagine just putting a compact single line chunk of json as a value, but that isn't permitted either currently. There may also be other ways to tackle this problem)

Here's a more complicated example:

$ head input.json a.json b.json
==> input.json <==
[ { "key1": "a", "key2": 123}, { "key1": "b", "key2": 345} ]

==> a.json <==
{ "id": "a", "value": [1,2] }

==> b.json <==
{ "id": "b", "value": [3,4] }

Here's a pipeline that greps the "key1" lines and replaces them with the content from another file:

$ gron input.json | perl -ple 'if(/^(.*key1) = "(.*?)";$/) { $_ = `gron $2.json | sed "s/^json/$1/"`; chomp }' | gron -u | jq -c
[{"key1":{"id":"a","value":[1,2]},"key2":123},{"key1":{"id":"b","value":[3,4]},"key2":345}]

With this suggestion, the sed could be dropped:

gron input.json | perl -ple 'if(/^(.*key1) = "(.*?)";$/) { $_ = `gron -v $1 $2.json`; chomp }' | gron -u

Thank you

Document compatibility of json filters

I use this tool all the time to automate tasks over JSON APIs with shell scripts. It's the quickest way to get a working jq selector. Most of the time it works, but I take it that compatibility is not full.

It would be nice to have the limitations documented in the readme.

gronning APIs where an authentication token is needed

Dear Tom,

Many thanks for this very useful package, which I have already sent to people around me, as I am sure they will find it extremely useful.
Would there be any way of using this with APIs where authentication is needed? I tried, for example, and got this message:

json = {};
json.detail = "Authentication credentials were not provided.";

Best wishes,
Dimitris
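
A workaround, assuming the API accepts a token header: do the authenticated fetch with another tool and pipe the body in, e.g. curl -H "Authorization: Bearer $TOKEN" https://api.example.com/endpoint | gron (the URL and header shown are placeholders).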

preserving JSON order

Nice tool, really... But I was surprised that the --no-sort option doesn't seem to preserve the order of the keys in the input file. When I generate JSON files, I generally specify my own custom order. Working on a JSON file where, for instance, the first name always precedes the family name, makes a lot of things easier to check or modify inside a text editor.

Is it possible to make --no-sort preserve the original order? Ideally, ungroning would keep this order too.

Thanks!

Enhancement: Add option to exclude `null` from array output with `--ungron`

According to the gron documentation, "To preserve array keys, arrays are padded with null when values are missing". For example, using the test data in the Git repo:

$ gron testdata/one.json | fgrep -w 'fum' | gron --ungron
{
  "five": {
    "alpha": [
       null,
       "fum"
    ]
  }
}

Would it be possible to add a command line option to exclude that artificial null, for example:

$ gron testdata/one.json | fgrep -w 'fum' | gron --ungron --nonull
{
  "five": {
    "alpha": [
       "fum"
    ]
  }
}

Thanks!
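
A sketch of what such a flag could do to the decoded tree before re-serialising (hypothetical behaviour; --nonull is the requester's proposed name, not an existing option):

package main

import (
	"encoding/json"
	"fmt"
)

// dropNulls removes nil elements from arrays, recursively.
func dropNulls(v interface{}) interface{} {
	switch t := v.(type) {
	case []interface{}:
		out := make([]interface{}, 0, len(t))
		for _, e := range t {
			if e == nil {
				continue
			}
			out = append(out, dropNulls(e))
		}
		return out
	case map[string]interface{}:
		for k, e := range t {
			t[k] = dropNulls(e)
		}
		return t
	default:
		return v
	}
}

func main() {
	var v interface{}
	json.Unmarshal([]byte(`{"five":{"alpha":[null,"fum"]}}`), &v)
	b, _ := json.MarshalIndent(dropNulls(v), "", "  ")
	fmt.Println(string(b)) // the artificial null is gone
}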

multiple json objects in input?

Hi,
thanks for gron, looks really useful!
I tried feeding in multiple json objects but noticed only the first one is reported:

$ echo -e '{"foo": "bar"}\n{"baz": "meh"}' | ./gron 
json = {};
json.foo = "bar";

Is this something that can or will be supported?

thanks!
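
For reference, stream mode (gron -s, mentioned in the JSONL issue above) covers the line-delimited case; for arbitrary concatenated values the standard-library idiom is a json.Decoder loop (a sketch, not gron's code):

package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

func main() {
	dec := json.NewDecoder(os.Stdin)
	for i := 0; ; i++ {
		var v interface{}
		// Decode consumes one complete top-level JSON value per call.
		if err := dec.Decode(&v); err == io.EOF {
			break
		} else if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Printf("value %d: %v\n", i, v)
	}
}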

Ungron to ignore lines saying --

Grep can print the lines immediately preceding or following a matched line, using grep -A, grep -B, or grep -C. However, the output of these includes separator lines that read --. This output currently produces an error when you try to ungron the result.

Since grep is a primary use for gron, I believe it would be a useful enhancement for it to accept a common output of grep. It doesn't need to do anything with the lines, only match and exclude them.

For example:

$ gron file.json | grep -A 1 'ktoe per year'
json.resources.geothermal.production.units = "ktoe per year";
json.resources.geothermal.production.value = 2007.738525390625;
--
json.resources.hydropower.production.units = "ktoe per year";
json.resources.hydropower.production.value = 9038.80078125;
--
json.resources.wind.production.units = "ktoe per year";
json.resources.wind.production.value = 381.5000305175781;
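
In the meantime, inserting grep -v '^--$' between grep and ungron filters the separators out. A sketch of the requested built-in behaviour:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := scanner.Text()
		// grep's -A/-B/-C group separator is a line that is exactly "--".
		if strings.TrimSpace(line) == "--" {
			continue
		}
		fmt.Println(line) // hand the statement to the real parser here
	}
}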

project name

Hi @tomnomnom,
there is another set of tools (https://salsa.debian.org/debian/xml2) that performs transformations on HTML, XML, and CSV files similar to what gron performs on JSON.
I've read that other names have already been suggested to you; still, I encourage you to consider renaming, or at least adding an alias such as json2.
Cheers.

gron produces output ungron can't process

It is unsafe to use a gron | ungron pipeline. The gron from gron-linux-amd64-0.5.2.tgz seems to have a 65,535-byte limit on input lines to gron --ungron, which does not match the output limit of gron:

$ perl -e 'print q({"blob":").(q(x) x 1000000000).qq("}\n)'|gron|sed 's/xx*/.../'|gron --ungron
{
  "blob": "..."
}
$ perl -e 'print q({"blob":").(q(x) x 1000000000).qq("}\n)'|gron|perl -nle 'print length'
10
1000000015
$ perl -e 'print q({"blob":").(q(x) x 65521).qq("}\n)'|gron|perl -nle 'print length'
10
65536
$ perl -e 'print q({"blob":").(q(x) x 65521).qq("}\n)'|gron|gron --ungron
failed to read input statements
$ perl -e 'print q({"blob":").(q(x) x 65520).qq("}\n)'|gron|perl -nle 'print length'
10
65535
$ perl -e 'print q({"blob":").(q(x) x 65520).qq("}\n)'|gron|gron --ungron|sed 's/xx*/.../'
{
  "blob": "..."
}
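
For what it's worth, the 65,536-byte boundary in the transcript is exactly bufio.MaxScanTokenSize (64 KiB), the default limit of Go's bufio.Scanner, which suggests ungron reads its input with a default-sized Scanner; this looks like the same root cause as the "Scanner error: token too long" issue above.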

Add a flag to output line numbers of location of objects in actual JSON

I'd like to be able to output the line numbers of the location of objects in the actual JSON. This would be helpful in utilizing gron to make JSON files searchable via reference in command line text editors like VIM (see https://vi.stackexchange.com/questions/15682/is-there-a-vim-plugin-available-to-add-jsonpath-jq-jmespath-path-searching ).

I imagine it could output the line numbers inside a JavaScript comment, so as to ensure that the output remains JavaScript-compatible.

json = []; // line 1
json[0] = {}; // line 2
json[0].commit = {}; // line 3
json[0].commit.author = {}; // line 4
json[0].commit.author.date = "2016-07-02T10:51:21Z"; // line 5
json[0].commit.author.email = "[email protected]"; // line 6
json[0].commit.author.name = "Tom Hudson"; // line 7
...

shell-friendly output formatting

I'm super excited by this project. I've been dreaming of something like this for a while. jq is powerful but it won't do for certain use cases.

The issue: when working with classical shell tools such as cut, the current output format is a bit obnoxious.

▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" | fgrep "commit.author"
json[0].commit.author = {};
json[0].commit.author.date = "2016-07-02T10:51:21Z";
json[0].commit.author.email = "[email protected]";
json[0].commit.author.name = "Tom Hudson";

The ideal would be the following:

▶ gron "https://api.github.com/repos/tomnomnom/gron/commits?per_page=1" --output-shell | fgrep "commit.author"
0.commit.author.date=2016-07-02T10:51:21Z
[email protected]
0.commit.author.name=Tom Hudson

Essentially:

  • Skip printing inner nodes (such as json[0].commit.author = {};), only print leaves;
  • Don't end the line with a semicolon;
  • Don't quote strings;
  • Don't put spaces around the =;
  • Leave out the starting json;
  • Array indices as foo.0.bar instead of foo[0].bar;
  • BONUS: make the separator (=) configurable.

If you agree that this is the correct thing to do, then I would be happy to write a PR.
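
A sketch of the leaf formatter implied by the list above (--output-shell and the function here are hypothetical names):

package main

import (
	"fmt"
	"strings"
)

// shellFormat joins path segments with dots (array indices become
// plain segments), drops quotes and the trailing semicolon, and omits
// the root "json".
func shellFormat(path []string, value string) string {
	return strings.Join(path, ".") + "=" + value
}

func main() {
	fmt.Println(shellFormat([]string{"0", "commit", "author", "name"}, "Tom Hudson"))
	// Output: 0.commit.author.name=Tom Hudson
}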

Array keys aren't sorted numerically

E.g. from an array with lots of keys:

json.fieldDefinitions[0].validated = false;
json.fieldDefinitions[0].wholeDigits = 0; 
json.fieldDefinitions[100] = {};
json.fieldDefinitions[100].autoFill = true;
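
The lexical order comes from sorting whole statement strings, where "100" < "2" as text. A toy demonstration of comparing the first index numerically instead (a real fix has to compare path segment by segment):

package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

var idx = regexp.MustCompile(`\[(\d+)\]`)

// firstIndex returns the first [n] index in a statement, or -1 if none.
func firstIndex(s string) int {
	m := idx.FindStringSubmatch(s)
	if m == nil {
		return -1
	}
	n, _ := strconv.Atoi(m[1])
	return n
}

func main() {
	stmts := []string{
		"json.fieldDefinitions[100] = {};",
		"json.fieldDefinitions[0].validated = false;",
	}
	sort.Slice(stmts, func(i, j int) bool {
		return firstIndex(stmts[i]) < firstIndex(stmts[j])
	})
	fmt.Println(stmts) // [0] now sorts before [100]
}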

(Very) Minor issue: gron version in header

Hello!

The User-Agent header set in getURL has an outdated gron version number. Instead of updating it manually, it might be better to set it from a const shared with main or something?
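
A sketch of the suggestion — one constant as the single source of truth (names and the version value are illustrative):

package main

import (
	"fmt"
	"net/http"
)

const gronVersion = "0.5.2" // illustrative value

// getURL builds its User-Agent from the constant, so the header can
// never drift out of date with the binary.
func getURL(url string) (*http.Response, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("User-Agent", "gron/"+gronVersion)
	return http.DefaultClient.Do(req)
}

func main() {
	fmt.Println("gron version " + gronVersion)
}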

Enhancement: Add recursive option

I have a deeply nested directory of archived JSON responses that I would like to analyze (extract fields from each of the response files)

find -exec gron would probably work, but the hierarchy holds important context so I'd want to have the file path included in the output.

I'm thinking something like grep -r
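
A sketch of grep -r-style recursion, shelling out to gron per file and prefixing each statement with its path (hypothetical behaviour; a built-in flag could do the same without the subprocess):

package main

import (
	"fmt"
	"io/fs"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	root := os.Args[1]
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() || !strings.HasSuffix(path, ".json") {
			return err
		}
		out, err := exec.Command("gron", path).Output()
		if err != nil {
			return err
		}
		// Prefix each statement with the file path, like grep -r.
		for _, line := range strings.Split(strings.TrimRight(string(out), "\n"), "\n") {
			fmt.Printf("%s: %s\n", path, line)
		}
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}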

Less than sign "<" turns to unicode with gron tool

Hi @tomnomnom

Only just started to use the gron tool, so maybe I am missing something!

When I include a "<" in my JSON file, the gronned output contains the Unicode escape (http://unicode-table.com/en/#003C) of the character "<" (\u003c).

[david.gibbons@7V0DZY1 ~]$ gron --version
gron version 0.3.6
[david.gibbons@7V0DZY1 ~]$ 
[david.gibbons@7V0DZY1 ~]$ cat example.json 
{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "<S"
        }
    }
}
[david.gibbons@7V0DZY1 ~]$ gron example.json
json = {};
json.glossary = {};
json.glossary.GlossDiv = {};
json.glossary.GlossDiv.title = "\u003cS";
json.glossary.title = "example glossary";
[david.gibbons@7V0DZY1 ~]$ gron example.json | grep '<'
[david.gibbons@7V0DZY1 ~]$
[david.gibbons@7V0DZY1 ~]$ cat example.json | grep '<'
            "title": "<S"
[david.gibbons@7V0DZY1 ~]$

So then I cannot grep for "<" directly, even though it is in my json file.

Is this something that can be fixed? Thanks
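
For context: this is the same behaviour as in "Please do not change output strings" above — Go's encoding/json escapes <, >, and & to \u sequences by default. The output still denotes the same string, but it does defeat grepping for the literal character.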

gron as a library

It would be nice to have gron available as a library, in order to use it directly in other Go projects without shelling out to the command-line tool.

Number formatting in output

Please could we get lines such as:

foo.bar.baz.id = 6.688398e+07;

as numbers without the exponent?

Thanks!
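
The exponent appears because numbers are decoded into float64 and re-printed. Go's json.Decoder can keep the original literal via UseNumber — a standard-library demonstration, not gron's code:

package main

import (
	"encoding/json"
	"fmt"
	"strings"
)

func main() {
	in := `{"id": 66883980}`

	// Default decoding: numbers become float64 and may print in
	// exponent form.
	var a map[string]interface{}
	json.Unmarshal([]byte(in), &a)
	fmt.Println(a["id"]) // 6.688398e+07

	// UseNumber keeps the literal as a json.Number (a string).
	dec := json.NewDecoder(strings.NewReader(in))
	dec.UseNumber()
	var b map[string]interface{}
	dec.Decode(&b)
	fmt.Println(b["id"]) // 66883980
}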

Inexplicably high memory usage

When trying to process a 3GB JSON file on a machine with 62GB of available RAM, gron terminates (after about an hour) with the message fatal error: runtime: out of memory.

Out of curiosity, I spun up an r4.4xlarge in AWS and let it chew on the file - peak memory usage approached ~98GB (of 120GB free).
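
The symptom is consistent with decoding the entire document into an interface{} tree (plus bookkeeping) before formatting; a fully streaming statement emitter, like the sketch under "Better handling of big json files" above, would keep memory roughly constant regardless of input size.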

Decode error when testing on Windows 7

example.json
{
    "glossary": {
        "title": "example glossary",
        "GlossDiv": {
            "title": "<S"
        }
    }
}

gron example.json
←[34;1mjson←[0m = ←[35m{}←[0m;
←[34;1mjson←[0m.←[34;1mglossary←[0m = ←[35m{}←[0m;
←[34;1mjson←[0m.←[34;1mglossary←[0m.←[34;1mGlossDiv←[0m = ←[35m{}←[0m;
←[34;1mjson←[0m.←[34;1mglossary←[0m.←[34;1mGlossDiv←[0m.←[34;1mtitle←[0m = ←[33m"<S"←[0m;
←[34;1mjson←[0m.←[34;1mglossary←[0m.←[34;1mtitle←[0m = ←[33m"example glossary"←[0m;
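
What's shown is gron's ANSI colour escape sequences printed raw (the ← characters are how the console renders the ESC byte); the classic Windows 7 console does not interpret ANSI codes, so colour output would need to be disabled or translated on that platform.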

Sorting results

When you need to compare two files (or parts of them) without caring about the order of values within an object, it's common to pipe the result of gron through sort. However, this is incorrect in cases where some values can span multiple lines.

For example:

json.foo = "foo";
json.bar = "lorem ipsum dolor
sit amet consequetur
adipiscing elit";

These lines would be sorted on the command line like so:

adipiscing elit";
json.bar = "lorem ipsum dolor
json.foo = "foo";
sit amet consequetur

This is obviously incorrect. The sorted output we want is:

json.bar = "lorem ipsum dolor
sit amet consequetur
adipiscing elit";
json.foo = "foo";

It would be helpful if gron had an option to reliably sort values as it outputs them, such as a -s/--sort flag.

build release with goreleaser & use dep for version mgmt

hi there

Firstly, this is an absolutely incredible tool, it's genuinely changed my life!

I wanted to try to contribute in some way. I had trouble building this because it seems there's no dependency management system in use, and the binaries are hand-built? I could be wrong.

Would you consider accepting a PR to build things with https://goreleaser.com/ ?

Also, what versioning tool do you use? Glide? Dep?

[suggestion] support YAML as well

Given that YAML is a superset of JSON, and often things that are described in YAML are completely translatable to JSON (I'm thinking specifically Kubernetes manifests but there are others), a -y option to parse yaml (especially useful if it worked with ungron) would be kinda nifty.

Of course that adds to the complications of #35, #23, and #28, as the YAML spec explicitly allows multiple YAML docs per file, but anyway, just a thought.
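
A sketch of one way to bolt this on — decode YAML into a generic tree and hand it to the existing JSON machinery (assumes gopkg.in/yaml.v3, which decodes mappings to map[string]interface{}):

package main

import (
	"encoding/json"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	var v interface{}
	if err := yaml.NewDecoder(os.Stdin).Decode(&v); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// From here the existing JSON machinery can take over.
	b, _ := json.Marshal(v)
	fmt.Println(string(b))
}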

Errors are opaque

Errors reported to the user don't provide much help. E.g. Fatal (failed to parse input statements)

It would be good to provide more context, like where the problem was found.

Some keys are unnecessarily quoted

For example keys containing underscores.

Currently:

{ "bar_bar": "hey" } -> json["bar_bar"] = "hey";

But should be:

{ "bar_bar": "hey" } -> json.bar_bar = "hey";

Multiple files or urls as input

This relates to multiple objects in the same input, but is a bit different.

Would be useful to allow more than one input on the command line:

gron <file|url> <file|url> ... etc

It's hard to say exactly what gron should output in this case, possibly each object field could inherit the file or url name.

e.g.

gron file1.json file2.json

file1.keyValue = "fish"
file2.keyValue = "chips"

or perhaps more simply, number the files, which would avoid file name clashes (and could also support multiple objects in streams.)

[0].keyValue = "fish"
[1].keyValue = "chips"

At the moment I do this, which isn't quite as good:
echo *.json | xargs -n1 gron >> out.txt

Simpler syntax

Hi,

This is such a great idea that I've decided to port it to rust because I need it as a library rather than a command-line tool (BTW, let me know if you want me to rename my version of the library)

The question is: why are there semicolons? All JavaScript interpreters allow them to be omitted in every case that arises in this subset of JavaScript. I've decided to omit them to reduce visual noise and to make it easier to use cut and awk on the output. Is there any strong reason for your choice?

overly large?

I have a JSON file owned by me, but gron keeps reporting permission denied when run against the file. If I run gron against stdin, it works. The o2.json is one large line without an EOL. Could I be receiving an incorrect error? This copy of gron is installed on Ubuntu 18.04 via snap. As you can see, the executable(?) is just a link pointing back to snap (this is my first snap-installed app, and I'm not sure this is correct). Anyway, any pointers/help would be greatly appreciated.

username@mach:/Stranded Deep/Data$ ls -al o2.json
-rw-rw-rw- 1 username group 40206 Sep 9 14:18 o2.json
username@mach:/Stranded Deep/Data$ gron o2.json
open o2.json: permission denied

:/usr/bin$ ls -al `which gron`
lrwxrwxrwx 1 root root 13 Sep 9 15:16 /snap/bin/gron -> /usr/bin/snap

gron in Fedora

Hi @tomnomnom, thanks for the great tool. It was a great help to me on multiple occasions.

I just wanted to let you know that I've started the process of packaging gron for Fedora. It has already been accepted for the next release, and I'm hoping to get it into the current release in the next few days. Once it's in, users will be able to just dnf install gron.

Let me know if you have any questions. If not, feel free to simply close this issue. Thanks, again, for creating gron.
