
pietercolpaert / hardf


Low level PHP library for RDF1.1 based on N3.js

Home Page: https://packagist.org/packages/pietercolpaert/hardf

License: MIT License

PHP 100.00%
rdf linked-data php-library triples turtle trig nquads ntriples n-triples

hardf's Introduction

The hardf Turtle, N-Triples, N-Quads, TriG and N3 parser for PHP

hardf is a PHP 7.1+ library that lets you handle Linked Data (RDF). Both the parser and the serializer support streaming.

This library is a port of N3.js to PHP.

Triple Representation

We use the triple representation ported from the NodeJS N3.js library. See https://github.com/rdfjs/N3.js/tree/v0.10.0#triple-representation for more information.

We deliberately focused on performance rather than developer friendliness, so we implemented this triple representation using associative arrays rather than PHP objects. What is an object in N3.js is thus an array here. E.g.:

<?php
$triple = [
    'subject' =>   'http://example.org/cartoons#Tom',
    'predicate' => 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
    'object' =>    'http://example.org/cartoons#Cat',
    'graph' =>     'http://example.org/mycartoon', # optional
];

Encode literals as follows (similar to N3.js):

'"Tom"@en-gb' // lowercase language
'"1"^^http://www.w3.org/2001/XMLSchema#integer' // no angular brackets <>
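This string convention can be unpacked with a small helper. A minimal sketch in plain PHP (`literalParts` is a hypothetical function written here for illustration, mirroring what hardf's `Util` class offers; it is not part of the library):

```php
<?php
// Hypothetical helper (not part of hardf): split hardf's literal string
// convention into value, language, and datatype.
function literalParts(string $literal): array
{
    // Matches '"value"', '"value"@lang', or '"value"^^datatypeIRI'.
    preg_match('/^"(.*)"(?:@([A-Za-z][A-Za-z0-9-]*)|\^\^(.+))?$/s', $literal, $m);
    return [
        'value'    => $m[1] ?? null,
        'language' => isset($m[2]) && $m[2] !== '' ? $m[2] : null,
        'datatype' => isset($m[3]) && $m[3] !== '' ? $m[3] : null,
    ];
}
```

In real code you would use the `Util` functions described below instead of rolling your own.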

Library functions

Install this library using composer:

composer require pietercolpaert/hardf

Writing

use pietercolpaert\hardf\TriGWriter;

A class that should be instantiated and can write TriG, Turtle, or N-Quads.

Example use:

$writer = new TriGWriter([
    "prefixes" => [
        "schema" =>"http://schema.org/",
        "dct" =>"http://purl.org/dc/terms/",
        "geo" =>"http://www.w3.org/2003/01/geo/wgs84_pos#",
        "rdf" => "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
        "rdfs"=> "http://www.w3.org/2000/01/rdf-schema#"
        ],
    "format" => "n-quads" //Other possible values: trig or turtle
]);

$writer->addPrefix("ex","http://example.org/");
$writer->addTriple("schema:Person","dct:title","\"Person\"@en","http://example.org/#test");
$writer->addTriple("schema:Person","schema:label","\"Person\"@en","http://example.org/#test");
$writer->addTriple("ex:1","dct:title","\"Person1\"@en","http://example.org/#test");
$writer->addTriple("ex:1","http://www.w3.org/1999/02/22-rdf-syntax-ns#type","schema:Person","http://example.org/#test");
$writer->addTriple("ex:2","dct:title","\"Person2\"@en","http://example.org/#test");
$writer->addTriple("schema:Person","dct:title","\"Person\"@en","http://example.org/#test2");
echo $writer->end();

All methods

//The method names should speak for themselves:
$writer = new TriGWriter(["prefixes" => [ /* ... */ ]]);
$writer->addTriple($subject, $predicate, $object, $graph);
$writer->addTriples($triples);
$writer->addPrefix($prefix, $iri);
$writer->addPrefixes($prefixes);
//Creates a blank node ($predicate and/or $object are optional)
$writer->blank($predicate, $object);
//Creates an rdf:list with $elements
$list = $writer->addList($elements);

//Returns the output generated so far and clears the internal buffer (useful for streaming)
$out .= $writer->read();
//Alternatively, you can listen for new chunks through a callback:
$writer->setReadCallback(function ($output) { echo $output; });

//Call this at the end. Unless a callback was set, the return value is the full triple output, or the remaining output (such as closing dots and brackets).
$out .= $writer->end();
//OR
$writer->end();

Parsing

Besides TriG, the TriGParser class also parses Turtle, N-Triples, N-Quads and the W3C Team Submission N3.

All methods

$parser = new TriGParser($options, $tripleCallback, $prefixCallback);
$parser->setTripleCallback($function);
$parser->setPrefixCallback($function);
$parser->parse($input, $tripleCallback, $prefixCallback);
$parser->parseChunk($input);
$parser->end();

Basic examples for small files

Using return values and passing these to a writer:

use pietercolpaert\hardf\TriGParser;
use pietercolpaert\hardf\TriGWriter;
$parser = new TriGParser(["format" => "n-quads"]); //also parses n-triples, n3, turtle and trig; the format option is optional
$writer = new TriGWriter();
$triples = $parser->parse("<A> <B> <C> <G> .");
$writer->addTriples($triples);
echo $writer->end();

Using callbacks and passing these to a writer:

$parser = new TriGParser();
$writer = new TriGWriter(["format"=>"trig"]);
$parser->parse("<http://A> <https://B> <http://C> <http://G> . <A2> <https://B2> <http://C2> <http://G3> .", function ($e, $triple) use ($writer) {
    if (!isset($e) && isset($triple)) {
        $writer->addTriple($triple);
        echo $writer->read(); //write out what we have so far
    } else if (!isset($triple)) { // flags the end of the file
        echo $writer->end();  //write the end
    } else {
        echo "Error occurred: " . $e;
    }
});

Example using chunks and keeping prefixes

When parsing a large file, you will want to process chunks as they arrive instead of loading the whole file into memory. You can do that as follows:

$writer = new TriGWriter(["format"=>"n-quads"]);
$tripleCallback = function ($error, $triple) use ($writer) {
    if (isset($error)) {
        throw $error;
    } else if (isset($triple)) {
        $writer->addTriple($triple);
        echo $writer->read();
    } else {
        echo $writer->end();
    }
};
$prefixCallback = function ($prefix, $iri) use (&$writer) {
    $writer->addPrefix($prefix, $iri);
};
$parser = new TriGParser(["format" => "trig"], $tripleCallback, $prefixCallback);
$parser->parseChunk($chunk);
$parser->parseChunk($chunk);
$parser->parseChunk($chunk);
$parser->end(); //Needs to be called
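The $chunk values above have to come from somewhere, typically a file read in fixed-size pieces. A sketch of such a loop (`forEachChunk` is a hypothetical helper written here for illustration, not part of hardf):

```php
<?php
// Hypothetical helper (not part of hardf): read a file in fixed-size
// chunks and hand each chunk to a callback.
function forEachChunk(string $path, callable $callback, int $size = 8192): void
{
    $handle = fopen($path, 'rb');
    if ($handle === false) {
        throw new RuntimeException("Cannot open $path");
    }
    while (!feof($handle)) {
        $chunk = fread($handle, $size);
        if ($chunk !== false && $chunk !== '') {
            $callback($chunk);
        }
    }
    fclose($handle);
}

// Usage with the parser above (sketch):
// forEachChunk('data.trig', function ($chunk) use ($parser) {
//     $parser->parseChunk($chunk);
// });
// $parser->end();
```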

Parser options

  • format input format (case-insensitive)
    • if not provided or not matching any options below, then any Turtle, TriG, N-Triples or N-Quads input can be parsed (but NOT the N3)
    • turtle - Turtle
    • trig - TriG
    • contains triple, e.g. triple, ntriples, N-Triples - N-Triples
    • contains quad, e.g. quad, nquads, N-Quads - N-Quads
    • contains n3, e.g. n3 - N3
  • blankNodePrefix (defaults to b0_) prefix forced on blank node names, e.g. TriGParser(["blankNodePrefix" => 'foo']) will parse _:bar as _:foobar.
  • documentIRI sets the base URI used to resolve relative URIs (not applicable if format indicates n-triples or n-quads)
  • lexer allows usage of your own lexer class. A lexer must provide the following public methods:
    • tokenize(string $input, bool $finalize = true): array<array{'subject': string, 'predicate': string, 'object': string, 'graph': string}>
    • tokenizeChunk(string $input): array<array{'subject': string, 'predicate': string, 'object': string, 'graph': string}>
    • end(): array<array{'subject': string, 'predicate': string, 'object': string, 'graph': string}>
  • explicitQuantifiers - [...]

Empty document base IRI

Some Turtle and N3 documents may use IRIs relative to the document base IRI, e.g.

<> <someProperty> "some value" .

To properly parse such documents the document base IRI must be known. Otherwise we might end up with empty IRIs (e.g. for the subject in the example above).

Sometimes the base IRI is encoded in the document, e.g.

@base <http://some.base/iri/> .
<> <someProperty> "some value" .

but sometimes it is missing. In such a case the Turtle specification requires us to follow section 5.1.1 of RFC 3986, which says that if the base IRI is not encapsulated in the document, it should be assumed to be the document retrieval URI (e.g. the URL you downloaded the document from, or a file path converted to a URL). Unfortunately this cannot be guessed by the hardf parser and has to be provided by you using the documentIRI parser creation option, e.g.

$parser = new TriGParser(["documentIRI" => "http://some.base/iri/"]);

Long story short: if you run into the subject/predicate/object on line X can not be parsed without knowing the document base IRI (...) error, please initialize the parser with the documentIRI option.
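The effect of the base IRI on the earlier example can be sketched with a naive resolver (`resolveAgainstBase` is a hypothetical illustration written for this document; real RFC 3986 resolution handles many more cases, such as dot segments and relative paths with slashes):

```php
<?php
// Hypothetical illustration (not hardf's implementation): minimal base-IRI
// resolution, covering only the cases used in the examples above.
function resolveAgainstBase(string $reference, string $base): string
{
    if ($reference === '') {
        return $base;                      // <> resolves to the base IRI itself
    }
    if (preg_match('/^[A-Za-z][A-Za-z0-9+.-]*:/', $reference)) {
        return $reference;                 // already an absolute IRI
    }
    return $base . $reference;             // naive join; RFC 3986 is richer
}
```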

Utility

use pietercolpaert\hardf\Util;

A static class with a couple of helpful functions for handling our specific triple representation. It will help you create and evaluate literals and IRIs, and expand prefixes.

$bool = isIRI($term);
$bool = isLiteral($term);
$bool = isBlank($term);
$bool = isDefaultGraph($term);
$bool = inDefaultGraph($triple);
$value = getLiteralValue($literal);
$literalType = getLiteralType($literal);
$lang = getLiteralLanguage($literal);
$bool = isPrefixedName($term);
$expanded = expandPrefixedName($prefixedName, $prefixes);
$iri = createIRI($iri);
$literalObject = createLiteral($value, $modifier = null);

See the documentation at https://github.com/RubenVerborgh/N3.js#utility for more information.

Two executables

We also offer two simple tools in bin/ as example implementations: a validator and a translator. Try for example:

curl -H "accept: application/trig" http://fragments.dbpedia.org/2015/en | php bin/validator.php trig
curl -H "accept: application/trig" http://fragments.dbpedia.org/2015/en | php bin/convert.php trig n-triples

Performance

We compared performance on two Turtle files, parsing them with the EasyRDF and ARC2 libraries in PHP, the N3.js library for NodeJS, and hardf. These were the results:

#triples    framework                  time (ms)   memory (MB)
1,866       Hardf without opcache           27.6       0.722
1,866       Hardf with opcache              24.5       0.380
1,866       EasyRDF without opcache      5,166.5       2.772
1,866       EasyRDF with opcache         5,176.2       2.421
1,866       ARC2 with opcache               71.9       1.966
1,866       N3.js                           24.0      28.xxx
3,896,560   Hardf without opcache       40,017.7       0.722
3,896,560   Hardf with opcache          33,155.3       0.380
3,896,560   N3.js                        7,004.0      59.xxx
3,896,560   ARC2 with opcache          203,152.6   3,570.808

License, status and contributions

The hardf library is copyrighted by Ruben Verborgh and Pieter Colpaert and released under the MIT License.

Contributions are welcome, and bug reports or pull requests are always helpful. If you plan to implement a larger feature, it's best to discuss this first by filing an issue.

hardf's People

Contributors

gplanchat, k00ni, pietercolpaert, scor, zozlak


hardf's Issues

TriGParser/N3Lexer fails when certain control / Unicode characters appear in string (e.g. \xEF\xBB\xBF)

TriGParser / N3Lexer throws an exception if a file contains certain control or Unicode characters. I couldn't find a way to display these characters in an editor (VSCode, gedit, ...), so when you open the file, don't be surprised that no such characters are visible before the @prefix statements.

The stack trace is:

Fatal error: Uncaught Exception: Unexpected "@prefix" on line 1. 
in /govi/scripts/vendor/pietercolpaert/hardf/src/N3Lexer.php:456

Stack trace:
#0 /..-/hardf/src/N3Lexer.php(109): pietercolpaert\hardf\N3Lexer->syntaxError('\xEF\xBB\xBF@prefix', 1)
#1 /.../hardf/src/N3Lexer.php(408): pietercolpaert\hardf\N3Lexer->pietercolpaert\hardf\{closure}(Object(pietercolpaert\hardf\N3Lexer))
#2 /.../hardf/src/N3Lexer.php(470): pietercolpaert\hardf\N3Lexer->tokenizeToEnd(Object(Closure), false)
#3 /.../hardf/src/N3Lexer.php(491): pietercolpaert\hardf\N3Lexer->pietercolpaert\hardf\{closure}('\xEF\xBB\xBF@prefix owl:...', false)
#4 /.../hardf/src/TriGParser.php(1183): pietercolpaert\hardf\N3Lexer->tokenize('\xEF\xBB\xBF@prefix owl:...', false)

#5 /govi/scripts/vendor/sweetrdf/quick-rdf-io/src/quickRdfIo/TriGParser.php(159): 
    pietercolpaert\hardf\TriGParser->parseChunk('\xEF\xBB\xBF@prefix owl:...')

(note: quickRdfIo was used)

Here is a prepared failing test:

https://github.com/pietercolpaert/hardf/blob/bug/parser-fails-control-or-unicode-characters/test/TriGParserTest.php#L2110-L2113

And here is the related N3 file:

https://lov.linkeddata.es/dataset/lov/vocabs/identity/versions/2014-04-03.n3
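The character in question (`\xEF\xBB\xBF`) is the UTF-8 byte order mark. Until the lexer handles it, a workaround is to strip it before handing input to the parser (a sketch written for this discussion, not part of hardf):

```php
<?php
// Workaround sketch (not part of hardf): drop a leading UTF-8 BOM
// (\xEF\xBB\xBF) before parsing.
function stripUtf8Bom(string $input): string
{
    return strncmp($input, "\xEF\xBB\xBF", 3) === 0 ? substr($input, 3) : $input;
}

// e.g. $parser->parseChunk(stripUtf8Bom($firstChunk));
```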

How to utilize chunk based file parsing?

README contains the following code

$writer = new TriGWriter(["format"=>"n-quads"]);
$tripleCallback = function ($error, $triple) use ($writer) {
    if (isset($error))
        throw $error;
    else if (isset($triple)) {
        $writer->write();
        echo $writer->read();
    else if (isset($error)) {
        throw $error;
    } else {
        echo $writer->end();
    }
};
$prefixCallback = function ($prefix, $iri) use (&$writer) {
    $writer->addPrefix($prefix, $iri);
};
$parser = new TriGParser(["format" => "trig"], $tripleCallback, $prefixCallback);
$parser->parseChunk($chunk);
$parser->parseChunk($chunk);
$parser->parseChunk($chunk);
$parser->end(); //Needs to be called

But how do I get $chunk, which is not defined in this example? Is it provided by hardf or do I have to obtain it myself?

Could you please provide an example of how to read an RDF file (e.g. n-triples) in chunks? Thanks in advance.

Limitations and usage

We are looking for possible replacements for easyrdf, and hardf was the first one we came across. However, upon deeper testing, while its performance is far better than easyrdf's (now maintained under sweetrdf), not all edge cases are covered.

What concerns us is the N-Triples format. The TriGWriter escapes only a limited set of characters - they need to match

    const ESCAPE = '/["\\\\\\t\\n\\r\\b\\f]/';

and they are replaced by

$this->escapeReplacements = [
            '\\' => '\\\\', '"' => '\\"', "\t" => '\\t',
            "\n" => '\\n', "\r" => '\\r', \chr(8) => '\\b', "\f" => '\\f',
        ];

However, this leaves a long list of characters that can make it misbehave. According to https://www.w3.org/TR/rdf-testcases/#ntrip_strings, many other characters need escaping.

I created a small script that tests just the first 255 characters, and the results are not looking good. The script is below:

<?php

use pietercolpaert\hardf\TriGWriter;

include 'vendor/autoload.php';

// Generate a string containing the first 255 single-byte characters.
$randomString = '';
for ($i = 0; $i < 255; $i++) {
    $randomString .= chr($i);
}

// Test TriGWriter escaping and serialization.
$start = microtime(true);
$serializer = new TriGWriter(['format' => 'triple']);
$serializer->addTriple('http://hardf.org/subject', 'http://hardf.org/predicate', '"' . $randomString . '"');
$output = $serializer->end();
$end = microtime(true);

echo 'TriGWriter::end took ' . ($end - $start) . ' seconds. Memory usage: ' . memory_get_usage() . ' bytes. Memory peak usage: ' . memory_get_peak_usage() . ' bytes.' . PHP_EOL;

$endpoint = 'http://host.docker.internal:8890/sparql';
$graph = 'http://testing.com/';
$subject = 'http://hardf.org/subject';
$predicate = 'http://hardf.org/predicate';
$object = '"' . $randomString . '"';
$query = 'WITH <' . $graph . '> DELETE { <' . $subject . '> <' . $predicate . '> ?d } INSERT { ' . $output . ' } WHERE { <' . $subject . '> <' . $predicate . '> ?d }';
echo $query . PHP_EOL;

$start = microtime(true);
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $endpoint);
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $query);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, ['Content-Type: application/sparql-update']);
$response = curl_exec($ch);

if ($response === false) {
    echo "Error: " . curl_error($ch);
}
else {
    echo "Response: " . $response;
}

curl_close($ch);
$end = microtime(true);

echo 'SPARQL INSERT took ' . ($end - $start) . ' seconds. Memory usage: ' . memory_get_usage() . ' bytes. Memory peak usage: ' . memory_get_peak_usage() . ' bytes.' . PHP_EOL;

I was able to support all 127 initial characters by altering the escape pattern to

    const ESCAPE = '/["\\0\\\\\\t\\n\\r\\b\\f]/';

and the replacements array to

$this->escapeReplacements = [
            '\\' => '\\\\', '"' => '\\"', "\t" => '\\t',
            "\n" => '\\n', "\r" => '\\r', \chr(8) => '\\b', "\f" => '\\f',
           \chr(0) => '\\u0000',
        ];

(mainly added \0, which is the end-of-stream marker if I am not mistaken).
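A fuller escaping pass along these lines covers the whole C0 control range rather than just \0. This is a sketch written for this discussion, not hardf's code; it follows the canonical N-Triples convention of using the named ECHAR escapes where they exist and \uXXXX otherwise, and it only handles single-byte control characters (it does not address the multi-byte UTF-8 problems mentioned below):

```php
<?php
// Sketch (not hardf's implementation): escape the single-byte characters
// that the N-Triples grammar does not allow unescaped inside a string.
function escapeNTriplesString(string $value): string
{
    $echars = [
        '\\' => '\\\\', '"' => '\\"', "\t" => '\\t',
        "\n" => '\\n', "\r" => '\\r', "\x08" => '\\b', "\f" => '\\f',
    ];
    return preg_replace_callback(
        '/[\x00-\x1F\x7F"\\\\]/',          // C0 controls, DEL, quote, backslash
        function (array $m) use ($echars): string {
            return $echars[$m[0]] ?? sprintf('\\u%04X', \ord($m[0]));
        },
        $value
    );
}
```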

However, as soon as I get a couple of characters past 128, the inserted string is "0" or the insert fails. With easyrdf it takes way, way more time to insert large blobs of text, but performance that costs integrity is not really performance.

Our use case is a CMS with a WYSIWYG editor into which users can paste anything, meaning that a careless copy/paste can introduce one of these characters. But in case it is an intended character, we would want to avoid removing it.

My question is: are we misusing this library? Are there known or unknown limits to it? Or is there an intended philosophy of not considering non-printable/special characters part of the supported strings?

Lack of documentation on TriGParser options

I searched in the README and in the comments inside the code but haven't found any systematic documentation of the TriGParser constructor options.

  • There are examples in the README providing two sample format option values, but it's not stated what the other possible values are.
  • By looking in the source code I can tell there are a few other options: documentIRI, blankNodePrefix, lexer and explicitQuantifiers but it's unclear what they do and how they can/should be used.

It will make it much easier to use the library if more extensive API documentation is provided.

Parsing n-triples is faulty => Exception: Unexpected "typeIRI" on line 1.

hardf seems to have a hard time parsing triples like:

<http://example.org/show/218> <http://www.w3.org/2000/01/rdf-schema#label> "That Seventies Show"^^<http://www.w3.org/2001/XMLSchema#string> .
<http://example.org/show/218> <http://www.w3.org/2000/01/rdf-schema#label> "That Seventies Show" .
<http://example.org/show/218> <http://example.org/show/localName> "That Seventies Show"@en .
<http://example.org/show/218> <http://example.org/show/localName> "Cette Série des Années Septante"@fr-be .
<http://example.org/#spiderman> <http://example.org/text> "Multi-line\nliteral with many quotes (\"\"\"\"\")\nand two apostrophes ('')." .
<http://en.wikipedia.org/wiki/Helium> <http://example.org/elements/atomicNumber> "2"^^<http://www.w3.org/2001/XMLSchema#integer> .
<http://en.wikipedia.org/wiki/Helium> <http://example.org/elements/specificGravity> "1.663E-4"^^<http://www.w3.org/2001/XMLSchema#double> .

Source: https://www.w3.org/TR/n-triples/#sec-literals (removed the comments)

I always get the following error:

Exception: Unexpected "typeIRI" on line 1.

/app/src/Saft/Addition/hardf/vendor/pietercolpaert/hardf/src/N3Lexer.php:411
/app/src/Saft/Addition/hardf/vendor/pietercolpaert/hardf/src/N3Lexer.php:41
/app/src/Saft/Addition/hardf/vendor/pietercolpaert/hardf/src/N3Lexer.php:441
/app/src/Saft/Addition/hardf/vendor/pietercolpaert/hardf/src/TriGParser.php:986
/app/src/Saft/Addition/hardf/vendor/pietercolpaert/hardf/src/TriGParser.php:969

Test setup

$parser = new TriGParser([
    'format' => 'triple'
]);

$str = '<http://example.org/show/218> <http://www.w3.org/2000/01/rdf-schema#label> "That Seventies Show"^^<http://www.w3.org/2001/XMLSchema#string> .
';
$parser->parse($str);
// $parser->parseChunk($str); also fails

Any idea? @pietercolpaert

Util::getLiteralValue() fails for multiline values

The regular expression in https://github.com/pietercolpaert/hardf/blob/master/src/Util.php#L64 doesn't match multiline values, e.g.

pietercolpaert\hardf\Util::getLiteralValue('"' . "foo\nbar" . '"');

throws an error instead of returning

foo
bar

I think the fix is to add the s (and possibly m) flag to the regex at https://github.com/pietercolpaert/hardf/blob/master/src/Util.php#L64, making it:

preg_match('/^"(.*)"/sm', $literal, $match); //TODO: somehow the copied regex did not work. To be checked. Contained [^]
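The proposed change can be verified in isolation; the /s flag alone is enough for this case, since it makes . match newlines (`getLiteralValueFixed` below is a standalone re-implementation written for illustration, not hardf's actual code):

```php
<?php
// Standalone illustration (not hardf's code): Util::getLiteralValue with
// the /s flag applied, so multiline literal values match.
function getLiteralValueFixed(string $literal): string
{
    if (!preg_match('/^"(.*)"/s', $literal, $match)) {
        throw new InvalidArgumentException($literal . ' is not a literal');
    }
    return $match[1];
}
```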

Roadmap for hardf?

I would like to know what roadmap you have for hardf.

In my opinion, you are currently the only RDF parsing and serialization solution that is not only up to date, but works with PHP 7.x and is actively maintained. EasyRdf, something of a state-of-the-art RDF lib for PHP, is currently only in maintenance mode. The same goes for ARC2 and Erfurt, which additionally have problems, e.g. running on PHP 7.

I am asking because I would like to use your library extensively for our Saft library. Saft in a nutshell: an integration layer enabling the usage of different RDF libraries. We currently support ARC2, EasyRdf and Erfurt to some extent, but none fully. Using Saft, you can parse a file with EasyRdf and store the triples into an ARC2 backend, for instance.

Unicode with prefix fails to pass

Our current prefixed regex /^((?:[A-Za-z\xc0-\xd6\xd8-\xf6])(?:\.?[\-0-9A-Z_a-z\xb7\xc0-\xd6\xd8-\xf6])*)?:((?:(?:[0-:A-Z_a-z\xc0-\xd6\xd8-\xf6]|%[0-9a-fA-F]{2}|\\[!#-\/;=?\-@_~])(?:(?:[\.\-0-:A-Z_a-z\xb7\xc0-\xd6\xd8-\xf6]|%[0-9a-fA-F]{2}|\\[!#-\/;=?\-@_~])*(?:[\-0-:A-Z_a-z\xb7\xc0-\xd6\xd8-\xf6]|%[0-9a-fA-F]{2}|\\[!#-\/;=?\-@_~]))?)?)(?:[ \t]+|(?=\.?[,;!\^\s#()\[\]\{\}"'<]))/ does not match valid TriG entities like c:テスト.

This prefixed regex is defined in the N3 lexer on line 68: https://github.com/pietercolpaert/hardf/blob/master/src/N3Lexer.php#L68

The reason I had to simplify the regex is that PHP's PCRE does not allow \uXXXX Unicode escape sequences in patterns... Are there any alternatives?

Original issue was found by Kanzaki Masahide:

I found that TriGParser fails to handle non-ASCII prefixed names, e.g.

@prefix c: <http://example.org/>.
c:test a c:テスト .

While it's OK to parse IRI :

@prefix c: <http://example.org/>.
c:test a <http://example.org/テスト> .

Note N3.js can parse both properly.

blank nodes in ntriples format aren't parsed correctly

$parser = new pietercolpaert\hardf\TriGParser(['format' => 'application/ntriples']);
$parser->parse('_:r1 <https://foo.bar> "baz".');

throws an Unexpected "blank" on line 1. error while

  • Blank node is valid in the ntriples format (see the w3c specification).
  • Same code with an URI works:
    $parser = new pietercolpaert\hardf\TriGParser(['format' => 'application/ntriples']);
    $parser->parse('<https://foo.bar> <https://foo.bar> "baz".');

This might simply be an issue with specifying the right format, but I didn't find a list of the format strings to use for particular RDF serializations.

results table

Having the same memory footprint for 2K triples and 4M triples seems strange. How is that possible?

Turtle parser fails for unknown reason (SPARQL 1.1 Syntax Update 1 manifest.ttl)

@pietercolpaert When parsing the Turtle file https://www.w3.org/2009/sparql/docs/tests/data-sparql11/syntax-update-1/manifest the parser returns 0 triples.

ARC2's Turtle parser has no problems and returns all triples, so the file should be correct.

I made a test which shows the problem: https://github.com/pietercolpaert/hardf/blob/error/turtle-parser-fails-unknown/test/TriGParserTest.php#L2061-L2072 (branch error/turtle-parser-fails-unknown)

Any idea why?
