mholt / papaparse

Fast and powerful CSV (delimited text) parser that gracefully handles large files and malformed input

Home Page: http://PapaParse.com

License: MIT License

JavaScript 90.34% CSS 7.53% HTML 2.12%
csv javascript csv-parser

papaparse's Introduction

Parse CSV with JavaScript

Papa Parse is the fastest in-browser CSV (or delimited text) parser for JavaScript. It is reliable and correct according to RFC 4180, and it comes with these features:

  • Easy to use
  • Parse CSV files directly (local or over the network)
  • Fast mode
  • Stream large files (even via HTTP)
  • Reverse parsing (converts JSON to CSV)
  • Auto-detect delimiter
  • Worker threads to keep your web page reactive
  • Header row support
  • Pause, resume, abort
  • Can convert numbers and booleans to their types
  • Optional jQuery integration to get files from <input type="file"> elements
  • One of the only parsers that correctly handles line-breaks and quotations

Papa Parse has no dependencies - not even jQuery.

Install

papaparse is available on npm. It can be installed with the following command:

npm install papaparse

If you don't want to use npm, papaparse.min.js can be downloaded to your project source.
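In that case Papa is exposed as a global; a minimal sketch (the sample input is illustrative):

// papaparse.min.js included via a <script> tag exposes a global `Papa`:
var results = Papa.parse('a,b,c\n1,2,3');
console.log(results.data); // [["a","b","c"],["1","2","3"]]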

Usage

import Papa from 'papaparse';

Papa.parse(file, config);
    
const csv = Papa.unparse(data[, config]);
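For example, parsing a CSV string with a header row (a minimal sketch; the field names are illustrative):

const results = Papa.parse('name,score\nalpha,10\nbeta,20', {
    header: true,        // first row supplies the object keys
    dynamicTyping: true  // "10" becomes the number 10
});
console.log(results.data);
// [{ name: 'alpha', score: 10 }, { name: 'beta', score: 20 }]
console.log(results.errors); // parsing errors, if any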

Homepage & Demo

To learn how to use Papa Parse, see the homepage and its demo.

The website is hosted on GitHub Pages. Its content is also included in the docs folder of this repository. If you want to contribute to it, just clone the master branch of this repository and open a pull request.

Papa Parse for Node

Papa Parse can parse a Readable Stream instead of a File when used in Node.js environments (in addition to plain strings). In this mode, encoding must, if specified, be a Node-supported character encoding. The Papa.LocalChunkSize, Papa.RemoteChunkSize, download, withCredentials and worker config options are unavailable.

Papa Parse can also parse in a Node streaming style, which makes .pipe available. Simply pipe the Readable Stream to the stream returned from Papa.parse(Papa.NODE_STREAM_INPUT, options). The Papa.LocalChunkSize, Papa.RemoteChunkSize, download, withCredentials, worker, step, and complete config options are unavailable. To register a callback with the stream to process data, use the data event like so: stream.on('data', callback), and to signal the end of the stream, use the 'end' event like so: stream.on('end', callback).
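A minimal sketch of that streaming style (the file name is illustrative):

const fs = require('fs');
const Papa = require('papaparse');

fs.createReadStream('data.csv')
    .pipe(Papa.parse(Papa.NODE_STREAM_INPUT, { header: true }))
    .on('data', (row) => console.log(row)) // one parsed row at a time
    .on('end', () => console.log('done parsing'));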

Get Started

For usage instructions, see the homepage and, for more detail, the documentation.

Tests

Papa Parse is under test. Download this repository, run npm install, then npm test to run the tests.

Contributing

To discuss a new feature or ask a question, open an issue. To fix a bug, submit a pull request and be credited among the contributors! Remember, a pull request with tests is best. You may also discuss on Twitter with #PapaParse or directly with me, @mholt6.

If you contribute a patch, ensure the test suite passes. We run continuous integration on each pull request and will not accept a patch that breaks the tests.

papaparse's People

Contributors

adiroiban, billstron, bluej100, chalker, dubzzz, edg2s, efossas, firstvertex, gabegorelick, hacknlove, iceonfire, janisdd, jscheid, jsg2021, mholt, monkeydzeke, nikolas, nlassaux, oleersoy, p4ul, pokoli, prayashm, pschlatt, robd, seachicken, theill, tijszwinkels, trevorharwell, turbo87, varunsharma27

papaparse's Issues

Ability to return meta information when aborting from the 'before' hook

Just wondering if it would make sense to be able to return some meta information when aborting the process from the before hook, which would then be passed to the error hook.

I guess the reason I am asking is that I am doing a couple of checks in the before hook, and if I abort I want to be able to differentiate between the reasons why the process was aborted. Currently, when you return false from the before hook, you simply get an error object with a name property of AbortError. But there is no way to tell what caused the error.

Is it possible, and does it make sense, to be able to pass some more information along with aborting the process? Maybe return an error object or something instead of false.
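To make the idea concrete, a hypothetical sketch (none of this exists in the current API; maxSize is illustrative):

before: function(file, inputElem) {
    if (file.size > maxSize)
        return { abort: true, reason: 'file too large' }; // hypothetical, instead of plain false
},
error: function(err) {
    // err.name is 'AbortError'; hypothetically, err.reason would carry the detail
}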

Thoughts?

/cc @mholt

The demo doesn't work on Chrome 32 and FF 27

I downloaded the demo file, normal.csv. I selected the file using the Choose Files button. Clicking Parse shows an error in the Output: Error loading normal.csv: TypeMismatchError. That's the error I get in Chrome 32.

If I try the same steps in FF 27, the Output doesn't show anything. The Firebug console, however, shows this error:

Reference error: event is not defined -- main.js line 74

Am I missing something? The OS is Windows 7 x64. I "unblocked" the normal.csv file before choosing it to be parsed. The file was downloaded to the "Downloads" folder for my user account. I have also tried this with CSV files I created on my own. Same errors.

International (support other encodings)

Hi,
I need to parse a CSV file that is in the cp1250 codepage, and there is no option for that.
You have in the code:
reader.readAsText(f.file);
but I need some option like:
reader.readAsText(f.file, 'CP1250');
A parameter in the config would be fine.
Thanks
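(For reference, later versions of Papa Parse added exactly this: an encoding config option that is passed through to FileReader.readAsText. A sketch; browsers generally spell this codepage 'windows-1250':)

Papa.parse(file, {
    encoding: 'windows-1250', // forwarded to reader.readAsText(file, encoding)
    complete: function(results) {
        console.log(results.data);
    }
});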

Changed column position when parsing

While testing some of the test CSV files I noticed some strange results. Here is a test CSV to replicate this:

"a1","b2","c1","d1","41","0"
"a2","b2","c2","d2","42","1"

With header set to false, the result is as expected and column order is preserved.

[ [
      "a1",
      "b2",
      "c1",
      "d1",
      "41",
      "0"
], [
      "a2",
      "b2",
      "c2",
      "d2",
      "42",
      "1"
]]

With header: true, the columns with numeric values are moved to the beginning.

Actual result:

"rows": [ {
        "0": "1",
        "41": "42",
        "a1": "a2",
        "b2": "b2",
        "c1": "c2",
        "d1": "d2"
}]

Expected result:

"rows": [ {
        "a1": "a2",
        "b2": "b2",
        "c1": "c2",
        "d1": "d2"
        "41": "42",
        "0": "1"
}]

Is this related to JavaScript, or is this something Papa Parse is changing? The dynamicTyping parameter does not help in any case.
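(For what it's worth, this ordering comes from JavaScript itself, not from Papa Parse: in modern engines, object keys that look like array indices, such as "0" and "41", are enumerated in ascending numeric order ahead of string keys. A quick demonstration:)

console.log(JSON.stringify({ a1: 'x', '41': 'y', '0': 'z' }));
// {"0":"z","41":"y","a1":"x"} -- integer-like keys come first, regardless of insertion order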

Special value in step function to indicate last row

Right now, when parsing a string (not talking about a file), there's no way to know when the step function is being executed on the last row. It would be really helpful to know when the input is done being parsed.

How about raising a custom event like parse.complete? Or maybe pass in a special argument with a certain value?

How to prevent dynamic typing for some columns?

First of all, your library is awesome, and it saves me a lot of time!

I use the dynamic typing option to convert numeric columns, as I perform some mathematical operations with them.
But one of my columns contains a geographical code (a French administrative district code) which typically begins with a leading zero (e.g. 01 is a valid code), and I don't want it to become 1 after conversion.

So, is there a way to prevent numeric conversion for some columns?
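(For reference, later Papa Parse versions let dynamicTyping be a function that receives the column and returns a boolean, which covers this case. A sketch; the column name is illustrative:)

Papa.parse(csvString, {
    header: true,
    dynamicTyping: function(field) {
        return field !== 'district_code'; // keep the code column a string, convert the rest
    }
});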

Stream files from the network

We can stream from file input elements, but how about the network? Would the server hosting the file have to support the Range HTTP header?

This could be awesome...

Is this a bug? text >= settings.chunkSize

I feel like this line in blobLoaded is a bug:

if (text >= settings.chunkSize)

Because text is declared earlier as var text = partialLine + event.target.result; and is later passed into parse() -- in other words, text is a string of data to parse. But settings.chunkSize is a number...

Looking into making that line if (text.length >= settings.chunkSize) instead...

Issues with classic Mac text files (CR-only line endings)

I found some older tab-delimited CSV text files made on classic Mac OS (CR-only, not CR+LF line endings) whose new rows and new fields the parser fails to recognize properly.

I think it has to do with the function isBoundary(i) (isLineEnding seems to be dead code?),
as it currently only recognizes \r + \n as a boundary and not \r by itself.
Changing isBoundary(i) to just:

...
if (ch == _config.delimiter
        || ch == "\n"
        || ch == "\r")
    return true;
else
    return false;

seems to allow \r-only files to work, without (as far as I can tell) breaking \r + \n files.

Change the name - it's lame

How about Papa Parse?

Or something better than "Parse" or "the jQuery parse plugin"...

(This might constitute a new GitHub repo URL too...)

Auto-detect delimiter

If a delimiter is not specified, how hard would it be to auto-detect it?

Might require a file with 2 or 3 rows at least, to guess-and-check. There'd be a slight, slight performance hit, but I think it would be very negligible...
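A sketch of the guess-and-check idea (illustrative only: it ignores quoted fields and is not Papa's actual implementation). Score each candidate delimiter by how consistently it splits the first few rows:

function guessDelimiter(input) {
    var candidates = [',', '\t', '|', ';'];
    var rows = input.split('\n').slice(0, 3); // 2 or 3 rows are enough to guess-and-check
    var best = ',', bestScore = 0;
    candidates.forEach(function(delim) {
        var counts = rows.map(function(line) { return line.split(delim).length; });
        var consistent = counts.every(function(n) { return n === counts[0]; });
        var score = consistent ? counts[0] : 0; // prefer consistent, multi-field splits
        if (score > bestScore) { bestScore = score; best = delim; }
    });
    return best;
}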

Firefox 24.6 (Firefox ESR), No parse for you

Papa Parse doesn't seem to work with Firefox ESR.

When I try to parse:
The "before" callback is called.
The "error" callback is not called.
The "complete" callback is not called.

This needs more testing.

not correctly parsing text string from FileReader

The parser does not parse correctly when the text string to be parsed comes directly from FileReader. The CSV file is an Outlook export file. However, if I copy the content from the same CSV file using Notepad++ and paste it into the input text area of your page, it works.

Here's the sample code that I use to submit to your parser:

function handleFileSelect(evt) {
    var files = evt.target.files;

    var reader = new FileReader();
    reader.readAsText(files[0]);  // read the first file selected
    reader.onload = function(event) {
        var csv = event.target.result;
        var data = $.parse(csv, {
            delimiter: ",",
            header: true,
            dynamicTyping: false
        });
        $('#contents').text(JSON.stringify(data, undefined, 2));
    };
    reader.onerror = function() { alert('Unable to read ' + files[0].name); };
}

The demo is not working properly in Firefox!

I was trying the demo in different browsers (IE, Chrome, and Firefox). The demo runs successfully in IE and Chrome. But in Firefox, it only displays the message "Parsing org.csv" in the output window and fails to parse the file (though it does display the correct file name in the message).

Note: I have the latest version of Firefox installed on my system.

Change "preview" to "rowLimit"?

I think "rowLimit" is less ambiguous and more self-explanatory, but "preview" does explain that we're previewing the results before we get the whole thing. Preferences?

Stream large files instead of loading the whole thing

For very large files, it would be nice not to load the whole thing into memory. A config option to stream the file could be made available to have a callback executed after reading each line and parsing it into a structure. The user would then be responsible for handling each line of the file as it is streamed. You wouldn't be able to examine the entire result "at once" but you would see one row at a time.

Memo to self, see this tutorial: http://www.html5rocks.com/en/tutorials/file/dndfiles/
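The config option this turned into looks roughly like the following (a sketch; handleRow stands in for whatever the user does with each row):

Papa.parse(hugeFile, {
    step: function(row, parser) {
        // Called once per parsed row; the whole file is never held in memory.
        handleRow(row.data);
    },
    complete: function() {
        console.log('All rows processed.');
    }
});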

"Unparse" - convert JSON to CSV

Papa has never done this because converting from JSON to CSV is actually pretty easy (you just have to be careful to put quotes around fields that need them). So why not bundle it in? Typically, web apps that import CSV should also be able to export CSV.

I envision its usage being something as simple as:

var csv = Papa.unparse(arrayOfObjects);

You can pass in an array of objects (which writes out a header row as the first line) or an array of arrays (no header row). You can also specify data and fields separately if you want to be certain to have control:

var csv = Papa.unparse({
    fields: arrayOfFieldNames,
    data: arrayOfActualData
});

I've already written this and just need to integrate it with Papa's 3.0 API.

Get parser under test

As I prepare to roll out version 0.6 and eventually 1.0, I'm trying to get the CSV parser under some rigorous integration tests.

Should be coming soon.

How can I keep the quotes with the parse output?

If I want to parse text which contains quotes, this parser omits the quotes and gives back the bare text. For example, "abc","xyz" is parsed correctly, but what if I want ""abc"","xyz", so that the inner quotes are preserved? Is there any way to do that?

Dealing with quotes within fields

Fantastic plugin! Can I request an enhancement/report a bug?

If I have quotes within a field (example below), then these are marked as errors by the parser and removed. Depending on your point of view, the parser should either ignore them or have an option to ignore them. In the example below, there are 3 fields in a tab-delimited string. When parsed, the 3rd field, which ideally should come through as an unchanged string, has all of its quotes removed.

1\tClientInformation\t{"version":"12.001","deviceName":"xxxx","manufacturer":"yyyy","operatingSystemVersion":"Android 4.5"}

Run in a web worker

It looks quite possible to run the parsing function in a web worker, which should prevent the tab from locking up, even on large files.

Of course, there would be significant overhead when transferring parse results to the main thread. (Only copies are made, you can't pass pointers across threads in web workers.) Maybe this is still feasible for streaming, though...

Update, 12 May 2014: Ha! I think I figured out an elegant way to do this. This is gonna be cool.
Update, 9 Jun 2014: Even better, there are transferable objects in some browsers, which take away the overhead of copying large amounts of data across threads. Perfect.
Update, 11 Jun 2014: Using workers turns out to be slower in most cases, but it does leave the tab responsive, which is the goal.
Final update, 20 June 2014: There's more to this than it seems. Web workers are good when you want responsiveness above all, even speed. I don't think I'll need to use transferable objects at all, since I discovered that web workers can be sent File lists directly and can use FileReaderSync to read them synchronously (which is fine, since it's a worker thread). This will drastically simplify the core Parser component, since now we can just use readAsText and give it strings rather than having to deal with transferable objects (like ArrayBuffers). This way, workers will be able to do the parsing, and the only thing the main thread has to do is receive the results, in chunks (if streaming), and process them.
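A minimal sketch of that design (the file name and message shape are illustrative):

// worker.js -- runs off the main thread
self.onmessage = function(e) {
    var file = e.data; // File objects can be posted to a worker directly
    var text = new FileReaderSync().readAsText(file); // synchronous is fine in a worker
    // ...run the parser over `text` here, then ship results back:
    self.postMessage({ finished: true });
};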

Suggest option to strip spaces from headers that are used to create object attributes/keys to conform to JSON

I've provided a patch file at https://gist.github.com/dmcnaugh/9776383 as an example of the enhancement I am proposing.

Some libraries/frameworks like to see JSON conforming objects (i.e. no spaces in object attributes/keys).
Some CSV files have spaces in the headings.

The simplest place to resolve this is in the CSV parser.

In my example patch I am overloading the "header" option to be either a boolean (existing functionality) or the string "conform" (which I have made the default for my purposes), which will strip spaces out of headers before they are used to create the object attributes/keys.
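(For reference, later Papa Parse versions address this more generally with a transformHeader config callback; a sketch:)

Papa.parse(csvString, {
    header: true,
    transformHeader: function(h) {
        return h.replace(/\s+/g, ''); // strip spaces before keys are created
    }
});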

streaming files not returning any records

I may be missing something or have an incorrect setup, but I am doing this (CoffeeScript):

$("input#files").parse
  config:
      delimiter: config.delimiter
      header: config.headers
      dynamicTyping: true
      step: (data, file, el) ->
         console.log('step: ', data);
  error: (err) ->
    console.log "parsing error: ", err

The parsed.fields (headers) are parsed correctly, and step gets executed once per row; however, parsed.rows is empty each time.

When I looked at the source, it looked like a problem in saveValue() [line 434]. I could never find where 'currentRow' gets pushed into parsed.rows.

The code was executing in the following 'block' (Ruby term):

// jquery.parse.js LINE 445+
if (fieldName) {
    if (_config.dynamicTyping)
        _state.fieldVal = tryParseFloat(_state.fieldVal);
    currentRow[fieldName] = _state.fieldVal;

    // WHERE does _state.parsed.rows get data added in this block?? or where else?
}
else ...

P.S. Loving the library! Ever thought of adding JSON file support?

Delimiter autodetection picked wrong delimiter

I had a two-line, 10-column tab-separated file to parse. The first line was headers; on the second line, one of the fields had two commas in it (a big number with "," as the thousands separator). The delimiter autodetection picked comma instead of tab, which was not my expectation.

Using Papa Parse

Hi Matt,

First of all, great Parsing Tool.

I have a question (a "n00b" question, to be honest) and I hope you can help me.

Recently, I needed to parse some CSV files (comma-separated values) with many lines and assign them to an array. While searching for how to do it I encountered Papa, and it appears to do the job flawlessly, but I have a problem since I'm fairly new to JavaScript: I don't know how to use it.

I have read the documentation page on Papa's website a hundred times by now, but whenever I try to call a Papa function I get nothing.

The setup is the following:

  • I need to call it from another .js file (which I believe is possible).
  • I need to parse a local file with a path (which I believe is possible according to your answer here: #35 (comment) ).

However, when I try that function nothing happens. I believe I'm missing some important step in the middle, but no matter how much I review your documentation page I can't seem to solve the problem.

Can you please give me an example of how to call it from another JS file? Because, honestly, I can't make it work, and I do like the way Papa works with files.

Once again, I'm sorry if this is too n00bish of a question.

Thanks for your time!

Trouble with file inputs

Hi,

I'm having trouble using this when I have file inputs. My code is really simple and I can't seem to figure out what I'm doing wrong. Does anybody have any feedback, please?

sample.csv is a simple CSV file with 1 header row and 2 rows of values, 2 columns each.

Thanks!

<html> <head> <title>Test</title>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script> <script src="jquery.parse.min.js"></script>
<script>
var test = $('sample.csv').parse({
   config: {
      delimiter: ",
      header: true,
      dynamicTyping: true,
      preview: 10,
      step: undefined,
      encoding: "UTF-8"
   }
});
</script>
</head> <body> </body>

Woah, parser should only go through the input once

While investigating #10, I added a console.log every time a row of input ended (the internal endRow() function). No wonder huge files were taking a long time; a file with just 26 lines shows this output:

[console screenshot: endRow() fires far more times than the 26 input lines]

In this case, I'm having it guess the delimiter (which makes sense; it tries 5 delimiters before choosing the best one) -- but it should only read a few rows for guessing, not all of them.

Whoops. Obviously, this is unacceptable for a so-called efficient parser... looking into this now...

Line breaks in quoted fields can break streaming (uncommon)

(Update: this was back in the 2.x days, but the same bug still applies.)

The new blobLoaded function helps chunk large files by assembling lines that get split into two different chunks during the parsing process. Currently, it scans the end of the chunk for a line break and does the split and join there. But if that line break is in a quoted field and is part of the data, this could break the file.

Solution? Not necessarily efficient, but it could work: look at the remaining data ("partialLine") and make sure there is an even number of quotes. Keep finding the lastIndexOf of the line break, going backwards, until there is an even number of quotes.

We might also do well to limit the number of characters we search through with lastIndexOf.
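A sketch of that backward scan (names illustrative):

// Walk line breaks from the end of the chunk backwards until the text after
// the break (the would-be partialLine) contains an even number of quotes,
// i.e. the break is not inside a quoted field.
function safeSplitIndex(chunk) {
    var idx = chunk.lastIndexOf('\n');
    while (idx > 0) {
        var partialLine = chunk.substring(idx);
        var quotes = (partialLine.match(/"/g) || []).length;
        if (quotes % 2 === 0)
            return idx; // even quote count: safe place to split
        idx = chunk.lastIndexOf('\n', idx - 1);
    }
    return -1; // no safe break found; carry the whole chunk into the next read
}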

By the way, this situation is very uncommon...

Configurable max number of errors

The user should be able to cap the number of errors (default: no cap). If that many errors are discovered, parsing aborts at that point...

Should the user also be able to configure whether they want the errors grouped by row (how it is currently done) or just appended to a flat array (easier to iterate)?

Rookie Question...accessing field value of parsed data

I'm just learning JavaScript and am trying to figure out how to retrieve a series of values. Ultimately, I'm trying to make a graph of some logfile data (software license usage). This is not an issue with the library, but I didn't know where else to post the question.
My CSV file looks like this (it has multiple rows of data, all the same; just one row is shown as an example).

Date,Event,Computer,User,Product,Type,Total,Used
2/2/2014 11:58:15 AM,License returned,WS-MyComputer.olinptr.com,SomeUser,SomeApplication,Release,6,2

I am able to get the initial parsing done with the following code:

$.ajax({
    url: "20140201.csv",
    dataType: 'text',
    cache: false
}).done(function(csvString) {
    //var data = $.parse(csvString);
    //console.log("Row data", data.results);
    var results = $.parse(csvString, {
        delimiter: ",",
        header: true,
        preview: 10,
        dynamicTyping: false,
        step: function(data, file, inputElem) {
            console.log(data.results);
        }
    });
});

The console log returns this:

-Object {fields: Array[8], rows: Array[1]}
fields: Array[8]
0: "Date"
1: "Event"
2: "Computer"
3: "User"
4: "Product"
5: "Type"
6: "Total"
7: "Used"
length: 8
__proto__: Array[0]

The first thing I am trying to learn is just how to return the values in the "User" field.
The ultimate goal is to aggregate the log files (the data is over time, with multiple entries per day and each CSV file recording usage by month), then take that data and feed it to Highcharts.

But before all that, could you help me understand how to access a field value in an array? Any assistance would be greatly appreciated.
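(Given the structure logged above, each element of data.results.rows is an object keyed by the header fields, so inside the step callback shown earlier the value can be read by key; a sketch:)

step: function(data, file, inputElem) {
    data.results.rows.forEach(function(row) {
        console.log(row['User']); // e.g. "SomeUser"
    });
}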

not parsing rows that do not have value in some fields

When a value is missing in any field of a row, that row is not returned, i.e. it is dropped. Absent values in fields are very common; for example, when exporting contacts from Outlook into a CSV file, there are very often empty fields.

What browsers is papaparse compatible with?

I couldn't find anything about browser compatibility. My main concern is IE 8, but I also need to support more up-to-date versions of Firefox and Chrome.

Will papaparse work with IE 8?

Attach to file input elements to parse files

Right now the parser only accepts a string as input. Version 1.0 of the plugin should be able to bind to <input type="file"> elements and parse a file while the FileReader is abstracted away. The plugin could then be used in two ways:

// LOAD AND PARSE A FILE:
// The input element should already have a file chosen.
// Just pass in one argument.
$('#fileInputElement').parse({
    before: function(file, inputElem)
    {
        // Optional.
        // Executed before parsing a file begins.
        // This gives you a chance to inspect the
        // metadata of the file that will be loaded and parsed.
    },
    error: function(err [, file [, inputElem ] ])
    {
        // Optional.
        // Called if the FileReader fails to open the file.
        // Returned object has a DOMError interface, so err.name gives you the error message.
    },
    complete: function(results [, file [, inputElem [, event ] ] ])
    {
        // Required (to get the results of parsing).
        // This callback is executed after parsing completes.
        // It gives you the results.
    },
    config: {
        // Optional.
        // Specify: delimiter, header, and/or dynamicTyping.
    }
});


// PARSE A STRING:
// Just like it is now. Pass in two arguments:
// the string, and the parsing options.
results = $.parse(csvString, {
    delimiter: ",",
    header: true,
    dynamicTyping: true
});

Under the hood, the first way of calling parse would call the second way once the string has been loaded.

This will support multiple files and multiple file input elements.

The underlying Parser wouldn't need to use jQuery but the plugin can be used with a DOM element (or without).

Support parsing CSV input with comments

CSV with commented lines is non-standard, but appears to be somewhat common in academic or scientific settings. So.... might as well add that in. (Thanks to Rune for the enhancement in #51)

Slow parsing in newest version

Something has caused 2.1.4 to take much longer to parse.

I was using the newest version (2.1.4), and parsing a 23 KB file was taking extremely long. I tried the demo on your page and noticed a much faster load time, and that the version there was 2.1.3. So I copied that minified JS to my site and it has been parsing much faster.

For some insight: when I added console.log('for loops') inside a for loop on line 251, the output is:
96429 for loops parse.js?body=1:254
23228 for loops parse.js?body=1:254

Uncaught TypeError when first line of file is empty

I'm setting 'delimiter' to ',' and 'dynamicTyping' to false. When uploading a file with a blank first line, rather than calling the error function, Papa Parse throws an exception:

Uncaught TypeError: Cannot call method 'push' of undefined

This is thrown on line 439 of the unminified jquery.parse.js:

_state.parsed.fields.push(_state.fieldVal)

Group errors by row instead of as an array

It might also be nice if we did this and the errors were grouped by row: as the user reads the parsed data after parsing completes, a simple check could be made to see whether errors[lineNumber] (an array) has any errors in it. Right now, errors is an array instead of an object, making it difficult to find the errors on the row you're processing. It would be useful to know, when we get to row 3 say, whether row 3 had any errors.

Right now, with errors in a flat array, it's easy to iterate, but how often will iterating all the errors actually be useful? It might be better to get errors by row rather than the "next" error.

Might look like this:

errors: {
    0: [ /* error objects */ ],
    3: [ /* the keys are row numbers */ ],
    length: 2 /* total error count */
}

Broken when used in Ember.js app.

This script seems to break when used in an Ember.js app. Ember extends the native Array prototype (http://emberjs.com/guides/configuring-ember/disabling-prototype-extensions/), which breaks quite a lot of the loops in the code.

When you use for (var i in array), the extra prototype functions are also visited by the loop, which breaks code that assumes only numeric indices (such as anything relying on .length).

I tried to fix a few of the loops myself using .forEach(), but I started getting Maximum Call Stack Exceeded errors in Chrome.

I hope that makes sense.
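The failure mode is easy to reproduce (the extension name below is illustrative):

// Simulate an Ember-style prototype extension:
Array.prototype.myExtension = function() {};

var arr = ['a', 'b'];
for (var i in arr)
    console.log(i); // logs "0", "1", and "myExtension"

// A plain index-based loop visits only the real elements:
for (var j = 0; j < arr.length; j++)
    console.log(arr[j]);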
