classic-level's Issues

`linux-arm` and `linux-arm64` prebuild comes with GLIBC which is too new for Raspberry Pi

The readme lists Raspberry Pi as supported, but we've traced down a GLIBC issue that means this is not the case

Reproduced on an ARM64 laptop running Ubuntu 18:

Error: /lib/aarch64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/cswendrowski/foundryvtt/resources/app/node_modules/classic-level/prebuilds/linux-arm64/node.napi.armv8.node)

Ubuntu 18 ships with GLIBC 2.27, Raspbian and Pi OS 10 ship 2.28, and Pi OS 11 ships 2.29

On an x64 machine also running Ubuntu 18, the prebuild is compiled against an older version of GLIBC and runs correctly on 2.27

Any guidance or assistance would be appreciated

Are writes ordered?

I see that we are using napi_create_async_work. However, I can't find anywhere whether work items are guaranteed to be executed in order. If they are not, we might have a problem with write ordering, i.e. an earlier write overwriting a later one.
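
For illustration, a sketch of the hazard with two writes to the same key (hypothetical usage, assuming an open db instance):

// If the async work items for these puts ran out of order,
// the first put could be applied last and 'old' would win:
db.put('key', 'old', (err) => {})
db.put('key', 'new', (err) => {})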

Do we need to create our own per db ordered work queue?

Seek data race

I believe we have a data race for Seek. We execute seek in the main thread but call next in the worker thread. This means we concurrently access the same state under Iterator, in particular didSeek_ and first_. I'm not sure how big a problem this is, or whether there are further issues of the same kind.
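
A sketch of the interleaving in question (hypothetical usage, not a confirmed reproduction):

const it = db.iterator()
it.next(() => {}) // executes on a worker thread, reads didSeek_ and first_
it.seek('target') // executes on the main thread, mutates the same state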

Specify a custom prebuilds location

My application is bundled for production use without node_modules installed. Currently classic-level looks for the binaries either in node_modules or in ./prebuilds (as far as I know). It would be nice to be able to specify a custom path upon initialization (e.g. as a config option).
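
Something like the following, where the prebuildsPath option is hypothetical and not part of the current API:

const { ClassicLevel } = require('classic-level')
const db = new ClassicLevel('/var/lib/app/db', {
  prebuildsPath: '/opt/app/prebuilds' // hypothetical option, does not exist today
})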

Meaning of "classic"

Hi, thank you for the very nice library.
Sorry, this might be a stupid question.
What does "classic" mean?
Is it not modern? Is there a "new" one that should be used instead?
I am confused because my English is poor.

`classic-level` rebuild fails when running `electron-forge package`

problem

I am using level as my local db server, and when packing my electron app I got this error and could not fix it.
I tried using a different version of electron, but got the same result.

 make: *** No rule to make target `Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o', needed by `Release/leveldb.a'.  Stop.
  Error: `make` failed with exit code: 2
  at ChildProcess.onExit (/Users/neptune/github/xxx/app/node_modules/node-gyp/lib/build.js:203:23)
  at ChildProcess.emit (node:events:513:28)
  at ChildProcess._handle.onexit (node:internal/child_process:291:12)

An unhandled rejection has occurred inside Forge:
Error: node-gyp failed to rebuild '/private/var/folders/v6/51277h6j2258m8yvl9rknkhr0000gp/T/electron-packager/darwin-arm64/nnk-24h-live-darwin-arm64-MeCAK9/Electron.app/Contents/Resources/app/node_modules/classic-level'
at ChildProcess.<anonymous> (/Users/neptune/github/xxx/app/node_modules/@electron/rebuild/lib/module-type/node-gyp/node-gyp.js:118:24)
    at ChildProcess.emit (node:events:513:28)
    at ChildProcess._handle.onexit (node:internal/child_process:291:12)

env

node version v18.15.0
electron abi 116
electron version v25.8.1
classic-level 1.3.0

deps

├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @electron-forge/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── @types/[email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
├── [email protected]
└── [email protected]

"database is not open" on NFS

We (well... the wife) are users of the proprietary software mentioned in #63.

They just say "must not use NFS", and the reason appears to be somewhere in this library, since leveldb itself works just fine on NFS.

When opening a db on NFS, we just get

FoundryVTT | 2023-07-18 18:46:15 | [error] Database is not open
Error: Database is not open
    at LevelDatabase.keys (/home/foundry/resources/app/node_modules/abstract-level/abstract-level.js:697:13)
    at LevelDatabase.compactFull (file:///home/foundry/resources/app/dist/database/backend/level-database.mjs:1:1647)
    at LevelDatabase.close (file:///home/foundry/resources/app/dist/database/backend/level-database.mjs:1:1171)
    at Module.disconnect (file:///home/foundry/resources/app/dist/database/database.mjs:1:1658)
    at World.deactivate (file:///home/foundry/resources/app/dist/packages/world.mjs:1:11206)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)

Unfortunately the relevant code is obfuscated.

strace reveals no obvious reason why NFS would be a relevant factor; there is no call to flock.

/home/foundry # strace -f -p 87 2>&1 | grep messages
[pid    87] access("/data/Data/worlds/kmkm/data/messages", F_OK) = 0
[pid    94] mkdir("/data/Data/worlds/kmkm/data/messages", 0777 <unfinished ...>
[pid    96] statx(AT_FDCWD, "/data/Data/worlds/kmkm/data/messages", AT_STATX_SYNC_AS_STAT, STATX_ALL,  <unfinished ...>
[pid    95] mkdir("/data/Data/worlds/kmkm/data/messages", 0755 <unfinished ...>
[pid    95] rename("/data/Data/worlds/kmkm/data/messages/LOG", "/data/Data/worlds/kmkm/data/messages/LOG.old" <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/LOG", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666 <unfinished ...>
[pid    95] mkdir("/data/Data/worlds/kmkm/data/messages", 0755 <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/LOCK", O_RDWR|O_CREAT|O_LARGEFILE, 0644 <unfinished ...>
[pid    95] access("/data/Data/worlds/kmkm/data/messages/CURRENT", F_OK <unfinished ...>
[pid    95] open("/data/Data/worlds/kmkm/data/messages/CURRENT", O_RDONLY|O_LARGEFILE) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages/MANIFEST-000002", O_RDONLY|O_LARGEFILE) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages", O_RDONLY|O_LARGEFILE|O_CLOEXEC|O_DIRECTORY) = 39
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000003.log", O_RDONLY|O_LARGEFILE) = 39
[pid    95] readv(39, [{iov_base="|\360\244_ \r\1\1\0\0\0\0\0\0\0\2\0\0\0\1\32!messages!J"..., iov_len=32767}, {iov_base="", iov_len=1024}], 2) = 3367
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.ldb", O_WRONLY|O_CREAT|O_TRUNC|O_LARGEFILE, 0666) = 39
[pid    95] writev(39, [{iov_base="", iov_len=0}, {iov_base="\242\32\200\0\"\253\f!messages!JMUZweuXaLnRGbw"..., iov_len=1950}], 2) = 1950
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.ldb", O_RDONLY|O_LARGEFILE) = 39
[pid    95] stat("/data/Data/worlds/kmkm/data/messages/000005.ldb", {st_mode=S_IFREG|0644, st_size=2107, ...}) = 0
[pid    95] open("/data/Data/worlds/kmkm/data/messages/000005.sst", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
[pid    95] unlink("/data/Data/worlds/kmkm/data/messages/000005.ldb") = 0

felddy/foundryvtt-docker#735

Snapshots caveats

There are some caveats regarding snapshots and read/write ordering that are not entirely obvious.

e.g. in the following case:

const promise = db.put(...)
await db.get(...)
await promise

The put will not be visible to subsequent reads until after it has completed, i.e. it is not enough to schedule the write for it to be visible to reads; it actually has to have completed first.
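
In other words, the safe pattern is to await the write before reading (a minimal sketch with a hypothetical key):

await db.put('key', 'value')      // wait for the write to complete
const value = await db.get('key') // guaranteed to observe the put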

What is the difference between them?

I'm trying to use LevelDB in my program. You have done lots of brilliant work to bring LevelDB to Node.js, but I found the repository a little confusing. Maybe there should be a graph that makes the relationships between the packages clear to users. It's just my little suggestion.

I am not sure, but I guess their relationship is something like this: the abstract package is the version written entirely in async/await JavaScript, leveldown wraps the C++ LevelDB binary with a convenient interface, and levelup adds Node.js promise-like features on top of leveldown.

A database modified in Linux fails to open in Windows?

We have been working with classic-level as the data backend for an application that supports Windows/Mac/Linux. The data stored by the application needs to be portable. We had not encountered problems with this, but I have today encountered a scary problem where a database most recently updated in a Linux environment can no longer be opened in Windows:

Context

  • The database was loaded and modified in Amazon Linux 2 using Node v16.19.0
  • The database is then attempted to be opened in Windows 11 using Node v18.13.0
  • Both environments using classic-level 1.2.0

Error

When attempting to call await db.open():

ModuleError: Failed to connect to database "actors": Database is not open
    at maybeOpened (C:\Users\aaclayton\Documents\Foundry\foundryvtt\node_modules\abstract-level\abstract-level.js:133:18)
    at C:\Users\aaclayton\Documents\Foundry\foundryvtt\node_modules\abstract-level\abstract-level.js:160:13
    at process.processTicksAndRejections (node:internal/process/task_queues:77:11) {
  code: 'LEVEL_DATABASE_NOT_OPEN',
: The filename, directory name, or volume label syntax is incorrect. C:\Users\aaclayton\Documents\Foundry\FoundryData\Data\worlds\ember-dev\data\actors/MANIFEST-000021
  ] {
    code: 'LEVEL_IO_ERROR'
  }
}

Contents of the database folder are as follows:

[screenshot of the database folder contents]

The CURRENT file does reference MANIFEST-000021.

This error is very alarming, because we had been operating under the expectation that the databases created and modified by LevelDB (and therefore classic-level) are portable. Is this expectation wrong? Are there any troubleshooting steps or further details that I can provide to help diagnose the problem?

The absolute path (location) which I'm providing to the ClassicLevel constructor is C:\Users\aaclayton\Documents\Foundry\FoundryData\Data\worlds\ember-dev\data\actors. Something internal to ClassicLevel (or LevelDB) is appending the manifest file name to this path but using a / separator rather than a Windows path separator. I don't know whether this is part of the issue or innocuous.

This error is not reproducible with every DB I create in a Linux environment and transfer to Windows. I have had success moving databases around before (in either direction). I had not encountered this error before, but it is highly troubling. Thank you very much for your guidance.

Auto open?

I might be misunderstanding something, but I thought it was not necessary to explicitly open and wait for the db to open? However, I'm still getting a Database is not open error when doing something like:

const db = new ClassicLevel(...)
const data = db.getMany(...) // kStatus === 'closed'
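
For reference, explicitly opening the database first does avoid the error (a minimal sketch with hypothetical keys):

import { ClassicLevel } from 'classic-level'

const db = new ClassicLevel('./db')
await db.open()
const data = await db.getMany(['a', 'b'])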

Workers

I see that leveldb supports access from multiple threads. Wondering what possibilities this has for improved performance, i.e. in theory we could allow access to the same db from multiple workers?

When snapshot is created with `db.get()`

Hi, thank you for the great library.

I have a question about the document for snapshot with db.get().
https://github.com/Level/classic-level#dbgetkey-options-callback

The documentation says the snapshot is created at the time db.get() is called.
I thought this means the snapshot holds the latest data at the time of the call.
However, the documentation also says it should not see the data of simultaneous write operations.

Can the data be changed after db.get() is called (and the snapshot is created) but before db.get() returns the data?
That is, I understand that an iterator might return old data because it reads lazily, but db.get() does not. So I thought db.get() returns the latest data before something changes it.
Or, is there a way to get the latest data?
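
To illustrate the scenario I'm asking about (hypothetical key names):

const read = db.get('key')     // per the docs, the snapshot is taken here
await db.put('key', 'updated') // a write issued while the get is in flight
const value = await read       // per the docs, should not see 'updated'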

Thank you.

Build error?

Trying to build fails with missing snappy header.

classic-level$ npm i

> [email protected] install
> node-gyp-build

gyp info it worked if it ends with ok
gyp info using [email protected]
gyp info using [email protected] | darwin | arm64
gyp info find Python using Python version 3.9.10 found at "/opt/homebrew/opt/[email protected]/bin/python3.9"
gyp info spawn /opt/homebrew/opt/[email protected]/bin/python3.9
gyp info spawn args [
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/gyp/gyp_main.py',
gyp info spawn args   'binding.gyp',
gyp info spawn args   '-f',
gyp info spawn args   'make',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/build/config.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/addon.gypi',
gyp info spawn args   '-I',
gyp info spawn args   '/Users/ronagy/Library/Caches/node-gyp/17.8.0/include/node/common.gypi',
gyp info spawn args   '-Dlibrary=shared_library',
gyp info spawn args   '-Dvisibility=default',
gyp info spawn args   '-Dnode_root_dir=/Users/ronagy/Library/Caches/node-gyp/17.8.0',
gyp info spawn args   '-Dnode_gyp_dir=/Users/ronagy/GitHub/classic-level/node_modules/node-gyp',
gyp info spawn args   '-Dnode_lib_file=/Users/ronagy/Library/Caches/node-gyp/17.8.0/<(target_arch)/node.lib',
gyp info spawn args   '-Dmodule_root_dir=/Users/ronagy/GitHub/classic-level',
gyp info spawn args   '-Dnode_engine=v8',
gyp info spawn args   '--depth=.',
gyp info spawn args   '--no-parallel',
gyp info spawn args   '--generator-output',
gyp info spawn args   'build',
gyp info spawn args   '-Goutput_dir=.'
gyp info spawn args ]
gyp info spawn make
gyp info spawn args [ 'BUILDTYPE=Release', '-C', 'build' ]
  CXX(target) Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o
In file included from ../deps/leveldb/leveldb-1.20/db/builder.cc:7:
In file included from ../deps/leveldb/leveldb-1.20/db/filename.h:14:
In file included from ../deps/leveldb/leveldb-1.20/port/port.h:16:
../deps/leveldb/leveldb-1.20/port/port_posix.h:43:10: fatal error: 'snappy.h' file not found
#include <snappy.h>
         ^~~~~~~~~~
1 error generated.
make: *** [Release/obj.target/leveldb/deps/leveldb/leveldb-1.20/db/builder.o] Error 1
gyp ERR! build error 
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/Users/ronagy/GitHub/classic-level/node_modules/node-gyp/lib/build.js:194:23)
gyp ERR! stack     at ChildProcess.emit (node:events:527:28)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)
gyp ERR! System Darwin 21.3.0
gyp ERR! command "/Users/ronagy/.nvm/versions/node/v17.8.0/bin/node" "/Users/ronagy/GitHub/classic-level/node_modules/.bin/node-gyp" "rebuild"
gyp ERR! cwd /Users/ronagy/GitHub/classic-level
gyp ERR! node -v v17.8.0
gyp ERR! node-gyp -v v9.0.0
gyp ERR! not ok 
npm ERR! code 1
npm ERR! path /Users/ronagy/GitHub/classic-level
npm ERR! command failed
npm ERR! command sh -c node-gyp-build

raw mode?

In order to get key + value we use entries, which is an array of [key, val] pairs. However, allocating an array for each key-value pair is slow. It would be nice if we could somehow return a flattened array, with key = xs[idx * 2 + 0] and val = xs[idx * 2 + 1], and avoid O(n) array allocations.
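
For illustration, a sketch of how such a flattened result might be consumed; the flat layout is hypothetical and not part of the current API:

const xs = ['a', 1, 'b', 2] // hypothetical flat result: [key0, val0, key1, val1, ...]
for (let idx = 0; idx * 2 < xs.length; idx++) {
  const key = xs[idx * 2 + 0]
  const val = xs[idx * 2 + 1]
  console.log(key, val) // process the pair without allocating a [key, val] array
}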

Compacting the database on close: advice or feature request

Context

Our application opens and closes databases during its lifecycle. When we are done working with a certain database we would like to compact it so that it is closed in the most disk-space efficient format possible. It appears to me that the database is compacted when it is first opened, but not when it is closed.

Feature Request

Ideally, the db.close() method would support an option for doing this as part of the close workflow, something like:

await db.close({compact: true});

Other Solutions

We tried to perform compaction manually but encountered some unexpected outcomes and consequences. If the above feature request is not viable some advice would be helpful. My thought was to create a function that would compact the entire key range of the DB, as follows:

  /**
   * Compact the entire database.
   * See https://github.com/Level/classic-level#dbcompactrangestart-end-options-callback
   * @returns {Promise<void>}
   */
  async compactFull() {
    // Discover the first key in the database
    const i0 = this.keys({limit: 1, fillCache: false});
    const k0 = await i0.next();
    await i0.close();
    // Discover the last key in the database
    const i1 = this.keys({limit: 1, reverse: true, fillCache: false});
    const k1 = await i1.next();
    await i1.close();
    return this.compactRange(k0, k1, {keyEncoding: "utf8"});
  }

Unfortunately, calling this method is producing the opposite effect of what I had anticipated - disk utilization is nearly doubled. Here are the file sizes I measured:

// db.open()
1,088,289 bytes

// Modify some records
1,089,415 bytes

// Call compactFull()
2,245,000 bytes

// db.close()
2,245,000 bytes

// db.open()
1,156,988 bytes

// Modify some records
1,157,189 bytes

// Call db.compactFull()
2,244,613 bytes

// db.close()
2,244,613 bytes

// db.open()
1,088,522 bytes

Is there a flaw in the way I am using the compactRange method? Is there a misunderstanding of what should happen when compacting a range? Is there some other way to solve our use case?

Thank you for guidance!

Infinite compaction loop?

I'm trying to access an Ethereum DB which is about 1 TB in size.

Immediately on calling open(), the leveldb LOG file starts printing the following lines over and over again:

2023/01/25-19:00:25.972517 7f512de5e700 Compacting 1@2 + 8@3 files
2023/01/25-19:00:25.983790 7f512de5e700 Generated table #13798933@2: 1426 keys, 524408 bytes
2023/01/25-19:00:25.989548 7f512de5e700 Generated table #13798934@2: 1335 keys, 484178 bytes
2023/01/25-19:00:25.994915 7f512de5e700 Generated table #13798935@2: 1443 keys, 534255 bytes
2023/01/25-19:00:26.000565 7f512de5e700 Generated table #13798936@2: 1405 keys, 507921 bytes
2023/01/25-19:00:26.007436 7f512de5e700 Generated table #13798937@2: 1484 keys, 527112 bytes
2023/01/25-19:00:26.012568 7f512de5e700 Generated table #13798938@2: 1535 keys, 559004 bytes
2023/01/25-19:00:26.019553 7f512de5e700 Generated table #13798939@2: 2506 keys, 548458 bytes
2023/01/25-19:00:26.024701 7f512de5e700 Generated table #13798940@2: 2276 keys, 361713 bytes
2023/01/25-19:00:26.024714 7f512de5e700 Compacted 1@2 + 8@3 files => 4047049 bytes
2023/01/25-19:00:26.056418 7f512de5e700 compacted to: files[ 0 66 842 7135 65110 406111 105772 ]
2023/01/25-19:00:26.312935 7f512de5e700 Delete type=2 #13798056
2023/01/25-19:00:26.319244 7f512de5e700 Delete type=2 #13798932
2023/01/25-19:00:26.454692 7f512de5e700 Delete type=2 #13740731
2023/01/25-19:00:26.454883 7f512de5e700 Delete type=2 #13740732
2023/01/25-19:00:26.454990 7f512de5e700 Delete type=2 #13740733
2023/01/25-19:00:26.455085 7f512de5e700 Delete type=2 #13740734
2023/01/25-19:00:26.455182 7f512de5e700 Delete type=2 #13740735
2023/01/25-19:00:26.455278 7f512de5e700 Delete type=2 #13740736
2023/01/25-19:00:26.455373 7f512de5e700 Delete type=2 #13740737

Trying to read from the DB takes longer than usual, and the longer I wait, the slower reads become.

Any pointers at what to look at?
Can I somehow disable compaction when opening a DB?

Thanks!

No native build was found for platform=darwin arch=arm64

I am working on a Mac M1 chipset. After running the "prebuild-darwin-x64+arm64" script ("prebuildify -t 8.14.0 --napi --strip --arch x64+arm64"),
I got this error:
No native build was found for platform=darwin arch=arm64 runtime=node abi=111 uv=1 armv=8 libc=glibc node=19.8.1

Possible memory leak when putting a value with the same key

When putting a value into the db with an existing key (and the same value), I see that the size of the db folder just grows. The expected behaviour is that it remains the same size when I re-insert an entry, since the old value is overwritten.

I use

store.put(key, value, { valueEncoding: "view" })

I get the same results using the batch put method.
I get the same results using sync: true

Deleting the entry seems to clear up all the data that has accumulated from all insertions on the same key.

I am using version "1.3.0"

QOL API methods

While we're contributing, we were wondering if you had any interest in any of the following quality of life methods we've added to our wrapper for ClassicLevel:

  1. compactFull - Convenience function that discovers the first and last key and calls compactRange
  2. size - Convenience function that discovers the first and last key and calls approximateSize
  3. has - Definitely a feature that would be better if it existed upstream, but iterates keys until the specified key is discovered (if at all); see the sketch below
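
For example, has could be approximated in userland with a single-key range rather than a scan (a sketch against the abstract-level iterator API, not what our wrapper literally does):

async function has (db, key) {
  // Iterate at most one key in the single-key range [key, key]
  for await (const k of db.keys({ gte: key, lte: key, limit: 1 })) {
    return true
  }
  return false
}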

Poll: should the location directory be created recursively?

Follow-up for #6.

In the following example, should the location directory be created recursively? Such that, if the foo directory does not exist, it will be created (on open) rather than yielding an error?

const db = new ClassicLevel('foo/bar')

Creating it recursively is the current behavior of classic-level (and is new compared to leveldown), which may break expectations given typical filesystem behavior; or it could be a convenient feature, if the database is considered to abstract away the filesystem.

React with thumbs up to create the directory recursively, react with thumbs down to yield an error. This question is about what the default behavior should be, so I'm purposefully not including a poll option to make either behavior opt-in.

Avoid passing options objects to C++

If we have to pass, say, the fillCache and asBuffer options from JS to C++, it's faster to pass those as boolean arguments rather than as a { fillCache, asBuffer } object.

In cases where we can't replace use of options objects, we can still optimize that, by replacing napi_has_named_property() and napi_get_named_property() with just one napi_get_named_property() call and checking if the return value is napi_ok.

missing try/catch for bad_alloc

LevelDB doesn't throw exceptions per se, but both the bindings and LevelDB itself use STL containers, whose allocators can throw bad_alloc. We should probably have a try/catch at each entry point, catching bad allocs and returning a corresponding leveldb::Status.

Support for readonly mode

It would be very helpful for our use case if classic-level were to support a readonly mode, where the database can be opened by multiple processes without compaction on startup, when those accessing the database do not need to make any changes.

There is an open issue for this feature on the upstream project already, though I believe that project may be feature-locked. I created this issue in the hope that it may see some traction if such a feature is possible to implement at the classic-level layer.
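
Something like the following, where the readonly option is hypothetical and not part of the current API:

const { ClassicLevel } = require('classic-level')
const db = new ClassicLevel('/shared/path/db', { readonly: true }) // hypothetical option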

Segmentation fault on Alpine Linux in Docker on an ARM Mac

I've run into a segmentation fault on Alpine Linux running in Docker on an ARM Mac.

import { ClassicLevel } from 'classic-level';

const level = new ClassicLevel('./level', {
  keyEncoding: 'buffer',
  valueEncoding: 'buffer',
});

await level.put(Buffer.from('01', 'hex'), Buffer.from('01', 'hex'));

console.log('Done!');

FROM node:16-alpine

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app

ENV NODE_ENV production
COPY package* /usr/src/app/
RUN npm ci
COPY . /usr/src/app/

CMD ["npm", "start"]
% docker build -t test . 
% docker run test

> [email protected] start
> node index.js

npm ERR! path /usr/src/app
npm ERR! command failed
npm ERR! signal SIGSEGV
npm ERR! command sh -c node index.js

npm ERR! A complete log of this run can be found in:
npm ERR!     /root/.npm/_logs/2022-05-26T19_40_15_807Z-debug-0.log

Error While Installing Hardhat

Hi all,
I'm facing an issue while installing Hardhat using yarn add --dev hardhat; there is an error:

error ./node_modules/classic-level: Command failed.
Exit code: 127
Command: node-gyp-build

I even tried using npm and got the same error.
The npm version of the error:

npm ERR! code 127
npm ERR! path ./node_modules/classic-level
npm ERR! command failed
npm ERR! command sh -c node-gyp-build
npm ERR! sh: 1: node-gyp-build: not found

My Node version: v18.14.0

Seek while next is executing

I'm a little unsure, but it seems the behavior of seeking while next is executing is somewhat undefined.

Seeking is done in the main thread and next in the worker thread, so the order is currently possibly undefined, i.e. a seek issued after next can affect the result of that next.
