
CDC File Transfer

Born from the ashes of Stadia, this repository contains tools for syncing and streaming files from Windows to Windows or Linux. The tools are based on Content Defined Chunking (CDC), in particular FastCDC, to split up files into chunks.

History

At Stadia, game developers had access to Linux cloud instances to run games. Most developers wrote their games on Windows, though. Therefore, they needed a way to make them available on the remote Linux instance.

As developers had SSH access to those instances, they could use scp to copy the game content. However, this was impractical, especially after the shift to working from home during the pandemic with sub-par internet connections: scp always copies full files; there is no "delta mode" to copy only what changed; it is slow for many small files; and it offers no fast compression.

To help this situation, we developed two tools, cdc_rsync and cdc_stream, which enable developers to quickly iterate on their games without repeatedly incurring the cost of transmitting dozens of GBs.

CDC RSync

cdc_rsync is a tool to sync files from a Windows machine to a Linux device, similar to the standard Linux rsync. It is basically a copy tool, but optimized for the case where there is already an old version of the files available in the target directory.

  • It quickly skips files if timestamp and file size match.
  • It uses fast compression for all data transfer.
  • If a file changed, it determines which parts changed and only transfers the differences.

cdc_rsync demo

The remote diffing algorithm is based on CDC. In our tests, it is up to 30x faster than the one used in rsync (1500 MB/s vs 50 MB/s).

The following chart shows a comparison of cdc_rsync and Linux rsync running under Cygwin on Windows. The test data consists of 58 development builds of a game provided to us for evaluation purposes. The builds are 40-45 GB in size. For this experiment, we uploaded the first build, then synced each subsequent build with each of the two tools and measured the time. For example, syncing from build 1 to build 2 took 210 seconds with the Cygwin rsync, but only 75 seconds with cdc_rsync. The three outliers are probably feature drops from another development branch, where the delta was much higher. Overall, cdc_rsync syncs files about 3 times faster than Cygwin rsync.

Comparison of cdc_rsync and Linux rsync running in Cygwin

We also ran the experiment with the native Linux rsync, i.e. syncing Linux to Linux, to rule out issues with Cygwin. Linux rsync performed on average 35% worse than Cygwin rsync, which can be attributed to CPU differences. We did not include it in the figure because of this, but you can find it here.

How does it work and why is it faster?

The standard Linux rsync splits a file into fixed-size chunks of typically several KB.

Linux rsync uses fixed size chunks

If the file is modified in the middle, e.g. by inserting xxxx after 567, this usually means that the modified chunks as well as all subsequent chunks change.

Fixed size chunks after inserting data

The standard rsync algorithm hashes the chunks of the remote "old" file and sends the hashes to the local device. The local device then figures out which parts of the "new" file match known chunks.

Syncing a file with the standard Linux rsync
Standard rsync algorithm

This is a simplification. The actual algorithm is more complicated and uses two hashes, a weak rolling hash and a strong hash; see here for a great overview. What makes rsync relatively slow is the "no match" situation, where the rolling hash does not match any remote hash and the algorithm has to roll the hash forward and perform a hash map lookup for each byte. rsync goes to great lengths to optimize these lookups.
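The rolling-hash idea can be illustrated with a small Adler-style checksum in Python. This is a sketch for intuition only, not rsync's exact checksum:

```python
def weak_hash(block):
    """Adler-style weak checksum over a block (illustrative only)."""
    a = sum(block) % 65536
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % 65536
    return a, b

def roll(a, b, out_byte, in_byte, block_len):
    """O(1) update of (a, b) when the window slides forward by one byte."""
    a = (a - out_byte + in_byte) % 65536
    b = (b - block_len * out_byte + a) % 65536
    return a, b
```

In the "no match" case, rsync must do a `roll` step plus a hash map lookup for every single input byte, which is where the time goes.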

cdc_rsync does not use fixed-size chunks, but instead variable-size, content-defined chunks. That means chunk boundaries are determined by the local content of the file, in practice by a 64-byte sliding window. For more details, see the FastCDC paper or take a look at our implementation.

cdc_rsync uses variable, content-defined size chunks

If the file is modified in the middle, only the modified chunks, but not subsequent chunks change (unless they are less than 64 bytes away from the modifications).

Content-defined chunks after inserting data

Computing the chunk boundaries is cheap and involves only a left-shift, a memory lookup, an add, and an and operation per input byte. This is cheaper than the per-byte hash map lookup of the standard rsync algorithm.
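The per-byte update can be sketched in Python as follows. This is a minimal illustration of gear-hash chunking, not the actual cdc_rsync implementation; the GEAR table, mask, and size limits here are made up for the example:

```python
import hashlib

# Illustrative 256-entry table of pseudo-random 64-bit values; a real
# implementation ships a fixed table.
GEAR = [int.from_bytes(hashlib.sha256(bytes([i])).digest()[:8], "big")
        for i in range(256)]

MASK64 = (1 << 64) - 1

def cdc_chunks(data, mask=(1 << 8) - 1, min_size=64, max_size=4096):
    """Split data into content-defined chunks with a gear rolling hash."""
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        # Per input byte: a left-shift, a table lookup, an add, and an and.
        h = ((h << 1) + GEAR[byte]) & MASK64
        size = i - start + 1
        if (size >= min_size and (h & mask) == 0) or size >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Note that the left-shift gives the gear hash an implicit 64-byte sliding window: each byte's contribution is shifted out of the 64-bit state after 64 steps.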

Because of this, the cdc_rsync algorithm is faster than the standard rsync. It is also simpler. Since chunk boundaries move along with insertions or deletions, the task to match local and remote hashes is a trivial set difference operation. It does not involve a per-byte hash map lookup.
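The set-difference matching can be sketched as follows. The hash choice and the operation names are hypothetical, purely for illustration, and not the tool's actual wire format:

```python
import hashlib

def chunk_hash(chunk):
    # Hypothetical strong hash; shown only to make the sketch runnable.
    return hashlib.blake2b(chunk, digest_size=16).digest()

def plan_transfer(local_chunks, remote_hashes):
    """Decide per chunk whether to send its data or reference the remote copy."""
    ops = []
    for chunk in local_chunks:
        h = chunk_hash(chunk)
        if h in remote_hashes:
            ops.append(("reuse", h))       # remote side already has this chunk
        else:
            ops.append(("send", chunk))    # transfer literal data
    return ops
```

Each local chunk needs one set lookup in total, rather than one hash map lookup per byte.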

Syncing a file with cdc_rsync
cdc_rsync algorithm

CDC Stream

cdc_stream is a tool to stream files and directories from a Windows machine to a Linux device. Conceptually, it is similar to sshfs, but it is optimized for read speed.

  • It caches streamed data on the Linux device.
  • If a file is re-read on Linux after it changed on Windows, only the differences are streamed again. The rest is read from the cache.
  • Stat operations are very fast since the directory metadata (filenames, permissions etc.) is provided in a streaming-friendly way.

To efficiently determine which parts of a file changed, the tool uses the same CDC-based diffing algorithm as cdc_rsync. Changes to Windows files are almost immediately reflected on Linux, with a delay of roughly (0.5s + 0.7s x total size of changed files in GB).
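As a purely illustrative encoding of this rule of thumb:

```python
def estimated_stream_delay_seconds(changed_gb):
    # Rule of thumb from the text: ~0.5 s base latency
    # plus ~0.7 s per GB of changed files.
    return 0.5 + 0.7 * changed_gb
```

For example, a 2 GB change would be reflected on Linux after roughly 1.9 seconds.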

cdc_stream demo

The tool does not support writing files back from Linux to Windows; the Linux directory is read-only.

The following chart compares times from starting a game to reaching the menu. In one case, the game is streamed via sshfs, in the other case we use cdc_stream. Overall, we see a 2x to 5x speedup.

Comparison of cdc_stream and sshfs

Supported Platforms

cdc_rsync (From/To):

  • Windows x86_64 1
  • Ubuntu 22.04 x86_64 2
  • Ubuntu 22.04 aarch64
  • macOS 13 x86_64 3
  • macOS 13 aarch64 3

cdc_stream (From/To):

  • Windows x86_64
  • Ubuntu 22.04 x86_64
  • Ubuntu 22.04 aarch64
  • macOS 13 x86_64 3
  • macOS 13 aarch64 3

1 Only local syncs, e.g. cdc_rsync C:\src\* C:\dst. Support for remote syncs is being added, see #61.
2 See #56.
3 See #62.

Getting Started

Download the precompiled binaries from the latest release to a Windows device and unzip them. The Linux binaries are automatically deployed to ~/.cache/cdc-file-transfer by the Windows tools; there is no need to deploy them manually. We currently provide Linux binaries compiled on GitHub's latest Ubuntu version. If those binaries work for you, you can skip the following two sections.

Alternatively, the project can be built from source. Some binaries have to be built on Windows, some on Linux.

Prerequisites for Building

To build the tools from source, the following steps have to be executed on both Windows and Linux.

  • Download and install Bazel from here. See workflow logs for the currently used version.
  • Clone the repository.
    git clone https://github.com/google/cdc-file-transfer
    
  • Initialize submodules.
    cd cdc-file-transfer
    git submodule update --init --recursive
    

Finally, install an SSH client on the Windows machine if not present. The file transfer tools require ssh.exe and sftp.exe.

Building

The two tools CDC RSync and CDC Stream can be built and used independently.

CDC RSync

  • On a Linux device, build the Linux components
    bazel build --config linux --compilation_mode=opt --linkopt=-Wl,--strip-all --copt=-fdata-sections --copt=-ffunction-sections --linkopt=-Wl,--gc-sections //cdc_rsync_server
    
  • On a Windows device, build the Windows components
    bazel build --config windows --compilation_mode=opt --copt=/GL //cdc_rsync
    
  • Copy the Linux build output file cdc_rsync_server from bazel-bin/cdc_rsync_server to bazel-bin\cdc_rsync on the Windows machine.

CDC Stream

  • On a Linux device, build the Linux components
    bazel build --config linux --compilation_mode=opt --linkopt=-Wl,--strip-all --copt=-fdata-sections --copt=-ffunction-sections --linkopt=-Wl,--gc-sections //cdc_fuse_fs
    
  • On a Windows device, build the Windows components
    bazel build --config windows --compilation_mode=opt --copt=/GL //cdc_stream
    
  • Copy the Linux build output files cdc_fuse_fs and libfuse.so from bazel-bin/cdc_fuse_fs to bazel-bin\cdc_stream on the Windows machine.

Usage

The tools require a setup where you can use SSH and SFTP from the Windows machine to the Linux device without entering a password, e.g. by using key-based authentication.

Configuring SSH and SFTP

By default, the tools search for ssh.exe and sftp.exe in the PATH environment variable. If you can run the following commands in a Windows cmd without entering your password, you are all set:

ssh user@linux.device.com
sftp user@linux.device.com

Here, user is the Linux user and linux.device.com is the Linux host to SSH into or copy the file to.

If additional arguments are required, it is recommended to provide an SSH config file. By default, both ssh.exe and sftp.exe use the file at %USERPROFILE%\.ssh\config on Windows, if it exists. A possible config file that sets a username, a port, an identity file and a known host file could look as follows:

Host linux_device
	HostName linux.device.com
	User user
	Port 12345
	IdentityFile C:\path\to\id_rsa
	UserKnownHostsFile C:\path\to\known_hosts

If ssh.exe or sftp.exe cannot be found, you can specify the full paths via the command line arguments --ssh-command and --sftp-command for cdc_rsync and cdc_stream start (see below), or set the environment variables CDC_SSH_COMMAND and CDC_SFTP_COMMAND, e.g.

set CDC_SSH_COMMAND="C:\path with space\to\ssh.exe"
set CDC_SFTP_COMMAND="C:\path with space\to\sftp.exe"

Note that you can also specify SSH configuration via the environment variables instead of using a config file:

set CDC_SSH_COMMAND=C:\path\to\ssh.exe -p 12345 -i C:\path\to\id_rsa -oUserKnownHostsFile=C:\path\to\known_hosts
set CDC_SFTP_COMMAND=C:\path\to\sftp.exe -P 12345 -i C:\path\to\id_rsa -oUserKnownHostsFile=C:\path\to\known_hosts

Note the lowercase -p for ssh.exe and the uppercase -P for sftp.exe.

Google Specific

For Google internal usage, set the following environment variables to enable SSH authentication using a Google security key:

set CDC_SSH_COMMAND=C:\gnubby\bin\ssh.exe
set CDC_SFTP_COMMAND=C:\gnubby\bin\sftp.exe

Note that you will have to touch the security key multiple times during the first run. Subsequent runs only require a single touch.

CDC RSync

cdc_rsync is used similarly to scp or the Linux rsync command. To sync a single Windows file C:\path\to\file.txt to the home directory ~ on the Linux device linux.device.com, run

cdc_rsync C:\path\to\file.txt user@linux.device.com:~

cdc_rsync understands the usual Windows wildcards * and ?.

cdc_rsync C:\path\to\*.txt user@linux.device.com:~

To sync the contents of the Windows directory C:\path\to\assets recursively to ~/assets on the Linux device, run

cdc_rsync C:\path\to\assets\* user@linux.device.com:~/assets -r

To get per file progress, add -v:

cdc_rsync C:\path\to\assets\* user@linux.device.com:~/assets -vr

The tool also supports local syncs:

cdc_rsync C:\path\to\assets\* C:\path\to\destination -vr

CDC Stream

To stream the Windows directory C:\path\to\assets to ~/assets on the Linux device, run

cdc_stream start C:\path\to\assets user@linux.device.com:~/assets

This immediately makes all files and directories in C:\path\to\assets available at ~/assets, as if it were a local copy. However, data is streamed from Windows to Linux only as files are accessed.

To stop the streaming session, enter

cdc_stream stop user@linux.device.com:~/assets

The command also accepts wildcards. For instance,

cdc_stream stop user@*:*

stops all existing streaming sessions for the given user.

Troubleshooting

On first run, cdc_stream starts a background service, which does all the work. The cdc_stream start and cdc_stream stop commands are just RPC clients that talk to the service.

The service logs to %APPDATA%\cdc-file-transfer\logs by default. The logs are useful to investigate issues with asset streaming. To pass custom arguments, or to debug the service, create a JSON config file at %APPDATA%\cdc-file-transfer\cdc_stream.json with command line flags. For instance,

{ "verbosity":3 }

instructs the service to log debug messages. Try cdc_stream start-service -h for a list of available flags. Alternatively, run the service manually with

cdc_stream start-service

and pass the flags as command line arguments. When you run the service manually, the flag --log-to-stdout is particularly useful as it logs to the console instead of to the file.

cdc_rsync always logs to the console. To increase log verbosity, pass -vvv for debug logs or -vvvv for verbose logs.

For both sync and stream, the debug logs contain all SSH and SFTP commands the tools attempt to run, which is very useful for troubleshooting. If a command fails unexpectedly, copy it and run it in isolation. Pass -vv or -vvv for additional debug output.

cdc-file-transfer's People

Contributors: ayushgoel, chrschng, dbaarda, ljusten, patriosthegreat, pcc, timotk, wurwunchik


cdc-file-transfer's Issues

Get rid of shell scripting in ssh commands

SSH is like a box of chocolates. You never know which shell you get.

On Windows, you may get cmd or powershell.
On Linux, you may get bash, zsh, fish,...

The different shells are not compatible, and it's very hard to write scripts that run in every shell, so we should get rid of them if possible. For instance, in bash you can write:

if true; then
   echo True
else
   echo False
fi

In fish, this becomes:

if true
   echo True
else
   echo False
end

We currently have 2 places that run non-trivial shell scripts:

  • CdcRsyncClient::StartServer() runs "if file does not exist, exit some_code, otherwise run it". The conditional is just a polish thing, so we can hide scary looking "command not found" errors and write "Server not deployed. Deploying...", but it can probably be solved in some other way.
  • PortManagerWin::FindAvailableRemotePorts() runs "if ss exists, run it, otherwise run netstat"*. We should find available ports in C++ code.

(*) change not landed yet as of Jan 17, 2023

Problem with compiling on SLES 12sp5 with gcc7

Hi team!
I have a problem compiling on SLES 12sp5 with gcc7 installed, using the procedure described here:

linux-kazu:~/cdc-file-transfer # bazel build --config linux --compilation_mode=opt --linkopt=-Wl,--strip-all --copt=-fdata-sections --copt=-ffunction-sections --linkopt=-Wl,--gc-sections //cdc_rsync_server
Starting local Bazel server and connecting to it...
INFO: Analyzed target //cdc_rsync_server:cdc_rsync_server (46 packages loaded, 1269 targets configured).
INFO: Found 1 target...
ERROR: /root/.cache/bazel/_bazel_root/770fd58f11d597af46d8de95d1f6a2d3/external/com_google_protobuf/BUILD:470:10: Compiling src/google/protobuf/compiler/main.cc failed: undeclared inclusion(s) in rule '@com_google_protobuf//:protoc':
this rule is missing dependency declarations for the following files included by 'src/google/protobuf/compiler/main.cc':
  '/usr/lib64/gcc/x86_64-suse-linux/7/include/stdarg.h'
  '/usr/lib64/gcc/x86_64-suse-linux/7/include/stddef.h'
  '/usr/lib64/gcc/x86_64-suse-linux/7/include/stdint.h'
  '/usr/lib64/gcc/x86_64-suse-linux/7/include-fixed/limits.h'
  '/usr/lib64/gcc/x86_64-suse-linux/7/include-fixed/syslimits.h'
Target //cdc_rsync_server:cdc_rsync_server failed to build
Use --verbose_failures to see the command lines of failed build steps.
INFO: Elapsed time: 10.227s, Critical Path: 0.77s
INFO: 2 processes: 2 internal.
FAILED: Build did NOT complete successfully

Do you have any suggestions?

[cdc_rsync] Fails to overwrite directory with a file

Repro:

  • Non-empty folder exists in /path/to/foo
  • cdc_rsync.exe foo user@host:/path/to/ (where foo is a file)

This will report this error:

Error: Server returned error: remove() failed: Directory not empty.; Failed to remove folder '/path/to/test' before creating file '/mnt/developer/test'; Failed to copy files

Also, this error is not properly displayed in Visual Studio. VS just reports that sync failed with exit status 1.

Remove git submodules

Surprised you'd need this with a bazel build, but maybe there's something on the windows side which is forcing this. Anyway it would be nice not to involve git submodules if possible.

[common] Fix running file_finder_test on Linux

file_finder_test uses test data. When the test is run with bazel test, the test files are created as symlinks, but our test ignores symlinks. Instead, create the test files manually in tmp.
Also, add the test back to .github/workflows/*.yml.

Benchmark against WSL2 and native Linux

The README describes a benchmark against Cygwin rsync as "linux rsync" which is true in some ways, but Cygwin is especially slow for I/O workloads (and slow in general). A more appropriate comparison would be comparing against WSL2 (or even WSL) rsync in a Windows environment. It would also be an interesting data point (though obviously orthogonal to the very use case of this tool) to see how it performs when compared against rsync in a fully native Linux environment.

EOF detected; Failed to receive packet of size 4

Microsoft Windows [Version 10.0.22621.2134]
d:\cdc>cdc_rsync.exe -vn c:\temp\* c:\tmp\
Port 6468: Server is listening
3 file(s) and 0 folder(s) found
EOF detected; Failed to receive packet of size 4; Failed to receive packet; Message pump thread is down; Failed to dequeue packet; Failed to receive SetOptionsRequest; Failed to receive options

Patching of file isn't working

I'm testing the behavior when a file hasn't changed by size or date but I'd still like to be able to break it up by CDC and patch. (Using the -c checksum flag to ensure it breaks the file up by chunks and look for existing chunks on the server.)

What I'm finding is that, even though a file hasn't changed, it's not finding any matching chunks on the server and ends up transmitting the entire file still.

It's clearly returning the right number of offsets and hashes from the server. But in TryAddChunks() where it searches for matching chunk.hash it's always coming back false.

This is with syncing a file that was just synced on the previous run.

Wondering if I'm doing something wrong or if there's some way the hashing algorithm differs between client (Windows 10, Intel Xeon) and server (Ubuntu, Threadripper) I was planning on verifying that next but wanted to ask whether you've seen this before.

Thanks

[cdc_stream] [cdc_rsync] Remove port args if port is not set

Right now, we always pass -p to ssh and -P to scp for ports. Don't do this if the SSH port is unset, so that ssh and scp use the default port 22. That's also the default port we set, but it's confusing if the port is also set in CDC_SSH_COMMAND and CDC_SCP_COMMAND as then the port is set twice. Note that the FIRST argument wins!

Edit: For ssh the first argument wins. For scp the second argument wins ⭕

fastcdc implementation sets chunk boundaries before last gear-hashed byte.

The fastcdc implementation adds data[i] to the gear rollsum, checks if the hash meets the chunk boundary criteria, and then returns i as the chunk length. Since data is zero-indexed, this means the last byte included in the rollsum that met the boundary criteria is not included at the end of the chunk, but ends up at the start of the next chunk.

This has an interesting degenerate case; if you take all the identified chunks for a file and re-arrange them into a different order, then cdc-file-transfer will find nearly zero duplicate chunks, even though every single chunk is duplicated. The only duplicates it will find are the ones where the byte after the chunk happens to be the same in both files.

In practice this probably rarely happens, as changes would rarely align exactly on the edge of chunk boundaries, but it would be a trivial fix to avoid this degenerate case.

I don't know how important preserving the chunking behaviour is for backwards compatibility... are "signatures" of the chunk lengths/hashes stored and used later, possibly by different versions?

I can easily put together a pull request to fix this if backwards-compatibility is not a problem.

Can cdc_rsync be used as a local Windows rsync tool?

I saw the following in another issue thread :

adding support for Windows to Windows cdc_rsync (including local syncs, i.e. not over network), see
main...sync_windows

Between the speed improvements you highlight from this tool and the fact that Windows doesn't have native support for rsync, I would love to use this tool to make backups for local drives.

But I wasn't able to understand the status of that work. Is the current status of the sync_windows (or win_support) branch at the point to support local Win rsync?

If the support comes, would it work similarly to the following format? cdc_rsync E:\ F:\MyDrive\ -rv

[cdc_rsync] [cdc_stream] Use sftp for deployment

sftp allows batching remote copy commands together with rename, chmod, and mkdir commands; see https://man.openbsd.org/sftp. This way, the remote components can be deployed atomically with one sftp command.

For instance, for cdc_rsync we currently

  • Create %appdata%\cdc-file-transfer\bin in the server start command (because scp can't mkdir)
  • Call scp to copy cdc_rsync_server to a temp location
  • Call ssh to chmod and rename the temp location to the actual location

sftp could do all of that in one command.

We probably have to write a local temp file with commands and call sftp -b (unless there's a way to pass commands via args).

[cdc_stream] Automatically start service

cdc_stream start should automatically start the service if it's not already running.
Optionally, if the last session stops, stop the service.

Note: The process must be created with the flag DETACHED_PROCESS.
Should be done after merging stream and the manager.

[cdc_rsync] [cdc_stream] Find better solution for checking remote forwarding port

Right now, cdc_rsync just uses a fixed forwarding port and fails if it's not available, whereas cdc_stream checks a range of 10 ports.

Suggestion:

  • Check remote forwarding port by default and add arg to pick fixed port.
  • Remove check_remote arg from PortManager::ReservePort and always assume true.
  • Add --forwarding_port arg, if set, don't use PortManager at all.

version `GLIBC_2.34' not found

Ubuntu 20.04

Server not deployed. Deploying...
/home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by /home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server)
/home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.33' not found (required by /home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server)
/home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by /home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server)
/home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server: /lib/x86_64-linux-gnu/libstdc++.so.6: version `GLIBCXX_3.4.29' not found (required by /home/magic/.cache/cdc-file-transfer/bin/cdc_rsync_server)
Error: Failed to deploy the instance components for unknown reasons. Please report this issue.

netstat is discouraged in modern Linux; try ss first in Linux

In RHEL 7+ and Ubuntu 17.04+, netstat is deprecated and not installed by default.
This program works in Alma 9 only after running sudo ln -s `which ss` /usr/local/bin/netstat.
The options and output of ss are so similar to netstat's that this program could use it as an alternative to netstat.

Document how cdc-file-transfer is installed

The README never actually says that you only have to download the binaries on Windows and run them, and that the Linux binaries are auto-deployed. Some users assume that the binaries have to be deployed on Linux manually.

Do port detection in cdc_rsync_server and cdc_fuse_fs instead of running netstat/ss

Running netstat/ss via ssh to find available ports is slow and not very robust. It would be better to detect ports in C++ code. One problem right now is when cdc_rsync_server is started, the port must be known as port forwarding is set up at the same time.

Instead of doing this:

  • Run netstat locally and remotely to find available ports
  • Run an ssh command that sets up port forwarding and executes cdc_rsync_server / cdc_fuse_fs
  • Connect to port

Do this:

  • Run an ssh command that executes cdc_rsync_server / cdc_fuse_fs
  • In cdc_rsync_server / cdc_fuse_fs, detect available ports (in C++ code!), print out "...is listening on port X" and block
  • In cdc_rsync / cdc_stream, read port, find available local port (in C++ code!), set up port forwarding and connect to port

Clean up mentions of gamelet and related terms

Replace gamelet/instance with destination or target.
Make sure the help text of cdc_rsync works for both local and remote copies.
Make sure cdc_rsync doesn't assume Windows (requires Windows -> Windows support).
