google-research / big_transfer

Official repository for the "Big Transfer (BiT): General Visual Representation Learning" paper.

Home Page: https://arxiv.org/abs/1912.11370

License: Apache License 2.0

Language: Python 100.00%
Topics: deep-learning convolutional-neural-networks imagenet tensorflow2 jax pytorch transfer-learning

big_transfer's Introduction

Big Transfer (BiT): General Visual Representation Learning

by Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, Joan Puigcerver, Jessica Yung, Sylvain Gelly, Neil Houlsby

Update 18/06/2021: We release new high-performing BiT-R50x1 models, distilled from BiT-M-R152x2; see the Distilled models section below. More details in our paper "Knowledge distillation: A good teacher is patient and consistent".

Update 08/02/2021: We also release ALL BiT-M models fine-tuned on ALL 19 VTAB-1k datasets, see below.

Introduction

In this repository we release multiple models from the Big Transfer (BiT): General Visual Representation Learning paper that were pre-trained on the ILSVRC-2012 and ImageNet-21k datasets. We provide code for fine-tuning the released models in the major deep learning frameworks: TensorFlow 2, PyTorch, and Jax/Flax.

We hope that the computer vision community will benefit from employing more powerful ImageNet-21k pre-trained models, as opposed to conventional models pre-trained on the ILSVRC-2012 dataset.

We also provide colabs for a more exploratory interactive use: a TensorFlow 2 colab, a PyTorch colab, and a Jax colab.

Installation

Make sure you have Python>=3.6 installed on your machine.

To set up TensorFlow 2, PyTorch, or Jax, follow the instructions provided in the corresponding framework's repository.

In addition, install the Python dependencies by running the command below (select tf2, pytorch, or jax):

pip install -r bit_{tf2|pytorch|jax}/requirements.txt

How to fine-tune BiT

First, download the BiT model. We provide models pre-trained on ILSVRC-2012 (BiT-S) or ImageNet-21k (BiT-M) for 5 different architectures: ResNet-50x1, ResNet-101x1, ResNet-50x3, ResNet-101x3, and ResNet-152x4.

For example, if you would like to download the ResNet-50x1 pre-trained on ImageNet-21k, run the following command:

wget https://storage.googleapis.com/bit_models/BiT-M-R50x1.{npz|h5}

Other models can be downloaded accordingly by plugging the name of the model (BiT-S or BiT-M) and the architecture into the above command. Note that we provide models in two formats: npz (for PyTorch and Jax) and h5 (for TF2). By default, we expect the model weights to be stored in the root folder of this repository.
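For example, the downloaded npz weights can then be loaded into the PyTorch model roughly as follows (a minimal sketch based on this repository's bit_pytorch.models API; the 10-class head size is an illustrative assumption):

import numpy as np
import torch
from bit_pytorch import models  # this repository's PyTorch model definitions

# Build BiT-M-R50x1 with a fresh head; zero_head zero-initializes it, as done for fine-tuning.
model = models.KNOWN_MODELS['BiT-M-R50x1'](head_size=10, zero_head=True)
model.load_from(np.load('BiT-M-R50x1.npz'))  # weights downloaded above, in the repo root
model.eval()

# Sanity check on a dummy batch; BiT accepts variable input resolutions.
with torch.no_grad():
    logits = model(torch.zeros(1, 3, 128, 128))
print(logits.shape)  # torch.Size([1, 10])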

Then, you can fine-tune the downloaded model on your dataset of interest in any of the three frameworks. All frameworks share the command-line interface:

python3 -m bit_{pytorch|jax|tf2}.train --name cifar10_`date +%F_%H%M%S` --model BiT-M-R50x1 --logdir /tmp/bit_logs --dataset cifar10

Currently, all frameworks will automatically download the CIFAR-10 and CIFAR-100 datasets. Other public or custom datasets can be easily integrated: in TF2 and Jax we rely on the extensible tensorflow_datasets library; in PyTorch, we use torchvision's data input pipeline. A sketch of the latter is shown below.
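For instance, a custom dataset could be plugged into the PyTorch pipeline roughly like this (a hedged sketch using torchvision's standard ImageFolder; the directory layout, resolutions, and the [-1, 1] normalization mirror the repository's CIFAR pipeline but are assumptions here):

import torchvision as tv

# Assumed layout: my_dataset/train/<class_name>/*.jpg
train_tx = tv.transforms.Compose([
    tv.transforms.Resize((160, 160)),      # pre-crop resolution, an assumption (see bit_hyperrule.py)
    tv.transforms.RandomCrop((128, 128)),
    tv.transforms.RandomHorizontalFlip(),
    tv.transforms.ToTensor(),
    tv.transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # maps pixels to [-1, 1]
])
train_set = tv.datasets.ImageFolder('my_dataset/train', train_tx)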

Note that our code uses all available GPUs for fine-tuning.

We also support training in the low-data regime: the --examples_per_class <K> option will randomly draw K samples per class for training.
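For example, to fine-tune on 5 samples per class (shown for PyTorch; the other frameworks are analogous):

python3 -m bit_pytorch.train --name cifar10_5shot_`date +%F_%H%M%S` --model BiT-M-R50x1 --logdir /tmp/bit_logs --dataset cifar10 --examples_per_class 5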

To see a detailed list of all available flags, run python3 -m bit_{pytorch|jax|tf2}.train --help.

BiT-M models fine-tuned on ILSVRC-2012

For convenience, we provide BiT-M models that were already fine-tuned on the ILSVRC-2012 dataset. The models can be downloaded by adding the -ILSVRC2012 postfix, e.g.

wget https://storage.googleapis.com/bit_models/BiT-M-R50x1-ILSVRC2012.npz

Available architectures

We release all architectures mentioned in the paper, such that you may choose between accuracy or speed: R50x1, R101x1, R50x3, R101x3, R152x4. In the above path to the model file, simply replace R50x1 by your architecture of choice.

We further investigated more architectures after the paper's publication and found R152x2 to have a nice trade-off between speed and accuracy, hence we also include this in the release and provide a few numbers below.

BiT-M models fine-tuned on the 19 VTAB-1k tasks

We also release the fine-tuned models for each of the 19 tasks included in the VTAB-1k benchmark. We ran each model three times and release each of these runs. This means we release a total of 5x19x3=285 models, and hope these can be useful in further analysis of transfer learning.

The files can be downloaded via the following pattern:

wget https://storage.googleapis.com/bit_models/vtab/BiT-M-{R50x1,R101x1,R50x3,R101x3,R152x4}-run{0,1,2}-{caltech101,diabetic_retinopathy,dtd,oxford_flowers102,oxford_iiit_pet,resisc45,sun397,cifar100,eurosat,patch_camelyon,smallnorb-elevation,svhn,dsprites-orientation,smallnorb-azimuth,clevr-distance,clevr-count,dmlab,kitti-distance,dsprites-xpos}.npz

We did not convert these models to TF2 (hence there is no corresponding .h5 file); however, we also uploaded TF-Hub modules, which can be used in both TF1 and TF2. An example sequence of commands for downloading one such model is:

mkdir BiT-M-R50x1-run0-caltech101.tfhub && cd BiT-M-R50x1-run0-caltech101.tfhub
wget https://storage.googleapis.com/bit_models/vtab/BiT-M-R50x1-run0-caltech101.tfhub/{saved_model.pb,tfhub_module.pb}
mkdir variables && cd variables
wget https://storage.googleapis.com/bit_models/vtab/BiT-M-R50x1-run0-caltech101.tfhub/variables/variables.{data@1,index}
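Once downloaded, such a module can be loaded like any local TF-Hub SavedModel. A minimal sketch using the standard tensorflow_hub API (treating the module as a Keras layer is an assumption about its exported signature):

import tensorflow as tf
import tensorflow_hub as hub

# Load the module from the local directory created above.
bit_module = hub.KerasLayer('BiT-M-R50x1-run0-caltech101.tfhub')

# Wrap it in a Keras model for inference or further fine-tuning.
images = tf.keras.Input(shape=(None, None, 3))
outputs = bit_module(images)
model = tf.keras.Model(images, outputs)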

Hyper-parameters

For reproducibility, our training script uses the hyper-parameters (BiT-HyperRule) that were used in the original paper. Note, however, that BiT models were trained and fine-tuned on Cloud TPU hardware, so for a typical GPU setup our default hyper-parameters may require too much memory or result in very slow progress. Moreover, BiT-HyperRule is designed to generalize across many datasets, so it is typically possible to devise more efficient application-specific hyper-parameters. Thus, we encourage the user to try more lightweight settings, as they require far fewer resources and often yield similar accuracy.

For example, we tested our code on an 8xV100 GPU machine on the CIFAR-10 and CIFAR-100 datasets, while reducing the batch size from 512 to 128 and the learning rate from 0.003 to 0.001. This setup resulted in nearly identical performance (see Expected results below) compared to BiT-HyperRule, despite being less computationally demanding.
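In command form, this lighter setup corresponds to (the --batch and --base_lr flags are the ones referenced in the Expected results section below):

python3 -m bit_pytorch.train --name cifar10_`date +%F_%H%M%S` --model BiT-M-R50x1 --logdir /tmp/bit_logs --dataset cifar10 --batch 128 --base_lr 0.001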

Below, we provide more suggestions on how to optimize our paper's setup.

Tips for optimizing memory or speed

The default BiT-HyperRule was developed on Cloud TPUs and is quite memory-hungry. This is mainly due to the large batch-size (512) and image resolution (up to 480x480). Here are some tips if you are running out of memory:

  1. In bit_hyperrule.py we specify the input resolution. By reducing it, one can save a lot of memory and compute, at the expense of accuracy.
  2. The batch size can be reduced in order to reduce memory consumption. However, one then also needs to adjust the learning rate and schedule (steps) in order to maintain the desired accuracy.
  3. The PyTorch codebase supports a batch-splitting technique ("micro-batching") via the --batch_split option. For example, running the fine-tuning with --batch_split 8 reduces the memory requirement by a factor of 8; see the sketch below.
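Micro-batching amounts to standard gradient accumulation. A minimal PyTorch sketch of the idea (illustrative only, not the repository's exact implementation; model, optimizer, and the full batch of images/labels are assumed to exist):

import torch

def train_step(model, optimizer, images, labels, batch_split):
    # One optimizer step over a full batch, computed in batch_split chunks.
    criterion = torch.nn.CrossEntropyLoss()
    optimizer.zero_grad()
    for img_chunk, lbl_chunk in zip(images.chunk(batch_split), labels.chunk(batch_split)):
        loss = criterion(model(img_chunk), lbl_chunk)
        (loss / batch_split).backward()  # scale so the accumulated gradient matches the full batch
    optimizer.step()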

Expected results

We verified that when using the BiT-HyperRule, the code in this repository reproduces the paper's results.

CIFAR results (few-shot and full)

For these common benchmarks, the aforementioned changes to the BiT-HyperRule (--batch 128 --base_lr 0.001) lead to the following, very similar results. The table shows the min←median→max result of at least five runs. NOTE: This is not a comparison of frameworks, just evidence that all code-bases can be trusted to reproduce results.

BiT-M-R101x3

Dataset Ex/cls TF2 Jax PyTorch
CIFAR10 1 52.5 ← 55.8 → 60.2 48.7 ← 53.9 → 65.0 56.4 ← 56.7 → 73.1
CIFAR10 5 85.3 ← 87.2 → 89.1 80.2 ← 85.8 → 88.6 84.8 ← 85.8 → 89.6
CIFAR10 full 98.5 98.4 98.5 ← 98.6 → 98.6
CIFAR100 1 34.8 ← 35.7 → 37.9 32.1 ← 35.0 → 37.1 31.6 ← 33.8 → 36.9
CIFAR100 5 68.8 ← 70.4 → 71.4 68.6 ← 70.8 → 71.6 70.6 ← 71.6 → 71.7
CIFAR100 full 90.8 91.2 91.1 ← 91.2 → 91.4

BiT-M-R152x2

Dataset Ex/cls Jax PyTorch
CIFAR10 1 44.0 ← 56.7 → 65.0 50.9 ← 55.5 → 59.5
CIFAR10 5 85.3 ← 87.0 → 88.2 85.3 ← 85.8 → 88.6
CIFAR10 full 98.5 98.5 ← 98.5 → 98.6
CIFAR100 1 36.4 ← 37.2 → 38.9 34.3 ← 36.8 → 39.0
CIFAR100 5 69.3 ← 70.5 → 72.0 70.3 ← 72.0 → 72.3
CIFAR100 full 91.2 91.2 ← 91.3 → 91.4

(TF2 models not yet available.)

BiT-M-R50x1

Dataset Ex/cls TF2 Jax PyTorch
CIFAR10 1 49.9 ← 54.4 → 60.2 48.4 ← 54.1 → 66.1 45.8 ← 57.9 → 65.7
CIFAR10 5 80.8 ← 83.3 → 85.5 76.7 ← 82.4 → 85.4 80.3 ← 82.3 → 84.9
CIFAR10 full 97.2 97.3 97.4
CIFAR100 1 35.3 ← 37.1 → 38.2 32.0 ← 35.2 → 37.8 34.6 ← 35.2 → 38.6
CIFAR100 5 63.8 ← 65.0 → 66.5 63.4 ← 64.8 → 66.5 64.7 ← 65.5 → 66.0
CIFAR100 full 86.5 86.4 86.6

ImageNet results

These results were obtained using BiT-HyperRule. However, because this implies a large batch size and a large resolution, memory can be an issue. The PyTorch code supports batch-splitting, so we can still run things there without resorting to Cloud TPUs by adding the --batch_split N flag, where N is a power of two. For instance, the following command produces a validation accuracy of 80.68 on a machine with 8 V100 GPUs:

python3 -m bit_pytorch.train --name ilsvrc_`date +%F_%H%M%S` --model BiT-M-R50x1 --logdir /tmp/bit_logs --dataset imagenet2012 --batch_split 4

Further increase to --batch_split 8 when running with 4 V100 GPUs, etc.

Full results achieved that way in some test runs were:

Ex/cls R50x1 R152x2 R101x3
1 18.36 24.5 25.55
5 50.64 64.5 64.18
full 80.68 85.15 WIP

VTAB-1k results

These are re-runs and not the exact paper models. The expected VTAB scores for two of the models are:

Model Full Natural Structured Specialized
BiT-M-R152x4 73.51 80.77 61.08 85.67
BiT-M-R101x3 72.65 80.29 59.40 85.75

Out of context dataset

In Appendix G of our paper, we investigate whether BiT improves out-of-context robustness. To do this, we created a dataset comprising foreground objects corresponding to 21 ILSVRC-2012 classes pasted onto 41 miscellaneous backgrounds.

To download the dataset, run

wget https://storage.googleapis.com/bit-out-of-context-dataset/bit_out_of_context_dataset.zip

Images from each of the 21 classes are kept in a directory with the name of the class.

Distilled models

We release top-performing compressed BiT models from our paper "Knowledge distillation: A good teacher is patient and consistent". In particular, we distill the BiT-M-R152x2 model (which was pre-trained on ImageNet-21k) into BiT-R50x1 models. As a result, we obtain compact models with very competitive performance.

Model Download link Resolution ImageNet top-1 acc. (paper)
BiT-R50x1 link 224 82.8
BiT-R50x1 link 160 80.5

For reproducibility, we also release weights of two BiT-M-R152x2 teacher models: pretrained at resolution 224 and resolution 384. See the paper for details on how these teachers were used.

Distillation code

We have no concrete plans for publishing the distillation code, as the recipe is simple and we imagine most people would integrate it in their existing training code. However, Sayak Paul has independently re-implemented the distillation setup in TensorFlow and nearly reproduced our results in several settings.
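For orientation, here is a minimal sketch of a typical distillation objective (our assumption of a standard setup, with the KL divergence between teacher and student predictions; this is not the authors' released code):

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=1.0):
    # KL divergence between the teacher's and student's predictive distributions.
    t = temperature
    teacher_probs = F.softmax(teacher_logits / t, dim=-1)
    student_log_probs = F.log_softmax(student_logits / t, dim=-1)
    # 'batchmean' matches the mathematical definition of KL divergence; the t*t factor
    # keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction='batchmean') * t * t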

big_transfer's People

Contributors

akolesnikoff, andresusanopinto, jessicayung, lucasb-eyer


big_transfer's Issues

AttributeError: Layer block2 has no inbound nodes.

I'm trying to build an FPN network using the ResnetV2 backbone as implemented in the TF2 models.py. I have appended the lines below to models.py:

if __name__ == "__main__":
	args_model = "BiT-M-R50x1"
	model = ResnetV2(
			num_units=NUM_UNITS[args_model],
			# num_outputs=21843,
			num_outputs=None,
			filters_factor=int(args_model[-1])*4,
			name="resnet",
			trainable=True,
			dtype=tf.float32)

	model.build((None, None, None, 3))
	
	# Get output layers.
	layer_names = ["block2", "block3", "block4"]
	layer_outputs = [model.get_layer(name).output for name in layer_names]
	for layer in layer_outputs:
		print(layer.shape)

I need to get the outputs of intermediate layers like block2, block3, and block4 in order to build the FPN pyramid, but when fetching the outputs of these layers I get the following error:

Traceback (most recent call last):
  File ".../big_transfer/bit_tf2/models.py", line 301, in <module>
    layer_outputs = [model.get_layer(name).output for name in layer_names]
  File ".../big_transfer/bit_tf2/models.py", line 301, in <listcomp>
    layer_outputs = [model.get_layer(name).output for name in layer_names]
  File "/home/himanshu/.conda/envs/tf2/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py", line 1827, in output
    raise AttributeError('Layer ' + self.name + ' has no inbound nodes.')
AttributeError: Layer block2 has no inbound nodes.

I want to use the pre-trained weights to initialize the FPN network later and hence didn't make any changes to the model code. Any suggestions on how I can solve this error?

Clarity on range normalization of inputs

The normalization range for input to the network is not clearly defined.

The tutorial says it should be normalized to the range [0, 1]:
features['image'] = tf.cast(features['image'], tf.float32) / 255.0

The training code, on the other hand, normalizes it to [-1, +1]:
im = (im - 127.5) / 127.5

Which of these is correct?
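For reference, the two conventions differ only by an affine map; a quick illustrative check (not part of the original thread):

import numpy as np

im = np.arange(0, 256, dtype=np.float32)
a = im / 255.0               # tutorial convention: [0, 1]
b = (im - 127.5) / 127.5     # training-code convention: [-1, 1]
assert np.allclose(b, 2.0 * a - 1.0)  # one is a shifted and scaled version of the other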

Loss function for pretraining

As mentioned in issue #26, the loss is sigmoid binary cross-entropy for each label. I have a few more questions about the loss:

  1. How are objects that are present in the image but missing from the labels accounted for or handled? For example: a picture contains a dog and a cat, but the label is only {dog}, not {cat, animal}.
  2. Is negative sampling done over all label classes for the negatives, or are all other classes taken as negatives?
    @lucasb-eyer @akolesnikoff @ebursztein @jessicayung @kolesman

Training my own dataset

Hi
What would be the data format and folder layout that I need to set up if I have my own dataset?

Thx

Request for more detail for pre-training on ImageNet-21k

Hi there!

I'd like to know the details of how to train the network on ImageNet-21k. I carefully checked the paper and this repository and discovered many details, but I cannot find which loss function is used for ImageNet-21k. In addition, I wonder how you process the multi-class labels for training. It would also be great if you could give details on how you handle the imbalanced data distribution of ImageNet-21k. Could you tell me a bit more about those?

Thanks,

Object detection pre-trained weights

First of all kudos to the great work.

In the official post on the Google AI blog, it is mentioned that BiT models pre-trained on large datasets perform better on the object detection task as well. But I couldn't find any details about object detection models in this repo.

Any plans to release object detection BiT models and a tutorial/colab for fine-tuning on custom datasets?

Cheers.

Fine tuning from specific layer number

Is there a way to specify a layer number and start training only from that layer with BiT using TensorFlow? Currently the number of layers shown is 2, and when I specify a layer number the accuracy drops to single digits. It looks like it is training from scratch. I'm looking for the correct way to fine-tune.

Reproducing problems

Hi,
Thanks for your great work on these two papers!
I am reproducing "Knowledge distillation: A good teacher is patient and consistent" using the Flowers102 dataset, but I cannot reach the accuracy you report for the best from-scratch ResNet-50 (66.38%). Could you tell me the hyper-parameters you used, such as the learning-rate schedule and the optimizer?
Thanks.

[doc] Clarification on the input normalisation format.

Thank you for sharing the code and models!

If I may make a small suggestion, I would recommend describing the input format in the hub description. Something like: the image input should be a float32 scaled to [-1, 1]. This might seem obvious, but some models normalize each channel independently with a custom mean/std.

I had to dig into the source code to find out how the image was normalized:

im = (im - 127.5) / 127.5

demo: BiT in (poké-)production for neural image search

Hi there, I made a demo using this BiT model for pokémon search. The result is pretty good: R50x1 gives reasonable performance in terms of efficiency (I set two replicas) and accuracy.

Check out the code here: https://github.com/jina-ai/examples/tree/master/pokedex-with-bit
You can run it directly via Docker; feel free to extend it or try your own photo to see which pokémon you are.

By the way, it seems that BiT does not work out of the box on TF 2.2.0; I hit a couple of errors. In the Docker image, I had to pin the TF version to 2.0.1 to make it work.

Invalid argument: slice index 0 of dimension 0 out of bounds.

The following is the error message that I got when fine-tuning on the CIFAR-100 dataset.

InvalidArgumentError: 2 root error(s) found.
(0) Invalid argument: slice index 0 of dimension 0 out of bounds.
[[{{node strided_slice_1}}]]
[[IteratorGetNext]]
[[IteratorGetNext/_2]]
(1) Invalid argument: slice index 0 of dimension 0 out of bounds.
[[{{node strided_slice_1}}]]
[[IteratorGetNext]]
0 successful operations.
0 derived errors ignored. [Op:__inference_train_function_141597]

Function call stack:
train_function -> train_function

Pre-trained Resnet v1 model

Hi,
Thanks for your great work! I noticed that all the released pre-trained ResNet models belong to the ResNet v2 series. It would be really helpful if you could release pre-trained ResNet v1 models as well, as the baselines for most downstream tasks (detection, segmentation) use ResNet v1 models as their encoders. Thank you.

Jax from flax.nn to flax.linen?

Hi, folks - thanks for the useful release!

flax.nn is now deprecated, so a move to flax.linen would help ensure this code remains future-compatible (and doesn't generate warnings, as it does now). I'm not sure whether you intend to maintain this or leave it as a snapshot of what the paper was based upon, but it's a nice resource to have out there.

Contribution

How can I contribute to this project? I have read the guidelines and filled out the CLA form, but I am still confused about how to contribute.

Input size for r50x1?

What is the input shape for the ResNet-50x1 model? Is it 224x224x3? Thanks a lot for answering!

PyTorch BiT-M-ResNet152x4 model bug

Hello,

I got unexpected behavior from BiT-M-ResNet152x4 in PyTorch. When I use BiT-M-R152x4-ILSVRC2012.npz on the ImageNet-1k dataset, most of the model outputs are NaN, so the resulting performance is very low. Given that the other models work well, I think there might be a bug in the pretrained weights of BiT-M-R152x4.

Mobilenetv2 & MobilenetV3 pretrained models !

Dear authors:
Thanks for the beneficial research work! Everyone doing CV will gain a lot from your pretrained models.
AFAIK, many people in industry are interested in mobile-friendly models. It would be great if you could publish some edge-device pretrained models, such as MobileNetV2 and MobileNetV3, which are Google's own models! Thank you!

Model taking entire GPU

The BiT model is lightweight, but it takes up the entire GPU memory. I need a way to free up memory.

Top-1 accuracy on few-shot tasks

Hi all,

I am reproducing results on few-shot tasks (CIFAR-10). After running train.py (setting examples_per_class to 5), I find that the accuracy reported in this repo (see the few-shot section) and in the paper is actually the top-5 accuracy. Am I correct?


Sorry for the confusing comments. I changed get_resolution when I ran the experiment, which is why my previous runs could not reach good top-1 accuracy. When I restored the get_resolution function, I could fully reproduce the top-1 accuracy.

Is it possible to speed up prediction?

Hi, I'm using a fine-tuned BiT model in my project, and I wonder whether there is any way to speed up the model's predictions. I want real-time output, but it seems quite hard. Thanks.

Release New Model

Release a new model trained on all images found in Google Images.

What is the purpose of the zero_head parameter?

if self.zero_head:

Hi there! I'm wondering what the purpose of this zero_head parameter is. It seems to me that if it is set to True, the weights of the head are initialized to zero, which causes the network to always output zeros for any input and renders any further fine-tuning of the model useless.

Should this be replaced with random initialization? Or maybe removed altogether, which would let PyTorch take care of initializing the head?

TF2 weights shape wrong for 2 architectures

I've tried to load all provided TF2 model weights and found that 2 of them could not be loaded:

  • BiT-S-R50x3.h5 : ValueError: Cannot assign to variable standardized_conv2d/kernel:0 due to variable shape (7, 7, 3, 192) and value shape (7, 7, 3, 64) are incompatible
  • BiT-S-R101x1.h5 : Cannot assign to variable standardized_conv2d/kernel:0 due to variable shape (7, 7, 3, 64) and value shape (7, 7, 3, 256) are incompatible

All other weights loaded without problems.
Sample code to reproduce: https://colab.research.google.com/drive/1s2QtVgrj2HrDs64xGMi_GsOaFR3i95v0?usp=sharing

BiT as Image Classification on Mobile Application

Is it possible to do transfer learning with BiT and then convert the model to the .tflite format? My goal is to use the TFLite model as an image classifier to provide confidence scores for images in a mobile application.

Extract Features

I read your paper. The upstream pre-training input is 224, and there are 128 and 480 inputs for fine-tuning in downstream tasks. If I want to use your pre-trained model to extract features, should my input be set to 224, or should I use a larger input of 300 or even 480?
Looking forward to your reply.

Unable to replicate published benchmark accuracy on cifar10 dataset

I tried to fine-tune the Big Transfer code on the CIFAR-10 dataset using the colab provided in the TF blog.

The dataset was split into 90% training and 10% validation.
The steps per epoch were modified according to the dataset's number of samples.
The BiT-HyperRule was kept as-is, without any modification.
The models used for fine-tuning were BiT-M-R50x1 and BiT-M-R101x3.

The accuracy of the model reached a maximum of 94.3% for BiT-M-R50x1, against the mentioned accuracy of 97.2.
The accuracy for BiT-M-R101x3 reached a maximum of 93.2, against the mentioned accuracy of 98.5, and no further improvement was observed even when running for more epochs.

Accuracy source: README.md

how to use my custom trained model

I fine-tuned BiT-R50 and saved it as bit.pth.tar:

torch.save({
    "step": step,
    "model": model.state_dict(),
    "optim": optim.state_dict(),
}, savename)

But I can't use it for testing like this:

checkpoint = torch.load(f'/srv/data/datasets/kyuhong/big_transfer/bit_logs/products/bit.pth.tar')
self.model.load_state_dict(checkpoint['model'])

How can I use my custom trained model?

Plan to release weights pretrained on JFT-300M dataset?

Thanks for releasing this - when I saw the paper I really hoped it would be open sourced.

It seems the biggest gains can come from using the models pretrained on your internal JFT-300M dataset. Are there plans to release weights from the models pretrained on this dataset?

Cheers!
