
kevinmusgrave / powerful-benchmarker

426 stars · 10 watchers · 44 forks · 24.99 MB

A library for ML benchmarking. It's powerful.

Shell 1.62% Jupyter Notebook 81.70% Python 16.69%
pytorch benchmarking computer-vision machine-learning deep-learning domain-adaptation metric-learning transfer-learning

powerful-benchmarker's People

Contributors

bryant1410 · kevinmusgrave


powerful-benchmarker's Issues

Bayesian optimization with custom dataset

It seems my registered API parser gets overwritten when using a custom dataset with Bayesian optimization.

ERROR:root:[AttributeError("module 'powerful_benchmarker.api_parsers' has no attribute 'APITwoStreamMetricLoss'",)]

import logging
import argparse
import TwoStreamDataset
import pytorch_metric_learning.utils.common_functions as c_f
from powerful_benchmarker import api_parsers
from pytorch_metric_learning import losses
logging.getLogger().setLevel(logging.INFO)

class APITwoStreamMetricLoss(api_parsers.BaseAPIParser):

    def get_tester_kwargs(self):
        trainer_kwargs = super().get_tester_kwargs()
        trainer_kwargs["data_and_label_getter"] = c_f.return_input
        return trainer_kwargs

    def get_trainer_kwargs(self):
        trainer_kwargs = super().get_trainer_kwargs()
        trainer_kwargs["data_and_label_getter"] = c_f.return_input
        return trainer_kwargs

parser = argparse.ArgumentParser(allow_abbrev=False)
parser.add_argument("--pytorch_home", type=str, default=None)
parser.add_argument("--dataset_root", type=str, default="/data/")
parser.add_argument("--root_experiment_folder", type=str, default="/home/experiments")
parser.add_argument("--global_db_path", type=str, default=None)
parser.add_argument("--merge_argparse_when_resuming", default=False, action='store_true')
parser.add_argument("--root_config_folder", type=str, default="./")
parser.add_argument("--bayes_opt_iters", type=int, default=50)
parser.add_argument("--reproductions", type=str, default="0")

args, _ = parser.parse_known_args()

from powerful_benchmarker.runners.bayes_opt_runner import BayesOptRunner
args.reproductions = [int(x) for x in args.reproductions.split(",")]
runner = BayesOptRunner

r = runner(**(args.__dict__))
r.register("dataset", TwoStreamDataset)
r.register("api_parser", APITwoStreamMetricLoss)
r.run()

Test run error in easy_module_attribute_getter: TypeError: object() takes no parameters

Hello,

I wanted to run several experiments on your benchmarking system, but unfortunately, the test run doesn't work.
(python3 run.py --experiment_name test1 --dataset {CUB200: {download: True}})
I use the following versions:
pytorch 1.4.0
record-keeper 0.9.25
powerful-benchmarker 0.9.25
easy-module-attribute-getter 0.9.37
pytorch-metric-learning 0.9.86
faiss-gpu 1.6.3

The error is:
INFO:root:Importing packages in single_experiment_runner
INFO:faiss:Loading faiss with AVX2 support.
INFO:root:Importing packages in base_runner
INFO:root:Done importing packages in base_runner
INFO:root:Done importing packages in single_experiment_runner
INFO:root:NUMPY_RANDOM = RandomState(MT19937)
Using downloaded and verified file: /home/mayr/Documents/datasets/cub2011/CUB_200_2011.tgz
INFO:root:Extracting dataset
100%|██████████| 12005/12005 [00:04<00:00, 2868.62it/s]
INFO:root:EMBEDDER MODEL MLP(
  (net): Sequential(
    (0): Linear(in_features=1024, out_features=128, bias=True)
  )
  (last_linear): Linear(in_features=1024, out_features=128, bias=True)
)
INFO:root:Setting dataset so that transform can be set
Traceback (most recent call last):
  File "run.py", line 28, in <module>
    r.run()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/runners/single_experiment_runner.py", line 19, in run
    self.run_new_experiment_or_resume(self.YR)
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/runners/single_experiment_runner.py", line 34, in run_new_experiment_or_resume
    return self.start_experiment(args)
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/runners/single_experiment_runner.py", line 23, in start_experiment
    run_output = api_parser.run()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 55, in run
    return self.run_train_or_eval()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 63, in run_train_or_eval
    self.run_for_each_split_scheme()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 84, in run_for_each_split_scheme
    self.set_models_optimizers_losses()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 383, in set_models_optimizers_losses
    self.set_transforms()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 199, in set_transforms
    transforms = self.get_transforms()
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 193, in get_transforms
    transforms[k] = self.pytorch_getter.get_composed_img_transform(v, **model_transform_properties)
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/easy_module_attribute_getter/pytorch_getter.py", line 39, in get_composed_img_transform
    return self.get_composed_transform(transform_dict)
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/easy_module_attribute_getter/pytorch_getter.py", line 27, in get_composed_transform
    augmentations.append(self.get("transform", k, param))
  File "/home/mayr/anaconda3/envs/dml/lib/python3.6/site-packages/easy_module_attribute_getter/easy_module_attribute_getter.py", line 24, in get
    return uninitialized(**params)
TypeError: object() takes no parameters

Do you know what I'm doing wrong?
Thanks for your help!

Bayesian optimization to tune hyperparameters coding problem.

python run.py --bayes_opt_iters 4 \
--loss_funcs~OVERRIDE~ {metric_loss: {MultiSimilarityLoss: {alpha~LOG_BAYESIAN~: [0.01, 100], beta~LOG_BAYESIAN~: [0.01, 100], base~BAYESIAN~: [0, 1]}}} \
--experiment_name cub_bayes_opt

I ran this command and it returned an error.

Traceback (most recent call last):
  File "run.py", line 37, in <module>
    r.run()
  File "/data/csd/anaconda3/envs/py37/lib/python3.7/site-packages/powerful_benchmarker/runners/bayes_opt_runner.py", line 121, in run
    self.create_accuracy_report(best_sub_experiment_name)
  File "/data/csd/anaconda3/envs/py37/lib/python3.7/site-packages/powerful_benchmarker/runners/bayes_opt_runner.py", line 187, in create_accuracy_report
    eval_record_group_dicts = dummy_api_parser.get_eval_record_name_dict(return_all=True)
  File "/data/csd/anaconda3/envs/py37/lib/python3.7/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 555, in get_eval_record_name_dict
    self.tester_obj = self.pytorch_getter.get("tester", self.args.testing_method, self.get_tester_kwargs())
  File "/data/csd/anaconda3/envs/py37/lib/python3.7/site-packages/powerful_benchmarker/api_parsers/base_api_parser.py", line 422, in get_tester_kwargs
    "data_and_label_getter": self.split_manager.data_and_label_getter,
AttributeError: 'BaseAPIParser' object has no attribute 'split_manager'

Inference func/script

Hi,
Has anyone implemented an inference function/script?
Given the output of an experiment (config files, weights file) and 2 images, it would return the distance between the images according to the model's configuration.
I implemented my own, but I believe I made a mistake that I can't track down.
Thanks,
Shai
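Not an official answer, but here is a minimal sketch of the kind of inference function being asked for. The backbone (ResNet-50), checkpoint layout, embedding size, and image transform are all assumptions and need to match the experiment's actual config.

# Hypothetical sketch only: file names, backbone choice, and transform are assumptions.
import torch
import torch.nn as nn
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

def load_models(trunk_path, embedder_path, embedding_size=128, device="cpu"):
    trunk = models.resnet50(pretrained=False)            # assumed backbone
    in_features = trunk.fc.in_features
    trunk.fc = nn.Identity()                              # expose pooled features
    embedder = nn.Linear(in_features, embedding_size)     # assumed embedder head
    # strict=False because the saved key names depend on how the experiment saved the models.
    trunk.load_state_dict(torch.load(trunk_path, map_location=device), strict=False)
    embedder.load_state_dict(torch.load(embedder_path, map_location=device), strict=False)
    trunk.eval()
    embedder.eval()
    return trunk.to(device), embedder.to(device)

def distance_between(img_a, img_b, trunk, embedder, device="cpu"):
    transform = T.Compose([
        T.Resize(256), T.CenterCrop(227), T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    batch = torch.stack([transform(Image.open(p).convert("RGB"))
                         for p in (img_a, img_b)]).to(device)
    with torch.no_grad():
        emb = embedder(trunk(batch))                      # 2 x embedding_size
    return torch.nn.functional.pairwise_distance(emb[0:1], emb[1:2]).item()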

AttributeError("module 'torchvision.datasets' has no attribute 'CUB200'")]

Hi, I get an error while trying to run the whole framework.
The configuration files are taken from cub200_old_approach_triplet_batch_all.
I'm not sure what went wrong (I have the dataset downloaded and the paths are fine).

Loading faiss with AVX2 support.
WARNING:root:[FileNotFoundError(2, 'No such file or directory'), AttributeError("module 'torchvision.datasets' has no attribute 'CUB200'")]
Traceback (most recent call last):
  File "run.py", line 99, in <module>
    run_new_experiment(YR, config_foldernames)
  File "run.py", line 87, in run_new_experiment
    return run(args)
  File "run.py", line 45, in run
    run_output = api_parser.run()
  File "/home/bartosz.ludwiczuk/CODE/powerful-benchmarker/api_parsers/base_api_parser.py", line 50, in run
    self.set_split_manager()
  File "/home/bartosz.ludwiczuk/CODE/powerful-benchmarker/api_parsers/base_api_parser.py", line 141, in set_split_manager
    chosen_dataset = self.pytorch_getter.get("dataset", yaml_dict=self.args.dataset, additional_params={"dataset_root":self.args.dataset_root})
  File "/home/bartosz.ludwiczuk/.conda/envs/powerful_benchmarker/lib/python3.7/site-packages/easy_module_attribute_getter/easy_module_attribute_getter.py", line 22, in get
    raise BaseException
BaseException

Specify some GPUs in parallel

Maybe I didn't look at the code in enough depth, but I can't find where to specify multiple GPUs to run in parallel. I can do this in the pytorch-metric-learning package.
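Not benchmarker-specific, but a generic PyTorch way to restrict which GPUs a run can see is shown below; whether powerful-benchmarker then uses them in parallel depends on its own internals.

# Generic PyTorch pattern (not a powerful-benchmarker feature): limit visible
# GPUs before CUDA is initialized, then optionally wrap a model in DataParallel.
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"   # must be set before importing torch

import torch
import torch.nn as nn

model = nn.Linear(128, 64)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)           # splits each batch across visible GPUs
model = model.to("cuda" if torch.cuda.is_available() else "cpu")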

Understanding why contrastive loss is consistently better than triplet loss

Hi Kevin, thanks for the exhaustive work and stellar benchmarking.
I have 2 questions here.

  1. Your results show that contrastive loss is consistently better than triplet loss. This runs against common (at least my personal) intuition that triplet loss demands less of the learned embedding space and makes learning easier. Would you mind suggesting some explanations as to why this intuition is incorrect?

  2. Are some losses easier to tune than others? Are some losses more robust to hyperparameter change / sampling strategy, and therefore easier to use in practice? My intuition suggests (maybe erroneously again) that a classification-based loss would have such properties (though temperature can be pesky too). This is not the focus of this work and is very hard to quantify, but some comments based on your personal user experience would be very valuable.

Best score for each model

Which value is assumed to be the best for each model in your CSV file? Is it:

  • the last model?
  • the model that gives the best score during training (e.g., epoch 50/100)?

I just looked at SoftTriplet, and they report the last model.
On the other hand, Multi-Similarity Loss reports the best iteration.

We need to be clear about the evaluation scenario.

lr_scheduler

Hello, I can't add an lr_scheduler.

In config_optimizers/default.yaml I tried to add CosineAnnealingLR:

optimizers:
  trunk_optimizer:
    Adam:
      lr: 0.00001
      weight_decay: 0.00005
    scheduler:
      CosineAnnealingLR:
        T_max: 10
  embedder_optimizer:
    Adam:
      lr: 0.00001
      weight_decay: 0.00005
    scheduler:
      CosineAnnealingLR:
        T_max: 10

but I got:

Traceback (most recent call last):
  File "/home/lironghua/.pycharm_helpers/pydev/pydevd.py", line 1758, in <module>
    main()
  File "/home/lironghua/.pycharm_helpers/pydev/pydevd.py", line 1752, in main
    globals = debugger.run(setup['file'], None, None, is_module)
  File "/home/lironghua/.pycharm_helpers/pydev/pydevd.py", line 1147, in run
    pydev_imports.execfile(file, globals, locals) # execute the script
  File "/home/lironghua/.pycharm_helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/home/lironghua/powerful_benchmarker-master/run.py", line 95, in <module>
    run_new_experiment(YR, config_foldernames)
  File "/home/lironghua/powerful_benchmarker-master/run.py", line 83, in run_new_experiment
    run(args)
  File "/home/lironghua/powerful_benchmarker-master/run.py", line 45, in run
    api_parser.run()
  File "/home/lironghua/powerful_benchmarker-master/api_parsers/base_api_parser.py", line 55, in run
    self.set_models_optimizers_losses()
  File "/home/lironghua/powerful_benchmarker-master/api_parsers/base_api_parser.py", line 316, in set_models_optimizers_losses
    self.set_optimizers()
  File "/home/lironghua/powerful_benchmarker-master/api_parsers/base_api_parser.py", line 113, in set_optimizers
    o, s, g = self.pytorch_getter.get_optimizer(param_source, yaml_dict=v)
  File "/home/lironghua/.conda/envs/metric/lib/python3.7/site-packages/easy_module_attribute_getter/pytorch_getter.py", line 49, in get_optimizer
    return_uninitialized=True
  File "/home/lironghua/.conda/envs/metric/lib/python3.7/site-packages/easy_module_attribute_getter/easy_module_attribute_getter.py", line 9, in get
    (class_name, params), = yaml_dict.items()
ValueError: too many values to unpack (expected 1)
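The last frame of the traceback shows what goes wrong mechanically: the getter unpacks each optimizer's YAML dict expecting exactly one class, so a second key (the scheduler) under the same optimizer breaks it. Below is a plain-Python reproduction of just that unpack behaviour, not a statement about where the benchmarker actually expects schedulers to be configured.

# The getter's unpack pattern requires a single-key dict per optimizer entry.
single = {"Adam": {"lr": 0.00001, "weight_decay": 0.00005}}
(class_name, params), = single.items()      # works: exactly one item
print(class_name, params)

double = {
    "Adam": {"lr": 0.00001, "weight_decay": 0.00005},
    "scheduler": {"CosineAnnealingLR": {"T_max": 10}},
}
try:
    (class_name, params), = double.items()  # two items -> ValueError
except ValueError as err:
    print(err)                              # too many values to unpack (expected 1)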

Recreate Test set

I'm trying to recreate the train/test split exactly as in the experiment, in raw Python code.

These are my settings from config_dataset.yaml

split_manager:
  ClassDisjointSplitManager:
    test_size: 0.5
    test_start_idx: 0.5
    num_training_partitions: 4
    num_training_sets: 4
    hierarchy_level: 0
    data_and_label_getter_keys: null

Is there any (easy) way to do this?
I guess I would need to instantiate ClassDisjointSplitManager with the seed from the experiment, but I don't know where I can find this info.

cross validation

num_training_partitions: 10
num_training_sets: 5
Translation:

The test set consists of classes with labels in [num_labels * start_idx, num_labels * (start_idx+size)]. Note that if we set start_idx to 0.9, the range would wrap around to the beginning (0.9 to 1, 0 to 0.4).
The remaining classes will be split into 10 equal sized partitions.
5 of those partitions will be used for training. In other words, 5-fold cross validation will be performed, but the size of the partitions will be the same as if 10-fold cross validation was being performed.
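As a rough illustration only, the split rule quoted above can be reimplemented in a few lines of plain Python. This mirrors the quoted description, not the library's actual ClassDisjointSplitManager code.

import numpy as np

def class_disjoint_split(labels, test_start_idx=0.5, test_size=0.5, num_partitions=10):
    # Reimplementation of the quoted rule, including the wrap-around behaviour.
    classes = np.unique(labels)
    n = len(classes)
    start = int(n * test_start_idx)
    end = start + int(n * test_size)
    idx = np.arange(start, end) % n                  # wrap around past the end
    test_classes = set(classes[idx])
    remaining = [c for c in classes if c not in test_classes]
    partitions = np.array_split(remaining, num_partitions)
    return test_classes, partitions

# Example: 100 classes, start_idx=0.9, size=0.5 -> test classes 90..99 and 0..39
labels = np.repeat(np.arange(100), 5)
test_classes, partitions = class_disjoint_split(labels, 0.9, 0.5, 10)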

If I use cross validation, which parameters will be tuned, and where can I set them?

Improve config file location

Currently, the only practical way to use your own config files in addition to the default ones is to download the default config files into some folder, set --root_config_folder to that folder, and then put your custom config files in there.

Custom trainer

Hello!

I was trying to run the training with a custom trainer (inherited from trainers.MetricLossOnly), which I registered as

r = runner(**(args.__dict__))
r.register("trainer", MyTrainer)
r.run()

and adjusted the config file accordingly:
training_method: MyTrainer

However, the run.py script produces the following error:

AttributeError: module 'powerful_benchmarker.api_parsers' has no attribute 'APIMyTrainer'

What is the correct way to run a custom trainer? Basically, I am trying to modify positive pairs from a batch in a certain way (apply augmentations like in UnsupervisedEmbeddingsUsingAugmentations, but differently). Maybe you can suggest another way this could be implemented?

Thanks in advance!
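Judging from the error message, the framework appears to look for an API parser named API<trainer name> inside powerful_benchmarker.api_parsers, so one plausible fix is to register such a parser alongside the trainer, mirroring the registration pattern from the custom-dataset issue above. This is a hedged sketch, not a confirmed answer; the exact base class and required overrides may differ.

# Sketch based on the r.register(...) pattern from other issues on this page.
from powerful_benchmarker import api_parsers
from pytorch_metric_learning import trainers

class MyTrainer(trainers.MetricLossOnly):
    pass  # custom positive-pair / augmentation logic would go here

class APIMyTrainer(api_parsers.BaseAPIParser):
    pass  # override get_trainer_kwargs() if MyTrainer needs extra kwargs

r = runner(**(args.__dict__))        # `runner` and `args` set up as in the other issues
r.register("trainer", MyTrainer)
r.register("api_parser", APIMyTrainer)
r.run()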

Test and validation accuracies

I'm having difficulty understanding the experiment logs of Bayesian hyperparameter optimization.

I want to verify:

  • Files and dirs named bayes_opt_* report TEST accuracies.
  • <experiment_name><ts><trial number> subdirs, which each contain Test50_50_Partitions_, report VAL accuracies on each partition.

Question about "Metric Learning Reality Check" paper

Hi,

I'm not sure if this is an appropriate place, but I'd like to ask a question about your "A Metric Learning Reality Check" paper. It's excellent work with great practical applications. It seems that a lot of 'progress' in the deep metric learning field is due to some tweaking of parameters or a flawed training/evaluation protocol.

But after reading the paper, one thing is not clear to me: how are the training entities (pairs or triplets) constructed? What miner (hard negative mining), if any, is used? Specifically, how are the pairs used in the contrastive loss and the triplets used in the triplet loss constructed in each batch (containing C * M elements)? Are all possible pairs or triplets constructed for each batch, or is some form of hard negative mining used?

Maybe the reported improvements in recent deep metric learning methods were due to better mining schemes, not to the formulation of the loss function itself?
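Not speaking for the paper, but for concreteness, "all possible pairs/triplets per batch" would mean an enumeration like the sketch below, driven purely by the batch labels.

import torch

def all_pairs_and_triplets(labels):
    # Enumerate every positive/negative pair and every triplet in a batch from
    # the labels alone; illustrative only, not necessarily the paper's scheme.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)     # (N, N) same-class mask
    eye = torch.eye(len(labels), dtype=torch.bool)
    pos_pairs = torch.nonzero(same & ~eye)                # anchor-positive index pairs
    neg_pairs = torch.nonzero(~same)                      # anchor-negative index pairs
    a, p = pos_pairs.t()
    triplets = [(int(ai), int(pi), int(ni))
                for ai, pi in zip(a, p)
                for ni in torch.nonzero(~same[ai]).flatten()]
    return pos_pairs, neg_pairs, triplets

labels = torch.tensor([0, 0, 1, 1, 2])   # toy batch with C=3 classes
pos, neg, trips = all_pairs_and_triplets(labels)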

Reproduce results

Hi,
I want to reproduce one of the results from your CSV file. I downloaded the triplet config. How can I feed it into the benchmarker?

I found one way:

  • copy all the configs to the related config directory (e.g., config_eval.yaml to configs/config_eval/) and rename them to triplet.yaml
  • run with the command:
python run.py --experiment_name triplet_v1 --config_general default triplet --config_models default triplet --config_optimizers default triplet --config_loss_and_miners default triplet --config_transforms default triplet --config_eval default triplet

I need to have two configs (default and triplet) because, for example, eval_pca is not defined in the triplet configs.
Is this the correct way of reproducing the results, or is there a better way?

where's the code corresponding to "special_split_scheme"?

Thank you for releasing powerful-benchmarker! Can you point to where the code for the configuration option "special_split_scheme" is? I also notice that the numbers reported in the 4-fold cross-validation Excel file do not perform worse than the one trained with the original 50/50 train/test split, correct?

Subclass BaseAPIParser has no attribute 'split_manager'

After the Bayes trials finished, I got the stack trace: object has no attribute 'split_manager'.

Does a subclass of BaseAPIParser need to implement a 'split_manager'? And can I still resume the experiment at this point?

I'm running pytorch_metric_learning='0.9.87.dev4' and powerful-benchmarker='0.9.26',
and my subclass is defined as follows:

class APITwoStreamMetricLoss(api_parsers.BaseAPIParser):
    
    def get_tester_kwargs(self):
        trainer_kwargs = super().get_tester_kwargs()
        trainer_kwargs["accuracy_calculator"] = SequencePrecisionAtK()
        return trainer_kwargs

    def get_trainer_kwargs(self):
        trainer_kwargs = super().get_trainer_kwargs()
        return trainer_kwargs

Problem with hp optimisation

Hi! I ran hyperparameter optimization (on my server) and after the 1st trial it freezes.
The only thing I changed is that I added my own dataset class and changed the corresponding configuration parameter. Previously, I ran run.py without any problems.

The double slash in the log below looks suspicious: bayesian_optimizer_logs//log00000.json

My run command: python run_bayesian_optimization.py --bayesian_optimization_n_iter 50 --loss_funcs~OVERRIDE~ {metric_loss: {MultiSimilarityLoss: {alpha~BAYESIAN~: [0.01, 50], beta~BAYESIAN~: [0.01, 50], base~BAYESIAN~: [0, 1]}}} --mining_funcs~OVERRIDE~ {post_gradient_miner: {MultiSimilarityMiner: {epsilon~BAYESIAN~: [0, 1]}}} --experiment_name test5050_multi_similarity_with_ms_miner --root_experiment_folder experiments_opt --pytorch_home models

Could you please help? Thanks!

INFO:root:embedding dimensionality is 128                                                                                                           
WARNING clustering 45 points to 9 centroids: please provide at least 351 training points                                                            
INFO:root:New best accuracy!                                                                                                                        
INFO:root:SPLIT: Test50_50_Partitions4_3 / train / length 140                                                                                       
INFO:root:TRAINING EPOCH 3                                                                                                                          
total_loss=0.27627: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:19<00:00,  5.08it/s]
INFO:root:TRAINING EPOCH 4                                                                                                                          
total_loss=0.18487: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 100/100 [00:19<00:00,  5.10it/s]
INFO:root:COLLECTING DATASETS FOR EVAL                                                                                                              
INFO:root:SPLIT: Test50_50_Partitions4_3 / train / length 140                                                                                       
INFO:root:SPLIT: Test50_50_Partitions4_3 / val / length 45                                                                                          
INFO:root:Evaluating epoch 4                                                                                                                        
INFO:root:Getting embeddings for the train split                                                                                                    
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 12.46it/s]
INFO:root:Getting embeddings for the val split                                                                                                      
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00,  7.73it/s]
INFO:root:Computing accuracy for the train split                                                                                                    
INFO:root:running k-nn with k=5                                                                                                                     
INFO:root:embedding dimensionality is 128                                                                                                           
WARNING clustering 140 points to 28 centroids: please provide at least 1092 training points                                                         
INFO:root:Computing accuracy for the val split                                                                                                      
INFO:root:running k-nn with k=5                                                                                                                     
INFO:root:embedding dimensionality is 128                                                                                                           
WARNING clustering 45 points to 9 centroids: please provide at least 351 training points                                                            
[INFO 01-25 14:03:10] ax.service.ax_client: Completed trial 0 with data: {'mean_average_r_precision': (0.57, 0.03)}.                                
[INFO 01-25 14:03:10] ax.service.ax_client: Saved JSON-serialized state of optimization to `experiments_opt/bayesian_optimizer_logs//log00000.json`.

Reason to freeze_batchnorm

I sampled some results from the benchmark results page and see that you tend to set freeze_batchnorm=True. What is the reason for freezing batchnorm?

Use benchmark for new datasets

Hello Kevin,
I really appreciate your work in metric learning, from pytorch-metric-learning to the reality check.
While using this benchmarker, I was able to easily use the datasets and losses provided.
However, I would like to use my own dataset with this benchmarker, and I am having difficulty.
My dataset consists of subfolders with class names and images in them. Is there a simple way to do this?
I apologize in advance if this is a simple question (I am fairly new to this field).
Thank you, and keep up the good work!
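One possible approach, sketched under two assumptions: that the folder layout matches torchvision's ImageFolder convention, and that custom datasets can be registered the way other issues on this page register them. The class name and the dataset_root keyword are assumptions, not the library's documented API.

# Hedged sketch: expose a folder-per-class dataset to the benchmarker by
# wrapping torchvision's ImageFolder. The dataset_root/transform keywords are
# assumptions modeled on how the built-in datasets appear to be used.
from torchvision import datasets

class MyFolderDataset(datasets.ImageFolder):
    def __init__(self, dataset_root, transform=None, **kwargs):
        super().__init__(root=dataset_root, transform=transform)

# Registration, following the pattern shown in the custom-dataset issue above:
# r = runner(**(args.__dict__))
# r.register("dataset", MyFolderDataset)
# r.run()
# and then reference MyFolderDataset in the dataset section of the config.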

Make it easier to change bayes opt bounds

Currently, if you've already started Bayesian optimization and you realize you want to change the bounds, you have to manually edit the Ax JSON log file. This is very hacky.

Move trainer and tester config options into object form

All the eval options like eval_reference_set, eval_batch_size etc. should be specified as parameters of an object, just like loss functions, models etc. The same goes for trainer parameters like freeze_batchnorm and batch_size.

Reduce Dependency on Command Line

On certain platforms (you know the ones I'm talking about), we only have a notebook environment and can't launch a Python script (e.g. run.py) with command-line arguments. It would be great if the dependency on command-line arguments (and on-disk config files, for that matter) could be reduced or eliminated, allowing all these parameters to be specified in code only. For example, I currently need code like

sys.argv = ["run.py", "--experiment_name", 'MCMBenchmark']

to make the runner not crash. This isn't critical since I have workarounds but I figured I'd bookmark it here.
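For anyone hitting the same constraint, here is a slightly fuller version of that workaround. Only --experiment_name comes from the snippet above; the other flags are taken from the argparse setup shown in another issue on this page and may need adjusting.

# Notebook workaround sketch: populate sys.argv before the runner parses it,
# so no real command line is needed.
import sys

sys.argv = [
    "run.py",
    "--experiment_name", "MCMBenchmark",
    "--dataset_root", "/data/",
    "--root_experiment_folder", "/home/experiments",
]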

How to use the Bayesian optimization to tune ArcFaceLoss hyperparameters?

I ran the command below, and it returned a TypeError. I don't know how to add the embedding_size argument correctly.
python run.py --bayes_opt_iters 20 \
--loss_funcs~OVERRIDE~ {metric_loss: {ArcFaceLoss: {embedding_size: 128, lr~BAYESIAN~: [0, 1], margin~BAYESIAN~: [0, 100], scale~BAYESIAN~: [0, 100]}}} \
--experiment_name bayes_opt20_ArcFaceLoss

TypeError: __init__() missing 1 required positional argument: 'embedding_size'
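For context, instantiating the loss directly in pytorch-metric-learning looks roughly like the following; the exact signature can differ between versions, and how num_classes/embedding_size get injected through the benchmarker's config is a separate question.

# Direct instantiation sketch (version-dependent; not the benchmarker's config path).
from pytorch_metric_learning import losses

loss_func = losses.ArcFaceLoss(
    num_classes=100,      # number of training classes in the dataset
    embedding_size=128,   # must match the embedder's output dimension
    margin=28.6,          # angular margin in degrees (roughly the library default)
    scale=64,             # logit scale
)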

Hyperparameter for Triplet-Loss

Hi,
I'm investigating your code and testing methodology further, and I have a question.
You decided to choose a margin of just 0.01 for the triplet loss, when the default value is 0.2.
Is there any reason/paper for that? Did you arrive at this value after some experiments?

Model 0 should be saved as "best" before training starts

The model for epoch 0 (untrained trunk+embedder) should be saved as the "best". Otherwise, if there is no improvement in accuracy, there will be no "best" model saved, which causes a confusing bug, as seen in issue #48.

The reason it isn't saved is that run_tester_separately doesn't save models, whereas the end-of-epoch hook does.

Number of Sobol steps is assumed to be 5

In BayesOptRunner, the number of Sobol steps is assumed to be 5, but it's actually max(5, num_hyperparameters). I didn't notice because I've never optimized more than 5 hyperparameters.
