huntermcgushion / hyperparameter_hunter

Easy hyperparameter optimization and automatic result saving across machine learning algorithms and libraries

License: MIT License

Languages: Python 99.28%, Makefile 0.05%, Shell 0.68%
Topics: artificial-intelligence, machine-learning, hyperparameter-optimization, hyperparameter-tuning, neural-network, keras, scikit-learn, xgboost, lightgbm, catboost, deep-learning, data-science, python, rgf, sklearn, optimization, experimentation, feature-engineering, ai, ml

hyperparameter_hunter's Introduction

HyperparameterHunter

HyperparameterHunter Overview


Automatically save and learn from Experiment results, leading to long-term, persistent optimization that remembers all your tests.

HyperparameterHunter provides a wrapper for machine learning algorithms that saves all the important data. Simplify the experimentation and hyperparameter tuning process by letting HyperparameterHunter do the hard work of recording, organizing, and learning from your tests — all while using the same libraries you already do. Don't let any of your experiments go to waste, and start doing hyperparameter optimization the way it was meant to be.

Features

  • Automatically record Experiment results
  • Truly informed hyperparameter optimization that automatically uses past Experiments
  • Eliminate boilerplate code for cross-validation loops, predicting, and scoring
  • Stop worrying about keeping track of hyperparameters, scores, or re-running the same Experiments
  • Use the libraries and utilities you already love

How to Use HyperparameterHunter

Don't think of HyperparameterHunter as another optimization library that you bring out only when it's time to do hyperparameter optimization. Of course, it does optimization, but it's better to view HyperparameterHunter as your own personal machine learning toolbox/assistant.

The idea is to start using HyperparameterHunter immediately. Run all of your benchmark/one-off experiments through it.

The more you use HyperparameterHunter, the better your results will be. If you just use it for optimization, sure, it’ll do what you want, but that’s missing the point of HyperparameterHunter.

If you’ve been using it for experimentation and optimization along the entire course of your project, then when you decide to do hyperparameter optimization, HyperparameterHunter is already aware of all that you’ve done, and that’s when HyperparameterHunter does something remarkable. It doesn’t start optimization from scratch like other libraries. It starts from all of the Experiments and previous optimization rounds you’ve already run through it.

Getting Started

1) Environment:

Set up an Environment to organize Experiments and Optimization results.
Any Experiments or Optimization rounds we perform will use our active Environment.

from hyperparameter_hunter import Environment, CVExperiment
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold

data = load_breast_cancer()
df = pd.DataFrame(data=data.data, columns=data.feature_names)
df['target'] = data.target

env = Environment(
    train_dataset=df,  # Add holdout/test dataframes, too
    results_path='path/to/results/directory',  # Where your result files will go
    metrics=['roc_auc_score'],  # Callables, or strings referring to `sklearn.metrics`
    cv_type=StratifiedKFold,  # Class, or string in `sklearn.model_selection`
    cv_params=dict(n_splits=5, shuffle=True, random_state=32)
)
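
The train_dataset comment above mentions holdout/test dataframes. A minimal sketch of supplying a holdout split follows; it assumes holdout_dataset accepts a DataFrame shaped like train_dataset (holdout predictions are then saved alongside the OOF predictions described in the Output File Structure section below):

from sklearn.model_selection import train_test_split

train_df, holdout_df = train_test_split(df, test_size=0.2, stratify=df['target'], random_state=32)

env = Environment(
    train_dataset=train_df,
    holdout_dataset=holdout_df,  # Assumed to take a DataFrame with the same columns as `train_dataset`
    results_path='path/to/results/directory',
    metrics=['roc_auc_score'],
    cv_type=StratifiedKFold,
    cv_params=dict(n_splits=5, shuffle=True, random_state=32)
)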

2) Individual Experimentation:

Perform Experiments with your favorite libraries simply by providing model initializers and hyperparameters

Keras
# Same format used by `keras.wrappers.scikit_learn`. Nothing new to learn
def build_fn(input_shape):  # `input_shape` calculated for you
    model = Sequential([
        Dense(100, kernel_initializer='uniform', input_shape=input_shape, activation='relu'),
        Dropout(0.5),
        Dense(1, kernel_initializer='uniform', activation='sigmoid')
    ])  # All layer arguments saved (whether explicit or Keras default) for future use
    model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
    return model

experiment = CVExperiment(
    model_initializer=KerasClassifier,
    model_init_params=build_fn,  # We interpret your build_fn to save hyperparameters in a useful, readable format
    model_extra_params=dict(
        callbacks=[ReduceLROnPlateau(patience=5)],  # Use Keras callbacks
        batch_size=32, epochs=10, verbose=0  # Fit/predict arguments
    )
)
SKLearn
experiment = CVExperiment(
    model_initializer=LinearSVC,  # (Or any of the dozens of other SK-Learn algorithms)
    model_init_params=dict(penalty='l1', C=0.9)  # Default values used and recorded for kwargs not given
)
XGBoost
experiment = CVExperiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(objective='reg:linear', max_depth=3, n_estimators=100, subsample=0.5)
)
LightGBM
experiment = CVExperiment(
    model_initializer=LGBMClassifier,
    model_init_params=dict(boosting_type='gbdt', num_leaves=31, max_depth=-1, min_child_samples=5, subsample=0.5)
)
CatBoost
experiment = CVExperiment(
    model_initializer=CatBoostClassifier,
    model_init_params=dict(iterations=500, learning_rate=0.01, depth=7, allow_writing_files=False),
    model_extra_params=dict(fit=dict(verbose=True))  # Send kwargs to `fit` and other extra methods
)
RGF
experiment = CVExperiment(
    model_initializer=RGFClassifier,
    model_init_params=dict(max_leaf=1000, algorithm='RGF', min_samples_leaf=10)
)
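
The snippets above (and the optimization snippets below) assume the relevant model classes and Keras utilities have already been imported. A minimal sketch of those imports; the Keras paths reflect standalone Keras as used when this project was written, and the rgf.sklearn path assumes the rgf_python package is installed:

from keras.models import Sequential
from keras.layers import Dense, Dropout
from keras.callbacks import ReduceLROnPlateau
from keras.wrappers.scikit_learn import KerasClassifier

from sklearn.svm import LinearSVC
from sklearn.ensemble import AdaBoostClassifier
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier
from rgf.sklearn import RGFClassifier  # Provided by the rgf_python package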

3) Hyperparameter Optimization:

Just like Experiments, but if you want to optimize a hyperparameter, use the classes imported below

from hyperparameter_hunter import Real, Integer, Categorical
from hyperparameter_hunter import optimization as opt
Keras
def build_fn(input_shape):
    model = Sequential([
        Dense(Integer(50, 150), input_shape=input_shape, activation='relu'),
        Dropout(Real(0.2, 0.7)),
        Dense(1, activation=Categorical(['sigmoid', 'softmax']))
    ])
    model.compile(
        optimizer=Categorical(['adam', 'rmsprop', 'sgd', 'adadelta']),
        loss='binary_crossentropy', metrics=['accuracy']
    )
    return model

optimizer = opt.RandomForestOptPro(iterations=7)
optimizer.forge_experiment(
    model_initializer=KerasClassifier,
    model_init_params=build_fn,
    model_extra_params=dict(
        callbacks=[ReduceLROnPlateau(patience=Integer(5, 10))],
        batch_size=Categorical([32, 64]),
        epochs=10, verbose=0
    )
)
optimizer.go()
SKLearn
optimizer = opt.DummyOptPro(iterations=42)
optimizer.forge_experiment(
    model_initializer=AdaBoostClassifier,  # (Or any of the dozens of other SKLearn algorithms)
    model_init_params=dict(
        n_estimators=Integer(75, 150),
        learning_rate=Real(0.8, 1.3),
        algorithm='SAMME.R'
    )
)
optimizer.go()
XGBoost
optimizer = opt.BayesianOptPro(iterations=10)
optimizer.forge_experiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(
        max_depth=Integer(low=2, high=20),
        learning_rate=Real(0.0001, 0.5),
        n_estimators=200,
        subsample=0.5,
        booster=Categorical(['gbtree', 'gblinear', 'dart']),
    )
)
optimizer.go()
LightGBM
optimizer = opt.BayesianOptPro(iterations=100)
optimizer.forge_experiment(
    model_initializer=LGBMClassifier,
    model_init_params=dict(
        boosting_type=Categorical(['gbdt', 'dart']),
        num_leaves=Integer(5, 20),
        max_depth=-1,
        min_child_samples=5,
        subsample=0.5
    )
)
optimizer.go()
CatBoost
optimizer = opt.GradientBoostedRegressionTreeOptPro(iterations=32)
optimizer.forge_experiment(
    model_initializer=CatBoostClassifier,
    model_init_params=dict(
        iterations=100,
        eval_metric=Categorical(['Logloss', 'Accuracy', 'AUC']),
        learning_rate=Real(low=0.0001, high=0.5),
        depth=Integer(4, 7),
        allow_writing_files=False
    )
)
optimizer.go()
RGF
optimizer = opt.ExtraTreesOptPro(iterations=10)
optimizer.forge_experiment(
    model_initializer=RGFClassifier,
    model_init_params=dict(
        max_leaf=1000,
        algorithm=Categorical(['RGF', 'RGF_Opt', 'RGF_Sib']),
        l2=Real(0.01, 0.3),
        normalize=Categorical([True, False]),
        learning_rate=Real(0.3, 0.7),
        loss=Categorical(['LS', 'Expo', 'Log', 'Abs'])
    )
)
optimizer.go()
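
All of the OptPros above share the same forge_experiment/go interface, so swapping the search strategy is a one-line change, and later rounds automatically reuse the results saved by earlier ones. A rough sketch, reusing the XGBoost imports and space definitions from the examples above:

for opt_pro in (opt.RandomForestOptPro, opt.BayesianOptPro, opt.ExtraTreesOptPro):
    optimizer = opt_pro(iterations=10)
    optimizer.forge_experiment(
        model_initializer=XGBClassifier,
        model_init_params=dict(
            max_depth=Integer(low=2, high=20),
            learning_rate=Real(0.0001, 0.5),
            n_estimators=200,
            subsample=0.5
        )
    )
    optimizer.go()  # Each round starts from everything recorded so far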

Output File Structure

This is a simple illustration of the file structure you can expect your Experiments to generate. For an in-depth description of the directory structure and the contents of the various files, see the File Structure Overview section in the documentation. However, the essentials are as follows:

  1. An Experiment adds a file to each HyperparameterHunterAssets/Experiments subdirectory, named by experiment_id
  2. Each Experiment also adds an entry to HyperparameterHunterAssets/Leaderboards/GlobalLeaderboard.csv (a quick way to inspect it is sketched below, after the directory tree)
  3. Customize which files are created via Environment's file_blacklist and do_full_save kwargs (see the documentation)
HyperparameterHunterAssets
|   Heartbeat.log
|
└───Experiments
|   |
|   └───Descriptions
|   |   |   <Files describing Experiment results, conditions, etc.>.json
|   |
|   └───Predictions<OOF/Holdout/Test>
|   |   |   <Files containing Experiment predictions for the indicated dataset>.csv
|   |
|   └───Heartbeats
|   |   |   <Files containing the log produced by the Experiment>.log
|   |
|   └───ScriptBackups
|       |   <Files containing a copy of the script that created the Experiment>.py
|
└───Leaderboards
|   |   GlobalLeaderboard.csv
|   |   <Other leaderboards>.csv
|
└───TestedKeys
|   |   <Files named by Environment key, containing hyperparameter keys>.json
|
└───KeyAttributeLookup
    |   <Files linking complex objects used in Experiments to their hashes>
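
Since GlobalLeaderboard.csv is a plain CSV file, it can be inspected with ordinary pandas tooling. A minimal sketch (the exact columns depend on your Environment's metrics, and the path assumes results_path='HyperparameterHunterAssets'):

import pandas as pd

leaderboard = pd.read_csv('HyperparameterHunterAssets/Leaderboards/GlobalLeaderboard.csv')
print(leaderboard.head())  # One row per Experiment, with a column for each recorded metric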

Installation

pip install hyperparameter-hunter

If you like being on the cutting-edge, and you want all the latest developments, run:

pip install git+https://github.com/HunterMcGushion/hyperparameter_hunter.git

If you want to contribute to HyperparameterHunter, get started here.

I Still Don't Get It

That's ok. Don't feel bad. It's a bit weird to wrap your head around. Here's an example that illustrates how everything is related:

from hyperparameter_hunter import Environment, CVExperiment, BayesianOptPro, Integer
from hyperparameter_hunter.utils.learning_utils import get_breast_cancer_data
from xgboost import XGBClassifier

# Start by creating an `Environment` - This is where you define how Experiments (and optimization) will be conducted
env = Environment(
    train_dataset=get_breast_cancer_data(target='target'),
    results_path='HyperparameterHunterAssets',
    metrics=['roc_auc_score'],
    cv_type='StratifiedKFold',
    cv_params=dict(n_splits=10, shuffle=True, random_state=32),
)

# Now, conduct an `Experiment`
# This tells HyperparameterHunter to use the settings in the active `Environment` to train a model with these hyperparameters
experiment = CVExperiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(
        objective='reg:linear',
        max_depth=3
    )
)

# That's it. No annoying boilerplate code to fit models and record results
# Now, the `Environment`'s `results_path` directory will contain new files describing the Experiment just conducted

# Time for the fun part. We'll set up some hyperparameter optimization by first defining the `OptPro` (Optimization Protocol) we want
optimizer = BayesianOptPro(verbose=1)

# Now we're going to say which hyperparameters we want to optimize.
# Notice how this looks just like our `experiment` above
optimizer.forge_experiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(
        objective='reg:linear',  # We're setting this as a constant guideline - Not one to optimize
        max_depth=Integer(2, 10)  # Instead of using an int like the `experiment` above, we provide a space to search
    )
)
# Notice that our range for `max_depth` includes the `max_depth=3` value we used in our `experiment` earlier

optimizer.go()  # Now, we go

assert experiment.experiment_id in [_[2] for _ in optimizer.similar_experiments]
# Here we're verifying that the `experiment` we conducted first was found by `optimizer` and used as learning material
# You can also see via the console that we found `experiment`'s saved files, and used it to start optimization

last_experiment_id = optimizer.current_experiment.experiment_id
# Let's save the id of the experiment that was just conducted by `optimizer`

optimizer.go()  # Now, we'll start up `optimizer` again...

# And we can see that this second optimization round learned from both our first `experiment` and our first optimization round
assert experiment.experiment_id in [_[2] for _ in optimizer.similar_experiments]
assert last_experiment_id in [_[2] for _ in optimizer.similar_experiments]
# It even did all this without us having to tell it what experiments to learn from

# Now think about how much better your hyperparameter optimization will be when it learns from:
# - All your past experiments, and
# - All your past optimization rounds
# And the best part: HyperparameterHunter figures out which experiments are compatible all on its own
# You don't have to worry about telling it that KFold=5 is different from KFold=10,
# Or that max_depth=12 is outside of max_depth=Integer(2, 10)

Tested Libraries

HyperparameterHunter is tested against Keras, scikit-learn, XGBoost, LightGBM, CatBoost, and RGF, as illustrated in the examples above.

Gotchas/FAQs

These are some things that might "getcha"

General:

  • Can't provide initial search points to OptPro?
    • This is intentional. If you want your optimization rounds to start with specific search points (that you haven't recorded yet), simply perform a CVExperiment before initializing your OptPro (a sketch follows this list)
    • Assuming the two have the same guideline hyperparameters and the Experiment fits within the search space defined by your OptPro, the optimizer will locate and read in the results of the Experiment
    • Keep in mind, you'll probably want to remove the Experiment after you've done it once, as the results have been saved. Leaving it there will just execute the same Experiment over and over again
  • After changing things in my "HyperparameterHunterAssets" directory, everything stopped working
    • Yeah, don't do that. Especially not with "Descriptions", "Leaderboards", or "TestedKeys"
    • HyperparameterHunter figures out what's going on by reading these files directly.
    • Removing them, or changing their contents can break a lot of HyperparameterHunter's functionality
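
A sketch of the "seed an OptPro with a specific starting point" workflow mentioned above, reusing the XGBoost setup from the earlier examples (run it once, then remove or comment out the CVExperiment so it isn't re-executed on every run):

experiment = CVExperiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(max_depth=4, n_estimators=200, subsample=0.5)
)

optimizer = opt.BayesianOptPro(iterations=10)
optimizer.forge_experiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(
        max_depth=Integer(2, 10),  # The space includes the `max_depth=4` point recorded above
        n_estimators=200,
        subsample=0.5
    )
)
optimizer.go()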

Keras:

  • Can't find similar Experiments with simple Dense/Activation neural networks?
    • This is likely caused by switching between using a separate Activation layer, and providing a Dense layer with the activation kwarg
    • Each layer is treated as its own little set of hyperparameters (as well as being a hyperparameter itself), which means that as far as HyperparameterHunter is concerned, the following two examples are NOT equivalent:
      • Dense(10, activation='sigmoid')
      • Dense(10); Activation('sigmoid')
    • We're working on this, but for now, the workaround is just to be consistent with how you add activations to your models (see the sketch at the end of this Keras list)
      • Either use separate Activation layers, or provide activation kwargs to other layers, and stick with it!
  • Can't optimize the model.compile arguments: optimizer and optimizer_params at the same time?
    • This happens because Keras’ optimizers expect different arguments
    • For example, when optimizer=Categorical(['adam', 'rmsprop']), there are two different possible dicts of optimizer_params
    • For now, you can only optimize optimizer and optimizer_params separately
    • A good way to do this might be to select a few optimizers you want to test, and don’t provide an optimizer_params value. That way, each optimizer will use its default parameters
      • Then you can select which optimizer was the best, and set optimizer=<best optimizer>, then move on to tuning optimizer_params, with arguments specific to the optimizer you selected
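
A minimal sketch of the activation-consistency point above; the imports are standard Keras, and the two models below compute the same thing but are NOT treated as equivalent by HyperparameterHunter:

from keras.models import Sequential
from keras.layers import Activation, Dense

model_a = Sequential([Dense(10, activation='sigmoid')])   # Activation given as a `Dense` kwarg
model_b = Sequential([Dense(10), Activation('sigmoid')])  # Activation given as a separate layer

# Pick one of these conventions and stick with it, so similar Experiments can be matched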

CatBoost:

  • Can't find similar Experiments for CatBoost?
    • This may be happening because the default values for the kwargs expected in CatBoost’s model __init__ methods are defined somewhere else, and given placeholder values of None in their signatures
    • Because of this, HyperparameterHunter assumes that the default value for an argument really is None if you don’t explicitly provide a value for that argument
    • This is obviously not the case, but I just can’t seem to figure out where the actual default values used by CatBoost are located, so if anyone knows how to remedy this situation, I would love your help!
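
One way to soften this, implied by the note above (a sketch, not an official fix): explicitly pass the CatBoost values you care about, so your own Experiments record real values rather than None placeholders and can be matched against each other:

experiment = CVExperiment(
    model_initializer=CatBoostClassifier,
    # Explicit values are recorded as-is, avoiding CatBoost's `None` signature placeholders
    model_init_params=dict(iterations=500, learning_rate=0.01, depth=7, allow_writing_files=False)
)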

hyperparameter_hunter's People

Contributors

beyondacm, huntermcgushion


hyperparameter_hunter's Issues

Leaderboard conflict with aliased, and non-aliased metrics

  • Resolve issue noted in leaderboards.GlobalLeaderboard.add_entry, where aliased metric names should be merged together based on their equivalent hashes
  • Currently, using SKLearn's roc_auc_score, then using the same function under an alias (like 'roc') would produce two separate columns: 'roc_auc_score', and 'roc'
    • This is despite the fact that the two metrics are, in fact, the same thing

Hide internally-used `experiments.BaseExperiment` methods

  • Hide the following methods of experiments.BaseExperiment that generally shouldn't be used by class instances:
    • additional_preparation_steps
    • initial_preprocessing
    • validate_parameters
    • validate_environment
    • clean_up
    • generate_experiment_id
    • generate_hyperparameter_key
    • create_script_backup
    • initialize_random_seeds
    • random_seed_initializer
    • update_model_params

UninformedOptimizationProtocols need `current_hyperparameters_list`

  • Add current_hyperparameters_list equivalent to optimization_core.UninformedOptimizationProtocol
  • See usages in optimization_core.InformedOptimizationProtocol for proper implementation
  • Only used by optimization_core.BaseOptimizationProtocol for logging in the _optimization_loop method (in which pertinent flag comments are located)
  • This bug breaks the children of UninformedOptimizationProtocol, which is a serious problem

Clean up `optimization_utils.AskingOptimizer.__init__`

  • Problem: skopt.optimizer.Optimizer.__init__ is copied almost verbatim by optimization_utils.AskingOptimizer.__init__, which is far from ideal
    • This is copied in order to make AskingOptimizer use hyperparameter_hunter.space.Space, rather than skopt.space.Space
  • Need way to tell skopt.optimizer.Optimizer.__init__ to use updated Space, or need to override the particular section of skopt.optimizer.Optimizer.__init__, in which skopt.space.Space is used
  • In its current state, any changes to skopt.optimizer.Optimizer.__init__ will be completely lost, and will need to be manually recreated
  • Solution still needs to accommodate __repeated_ask_kwargs, as noted in the pertinent todo comments and the original optimization_utils.AskingOptimizer.__init__, which is commented out above the current monstrosity

`tell` optimizer positive/negative utility values depending on `target_metric`

  • Update the following methods of optimization_core.InformedOptimizationProtocol:
    • _execute_experiment
    • _find_similar_experiments
  • The two aforementioned methods are the two locations at which optimization_core.InformedOptimizationProtocol.optimizer is "tell-ed" the utility value of a set of hyperparameters
  • Currently, a negative utility value is provided to optimizer, which will cause problems if target_metric should be minimized
    • This is the case when target_metric is some loss measure
  • Need to add a means of specifying that positive utility values should be used, instead of negative, or of detecting that target_metric measures loss

Keras dependence in `models`

  • Remove Keras dependence in models, unless keras.models.load_model required by models.KerasModel.fit
    • This will only be the case if models.KerasModel is actually in use
  • May need to use Keras import hooks from importer inside hyperparameter_hunter.__init__

Separate input/target data for `environment.Environment.__init__`

  • Add ability to provide separate input/target DataFrames for following environment.Environment.__init__ kwargs: train_dataset, holdout_dataset, and test_dataset
  • Accept NumPy arrays, instead of DataFrames
  • Alternative to providing the whole DataFrame, containing a target column

Documentation for `reporting.ReportingHandler`

  • Add documentation for the following reporting.ReportingHandler methods:
    • validate_parameters
    • configure_reporting_type
    • initialize_logging_logging
    • configure_console_logger_handler
    • configure_heartbeat_logger_handler
    • _logging_log
    • _logging_debug
    • _logging_warn

Remove Keras dependence in `key_handler`

  • Remove dependence on keras.callbacks.Callback
  • Only usage in key_handler.KeyMaker.handle_complex_types.visit function
  • Probably need to wire in import hooks, since Keras actually should be used here if a Keras model_initializer is given

Documentation for `models.KerasModel`

  • Finish documentation for the following methods of models.KerasModel:
    • __init__ (specifically the initialization_params and extra_params kwargs)
    • initialize_model
    • fit
    • get_input_dim
    • validate_keras_params
    • initialize_keras_neural_network

`models.XGBoostModel.fit` `eval_set` behavior

  • Remove the default inclusion of eval_set in models.XGBoostModel.fit per todo comment
  • This results in unexpectedly long execution times
  • models.XGBoostModel.fit has been commented out, meaning models.Model.fit is being used
  • The updated version of models.XGBoostModel.fit should still accommodate eval_set and eval_metric arguments

Perform Keras layer interception in project's `__init__.py`

  • Perform call to importer.hook_keras_layer near top of __init__.py
  • Currently called before any other imports
    • See examples.keras_example.py for current usage - This will need to be removed
  • Verify hook_keras_layer does not raise any exceptions if Keras has not been installed

Add default hyperparameter search ranges

  • Declare default hyperparameter ranges/selections for certain libraries/algorithms in files named for each library in the hyperparameter_hunter/library_helpers directory
  • These should be used by optimization_core.BaseOptimizationProtocol.add_default_options when completed by #31

Implement `optimization_core.BaseOptimizationProtocol.add_default_options`

  • Complete the optimization_core.BaseOptimizationProtocol.add_default_options method
  • This will need to play nice with the BaseOptimizationProtocol.hyperparameter_space attribute
    • Likely requires updating space.Space to reflect new default options being added to original dimensions (if InformedOptimizationProtocol)
  • The implemented add_default_options should leverage the default hyperparameter search ranges added in #30 for the hyperparameter provided as input and optimization_core.BaseOptimizationProtocol.model_initializer

`n_random_starts` broken in `optimization_core.SKOptimizationProtocol.__init__`

  • Make optimization_core.SKOptimizationProtocol.__init__.n_random_starts actually do something when specified
  • The kwarg is currently ignored if a sufficient number of experiment results have already been read in
    • This makes the SKOptimizationProtocol think the requirement has already been satisfied
  • Random starts are only actually executed when n_random_starts-many result files cannot be located

Keras learning rate recorded incorrectly when decay/scheduling callbacks used

  • See recorders.DescriptionRecorder.format_result
  • Model's get_config() returns the final learning rate, rather than the initial one, so experiment description files are misleading by not displaying the actual value used
  • Leads to failed similar experiment matches
  • Experiment started with Adam at lr=0.001, and ReduceLROnPlateau, which dropped the lr down to 0.0001
    • 0.0001 was recorded as the Experiment's lr, but it should be 0.001
  • Probably need to call parameterize_compiled_keras_model immediately after initializing it, then store the results, then use them in the DescriptionRecorder
    • Midway through experiments.BaseCVExperiment.cv_run_workflow, or in models.KerasModel.initialize_model/fit

Finish `leaderboards` documentation

  • Add documentation description for leaderboards.Leaderboard.__init__
  • Add documentation for leaderboards.GlobalLeaderboard.add_entry (See Leaderboard implementation)

Documentation for `optimization_core.BaseOptimizationProtocol`

  • Add documentation for following optimization_core.BaseOptimizationProtocol methods:
    • _optimization_loop
    • _update_current_hyperparameters
    • _set_hyperparameter_space
    • _get_current_hyperparameters
    • search_space_size (See InformedOptimizationProtocol implementation)

Finish `experiments.BaseExperiment.__init__` documentation

  • Add documentation for the target_metric kwarg of experiments.BaseExperiment.__init__
  • Label the following experiments.BaseExperiment.__init__ kwargs as experimental while in development: preprocessing_pipeline, preprocessing_params
