microsoft / nni

An open source AutoML toolkit that automates the machine learning lifecycle, including feature engineering, neural architecture search, model compression, and hyperparameter tuning.

Home Page: https://nni.readthedocs.io

License: MIT License

TypeScript 24.26% Python 73.55% JavaScript 1.21% HTML 0.03% CSS 0.04% Shell 0.11% Dockerfile 0.03% SCSS 0.62% PowerShell 0.15%
automl deep-learning neural-architecture-search hyperparameter-optimization distributed bayesian-optimization automated-machine-learning machine-learning machine-learning-algorithms data-science

nni's Introduction



NNI automates feature engineering, neural architecture search, hyperparameter tuning, and model compression for deep learning. Find the latest features, API reference, examples, and tutorials in our official documentation (a Simplified Chinese version is also available).


Installation

See the NNI installation guide to install from pip, or build from source.

To install the current release:

$ pip install nni

To update NNI to the latest version, add the --upgrade flag to the above command:

$ pip install --upgrade nni

NNI capabilities at a glance

NNI provides built-in algorithms for hyperparameter tuning, neural architecture search, and model compression, and runs trials on a range of supported frameworks and training services, with tutorials covering each area. A minimal trial sketch follows the framework list below.

Supported frameworks:
  • PyTorch
  • TensorFlow
  • Scikit-learn
  • XGBoost
  • LightGBM
  • MXNet
  • Caffe2
  • More...
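For orientation, the following is a minimal sketch of how a trial script typically plugs into NNI's Python API (nni.get_next_parameter and nni.report_final_result are the documented trial APIs); the search-space keys "lr" and "batch_size" and the train_and_eval function are illustrative assumptions, not part of this README.

import nni

# Hypothetical training routine; a stand-in for real model training.
def train_and_eval(lr, batch_size):
    return 0.9 - abs(lr - 0.01)  # fake validation accuracy

if __name__ == '__main__':
    # Receive the next hyperparameter set chosen by the tuner
    # (assumes a search space defining "lr" and "batch_size").
    params = nni.get_next_parameter()
    accuracy = train_and_eval(params['lr'], params['batch_size'])
    # Report the final metric of this trial back to NNI.
    nni.report_final_result(accuracy)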

[WebUI screenshot]

Resources

Contribution guidelines

If you want to contribute to NNI, be sure to review the contribution guidelines, which include instructions for submitting feedback, best coding practices, and the code of conduct.

We use GitHub issues to track feature requests and bugs. Please use NNI Discussion for general questions and new ideas. For questions about specific use cases, please go to Stack Overflow.

Participating in discussions via the following IM groups is also welcome.

[Gitter and WeChat QR codes]

Over the past few years, NNI has received thousands of pieces of feedback on GitHub issues, and pull requests from hundreds of contributors. We appreciate all contributions from the community that make NNI thrive.

Test status

Essentials

Type                      Status
Fast test                 [build status badge]
Full test - HPO           [build status badge]
Full test - NAS           [build status badge]
Full test - compression   [build status badge]

Training services

Type                          Status
Local - linux                 [build status badge]
Local - windows               [build status badge]
Remote - linux to linux       [build status badge]
Remote - windows to windows   [build status badge]
OpenPAI                       [build status badge]
FrameworkController           [build status badge]
Kubeflow                      [build status badge]
Hybrid                        [build status badge]
AzureML                       [build status badge]

Related Projects

Aiming at openness and advancing state-of-the-art technology, Microsoft Research (MSR) has also released a few other open source projects.

  • OpenPAI: an open source platform that provides complete AI model training and resource management capabilities; it is easy to extend and supports on-premise, cloud, and hybrid environments at various scales.
  • FrameworkController: an open source general-purpose Kubernetes Pod controller that orchestrates all kinds of applications on Kubernetes with a single controller.
  • MMdnn: a comprehensive, cross-framework solution to convert, visualize, and diagnose deep neural network models. The "MM" in MMdnn stands for model management and "dnn" is an acronym for deep neural network.
  • SPTAG: Space Partition Tree And Graph (SPTAG) is an open source library for large-scale approximate nearest neighbor search over vectors.
  • nn-Meter : An accurate inference latency predictor for DNN models on diverse edge devices.

We encourage researchers and students to leverage these projects to accelerate AI development and research.

License

The entire codebase is under MIT license.

nni's People

Contributors

acured, bonytu, chenbohua3, chicm-ms, colorjam, crysple, demianzhang, dependabot[bot], goooxu, hzhua, j-shang, jiahangxu, junweisun, kvartet, leckie-chn, lijiaoa, linbinskn, liuzhe-lz, lvybriage, purityfan, quanluzhang, scarlett2018, sparksnail, squirrelsc, suiguoxin, super-dainiu, ultmaster, xuehui1991, yds05, zheng-ningxin


nni's Issues

support search space recommendation

If a user provides a dataset and wants to find the best model among several candidate models, it would help to provide a proper search space for each model. To better support this scenario, a new component that recommends a good search space could be added to nni; a sketch of a per-model search space follows.
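For reference, a sketch of what a recommended per-model search space could look like, using nni's _type/_value search space format; the parameter names and ranges below are illustrative assumptions for one hypothetical candidate model.

# Mirrors the contents of a search_space.json file (shown here as a
# Python dict); names and ranges are hypothetical.
search_space = {
    "learning_rate": {"_type": "loguniform", "_value": [0.0001, 0.1]},
    "num_layers": {"_type": "choice", "_value": [2, 4, 8]},
    "dropout_rate": {"_type": "uniform", "_value": [0.1, 0.5]},
}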

no error information in some cases

If the validation in ./src/nni_manager/rest_server/restValidationSchemas.ts fails, the experiment fails. In this case there is no error information, and nnictl cannot find the experiment's information, such as the experiment ID.

~/.local/bin not added to path

When running in a Docker container as the root user, the path ~/.local/bin is not added to $PATH.
This may cause file-missing errors such as:
FileNotFoundError: [Errno 2] No such file or directory: 'serve'

Abnormal behavior when starting 2 experiments at the same time

Opened 2 terminals on the same machine and ran the command
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
at the same time.

Expected one of them to succeed and the other to fail, but actually encountered 2 cases:

  • case 1
    Both of the 2 commands show
    [screenshot]

  • case 2
    Both of the 2 commands seem to start successfully, but actually no experiment is running (the webui can't be opened)

Experiment didn't stop after encountering a critical error (user environment missing tensorflow)

Created a clean Azure VM (Ubuntu 16.04), followed the Getting Started instructions to try nni, and ran the following commands:

pip3 install -v --user git+https://github.com/Microsoft/[email protected]
source ~/.bashrc
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml

and found the following error in ~/nni/experiments/I4eOAq5y/trials/lis4f/stderr:

Traceback (most recent call last):
  File "mnist.py", line 6, in <module>
    import tensorflow as tf
ModuleNotFoundError: No module named 'tensorflow'

I found that this is because I missed installing the tensorflow requirement from the instructions, so I installed it with pip3 install tensorflow. After that, I tried to run the
nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml
command again, but it prompts:

Info: Checking experiment...
Error: There is an experiment running, please stop it first...
Info: You can use 'nnictl stop' command to stop an experiment!

I had to manually run nnictl stop to stop the experiment and create it again.
Why didn't the experiment stop after the critical error?

The WebUI is displayed with no content

The startup process seems OK, but there is no content on the web page.
[screenshot from 2018-09-17 12-46-55]

ubuntu@ubuntu-Super-Server:~$ nnictl create --config nni/examples/trials/mnist-annotation/config.yml
Info: Checking experiment...
Info: Starting restful server...
Info: Checking restful server...
Info: Restful server start success!
Info: Setting local config...
Info: Success!
Info: Starting experiment...
Info: Checking web ui...
Info: Starting web ui...
Info: Starting web ui success!
Info: Web UI url: http://127.0.0.1:8080 http://192.168.123.101:8080 http://172.17.0.1:8080
Info: Start experiment success! The experiment id is ImMmO066, and the restful server port is 51188.
You can use these commands to get more information about this experiment:
commands description

  1. nnictl experiment show show the information of experiments
  2. nnictl trial ls list all of trial jobs
  3. nnictl stop stop a experiment
  4. nnictl trial kill kill a trial job by id
  5. nnictl --help get help information about nnictl
  6. nnictl webui url get the url of web ui

stderr.txt

support the feature: multiple instances of SMAC3

SMAC3 supports multiple instances (e.g., different datasets): it evaluates every configuration based on multiple runs on different instances. This is an appealing feature, as the tuner could find good configurations that generalize well. A sketch of this evaluation scheme follows.
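A minimal sketch of the instance-based evaluation described above; run_config and the mean aggregation are illustrative assumptions, not SMAC3's actual implementation.

def run_config(config, instance):
    # Stand-in for training/evaluating one configuration on one
    # instance (e.g., one dataset); returns a score (assumption).
    return 0.0

def evaluate_config(config, instances):
    # Score a configuration by aggregating runs over all instances,
    # favoring configurations that generalize across them.
    scores = [run_config(config, instance) for instance in instances]
    return sum(scores) / len(scores)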

Ran into a problem running an experiment locally

According to GetStarted.md, as follows:

Install NNI through source code

git clone -b v0.1 https://github.com/Microsoft/nni.git
cd nni
chmod +x install.sh
source install.sh

I had successfully installed NNI, but when I ran an experiment locally with the command nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml, it shows an error, as follows:

Info: Checking experiment...
Info: Starting restful server...
Info: Checking restful server...
Error: Restful server start failed!

Could you help me?

refactor experiment stopping conditions and actions

  1. nnimanager: the experiment only stops when the rest server issues a stop command. The experiment will not stop when reaching maxDuration/maxTrialNum; it just stops submitting new trials, but can still receive commands from the webui and nnictl (e.g., change maxDuration/maxTrialNum, submit a customized trial, etc.).
  2. webui: provide buttons to extend the experiment duration, increase maxTrialNum, stop the experiment, and show the experiment status.

Need Contributing.md for future contributors

Hey, I was going through #114, trying to solve it and writing code for it, but later realized that the repository has no CONTRIBUTING document. Should I go ahead and make a starter one? You could suggest any changes you want in it. This would really help future contributors, giving them a reference when they need it.

Help for nnictl is confusing

After creating an experiment, I want to check its status.
First, I ran the command nnictl --help to see how I can check the status of the experiment:
[screenshot]
It suggests I can use nnictl experiment, but after I run the command nnictl experiment, it shows:
[screenshot]

It's just a loop; in the end I don't know how to check the status of the experiment.

Trial jobs need to get a unique ID value

For example, it is common practice to do a k-fold data split to improve model performance. In order to do a k-fold split, each of my trial jobs needs to train on a unique split; to do this, my trial jobs need to get a unique ID value.
In short, trial jobs need an ID if they need to cooperate on something.
There are two possible solutions to implement this:

  1. Use the search space: add a new type, such as sequence, which can be implemented in either a builtin or a customized Tuner.
  2. Add an API, such as nni.get_trial_id(); see the sketch after this list.
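A minimal sketch of how the proposed API from option 2 could support k-fold cooperation; nni.get_trial_id() is the suggested (not yet existing) API, assumed here to return an integer unique to each trial job, and K is an illustrative value.

import nni

K = 5  # number of folds (illustrative assumption)

# Proposed API from option 2 above, assumed to return a unique integer.
trial_id = nni.get_trial_id()

# Each cooperating trial trains on a different split of the data.
fold_index = trial_id % K
print('this trial trains on fold', fold_index)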

support package install through nnictl

nni will support more and more AutoML algorithms. It would be cumbersome to install all the supported algorithms during nni installation. We suggest adding a series of commands to nnictl to manage AutoML algorithms/packages, for example nnictl package [install, uninstall, update, ...].

support enas on nni

enas (https://arxiv.org/abs/1802.03268) is Efficient Neural Architecture Search, a fast and inexpensive approach to automatic model design. It would be great to support enas on nni, which would make tuning neural architectures much easier. By running enas on nni, this kind of model design could fully leverage the computational resources of the training services.

support multi-phase trial tuning

A trial may need multiple phases of hyperparameter/neural-architecture tuning. More specifically, in phase one the trial gets some hyperparameters, then runs and generates results. How the hyperparameters are chosen in phase two should depend on the results generated in phase one; likewise, the hyperparameters in phase three depend on the results of phase two, and so on. After all the phases are completed, the trial is finished. A sketch of this flow follows.
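A minimal sketch of the multi-phase flow described above, assuming a trial may call nni.get_next_parameter() once per phase and report after each phase; the three-phase loop and the evaluate function are illustrative assumptions.

import nni

def evaluate(params):
    # Stand-in for one phase of training/evaluation (assumption).
    return sum(v for v in params.values() if isinstance(v, (int, float)))

NUM_PHASES = 3  # illustrative assumption

result = None
for phase in range(NUM_PHASES):
    # Each call returns hyperparameters chosen by the tuner, which can
    # condition on the results reported in earlier phases.
    params = nni.get_next_parameter()
    result = evaluate(params)
    nni.report_intermediate_result(result)

# After all phases complete, the trial reports its final result.
nni.report_final_result(result)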

Uninstalling from pip is not supported

Followed the commands below to install nni:

pip3 install -v --user git+https://github.com/Microsoft/[email protected]
source ~/.bashrc

then tried to uninstall it with

pip3 uninstall NNI

After uninstalling, tried to run nnictl and got the message below:
[screenshot]

but expected:
[screenshot]

refactor nni.report_final_result()

In the current version, if users want to use builtin tuners, they are only allowed to report a single float number through nni.report_final_result(). But users may want to report additional numbers besides that float, and show those numbers on the webui.
To support this requirement, for builtin tuners we will also allow users to send a Python object (in the current version, nni.report_final_result already allows users to report an object/structure, but builtin tuners cannot parse the object). The object should have a key named "default" whose value is the float number for the tuner; users are free to add other key/values to this object. On the webui, users can choose the additional key/values to be shown in the trial status table. A sketch follows.
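A minimal sketch of the proposed reporting convention; the metric names other than "default" are illustrative assumptions.

import nni

# ... training elided; hypothetical metrics below ...
accuracy, f1, latency_ms = 0.93, 0.91, 12.5

# "default" is the float number consumed by the builtin tuner; the
# remaining key/values are extra numbers to show on the webui.
nni.report_final_result({
    'default': accuracy,
    'f1': f1,
    'latency_ms': latency_ms,
})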

support experiment status (SUCCESS, FAILED, etc.)

In the current version, nnictl supports viewing the experiment profile and all trials and their status. However, it is not convenient to tell whether the experiment is finished, either through nnictl or through the webui. It would be better to add one more command to nnictl, for example nnictl experiment status.

Remove node DeprecationWarning in log

(node:5681) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.

Experiment didn't stop after reaching max trial number

Tried to create an experiment following the basic example:

nnictl create --config ~/nni/examples/trials/mnist-annotation/config.yml

with the max trial number set to 1 in config.yml:

[screenshot]

but after the only trial succeeded, the experiment didn't stop.
Trying to create it again gives the error below:

[screenshot]

Got "Segmentation fault" when run make (reproducibility)

Followed the "How to contribute" document, and run the below commands:

  1. git clone https://github.com/Microsoft/nni.git
  2. make install-dependencies
  3. make build
  4. make dev-install
  5. nnictl create --config ~/nni/examples/trials/mnist/config.yml

Then changed some code, ran the 3rd and 4th commands again, and got the error below:

[screenshot]

support SMAC3

SMAC3 (https://github.com/automl/SMAC3) is a hyperparameter tuning algorithm that is particularly good at categorical hyperparameters; auto-sklearn also uses SMAC3 as its hyperparameter tuner. nni will support SMAC3 as a new tuner, so that users can choose SMAC3 to tune their models.

show more readable content in the generated search space file

In v0.1 and v0.2, if users use nni annotation in their trial code, for example @nni.function_choice, nni uses numbers to designate the candidate functions from @nni.function_choice in the generated search space file. Thus, in the webui, users only see numbers instead of the real function names, which is very unfriendly. It would be great to use the function names in the generated search space file. A sketch of such an annotation follows.
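For context, a minimal sketch of what an @nni.function_choice annotation looks like in trial code; max_pool, avg_pool, and h_conv1 are hypothetical user definitions, and the annotation syntax follows nni's annotation examples.

# Hypothetical candidate functions the annotation chooses between.
def max_pool(x):
    return max(x)

def avg_pool(x):
    return sum(x) / len(x)

h_conv1 = [1.0, 2.0, 3.0]  # stand-in for a layer output

# nni annotation: the tuner picks one candidate function per trial.
'''@nni.function_choice(max_pool(h_conv1), avg_pool(h_conv1), name=max_pool)'''
h_pool1 = max_pool(h_conv1)  # default used when annotations are disabled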

No debugging mode

I want to contribute code to nni_manager, but don't know how to run it in debug mode.
The only way is to install nni first, then replace some code in the distributed folder.
This needs to be friendlier for contributors.

Installation of nni from source code on an Azure B1s VM gets stuck

Created a B1s-type Azure VM (Ubuntu 16.04), and tried to follow the commands

git clone https://github.com/Microsoft/[email protected]
cd nni
make easy-install

to install nni, but it got stuck; the last message is

$ react-app-rewired build --scripts-version react-scripts-ts
    Creating an optimized production build...
    Starting type checking and linting service...
    Using 1 worker with 2048MB memory limit
    ts-loader: Using [email protected] and /tmp/pip-vcid8cpq-build/src/webui/tsconfig.json
    No valid rules have been specified
