
mlflow-torchserve's Introduction

MLflow: A Machine Learning Lifecycle Platform

MLflow is a platform to streamline machine learning development, including tracking experiments, packaging code into reproducible runs, and sharing and deploying models. MLflow offers a set of lightweight APIs that can be used with any existing machine learning application or library (TensorFlow, PyTorch, XGBoost, etc), wherever you currently run ML code (e.g. in notebooks, standalone applications or the cloud). MLflow's current components are:

  • MLflow Tracking: An API to log parameters, code, and results in machine learning experiments and compare them using an interactive UI.
  • MLflow Projects: A code packaging format for reproducible runs using Conda and Docker, so you can share your ML code with others.
  • MLflow Models: A model packaging format and tools that let you easily deploy the same model (from any ML library) to batch and real-time scoring on platforms such as Docker, Apache Spark, Azure ML and AWS SageMaker.
  • MLflow Model Registry: A centralized model store, set of APIs, and UI, to collaboratively manage the full lifecycle of MLflow Models.


Packages

PyPI: mlflow, mlflow-skinny
conda-forge: mlflow, mlflow-skinny
CRAN: mlflow
Maven Central: mlflow-client, mlflow-parent, mlflow-scoring, mlflow-spark


Installing

Install MLflow from PyPI via pip install mlflow

MLflow requires conda to be on the PATH for the projects feature.

Nightly snapshots of MLflow master are also available here.

Install a lower-dependency subset of MLflow from PyPI via pip install mlflow-skinny. Extra dependencies can be added per desired scenario. For example, pip install mlflow-skinny pandas numpy allows for mlflow.pyfunc.log_model support.
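
As a minimal sketch of what that scenario enables (the AddN model below is illustrative, following the custom-pyfunc pattern from the MLflow docs; with mlflow-skinny plus pandas and numpy installed it should log without the full mlflow package):

import mlflow
import mlflow.pyfunc

# Illustrative custom pyfunc model; requires `pip install mlflow-skinny pandas numpy`.
class AddN(mlflow.pyfunc.PythonModel):
    """Adds a fixed constant to every value of the input DataFrame."""

    def __init__(self, n):
        self.n = n

    def predict(self, context, model_input):
        return model_input.apply(lambda column: column + self.n)

with mlflow.start_run():
    # Logs the model as an artifact of the active run (under ./mlruns by default).
    mlflow.pyfunc.log_model(artifact_path="add_n_model", python_model=AddN(n=5))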

Documentation

Official documentation for MLflow can be found at https://mlflow.org/docs/latest/index.html.

Roadmap

The current MLflow Roadmap is available at https://github.com/mlflow/mlflow/milestone/3. We are seeking contributions to all of our roadmap items with the help wanted label. Please see the Contributing section for more information.

Community

For help or questions about MLflow usage (e.g. "how do I do X?") see the docs or Stack Overflow.

To report a bug, file a documentation issue, or submit a feature request, please open a GitHub issue.

For release announcements and other discussions, please subscribe to our mailing list ([email protected]) or join us on Slack.

Running a Sample App With the Tracking API

The programs in examples use the MLflow Tracking API. For instance, run:

python examples/quickstart/mlflow_tracking.py

This program will use the MLflow Tracking API, which logs tracking data in ./mlruns. This can then be viewed with the Tracking UI.
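
A rough sketch of what such a tracking script does (adapted, not the file's exact contents): it logs a parameter, a metric over several steps, and a small text artifact, all of which land in ./mlruns:

import os

import mlflow

# Log a hyperparameter, a metric logged several times, and an artifact directory.
mlflow.log_param("param1", 5)
mlflow.log_metric("foo", 1)
mlflow.log_metric("foo", 2)
mlflow.log_metric("foo", 3)

os.makedirs("outputs", exist_ok=True)
with open("outputs/test.txt", "w") as f:
    f.write("hello world!")
mlflow.log_artifacts("outputs")  # copied into ./mlruns for viewing in the Tracking UI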

Launching the Tracking UI

The MLflow Tracking UI will show runs logged in ./mlruns at http://localhost:5000. Start it with:

mlflow ui

Note: Running mlflow ui from within a clone of MLflow is not recommended - doing so will run the dev UI from source. We recommend running the UI from a different working directory, specifying a backend store via the --backend-store-uri option. Alternatively, see instructions for running the dev UI in the contributor guide.

Running a Project from a URI

The mlflow run command lets you run a project packaged with an MLproject file from a local path or a Git URI:

mlflow run examples/sklearn_elasticnet_wine -P alpha=0.4

mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.4

See examples/sklearn_elasticnet_wine for a sample project with an MLproject file.
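
The same run can also be launched from Python through the projects API; a minimal sketch mirroring the Git-URI example above (the alpha value is simply the one used in the CLI examples):

import mlflow.projects

# Python equivalent of `mlflow run https://github.com/mlflow/mlflow-example.git -P alpha=0.4`.
submitted_run = mlflow.projects.run(
    uri="https://github.com/mlflow/mlflow-example.git",
    parameters={"alpha": 0.4},
)
print(submitted_run.run_id)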

Saving and Serving Models

To illustrate managing models, the mlflow.sklearn package can log scikit-learn models as MLflow artifacts and then load them again for serving. There is an example training application in examples/sklearn_logistic_regression/train.py that you can run as follows:

$ python examples/sklearn_logistic_regression/train.py
Score: 0.666
Model saved in run <run-id>

$ mlflow models serve --model-uri runs:/<run-id>/model

$ curl -d '{"dataframe_split": {"columns":[0],"index":[0,1],"data":[[1],[-1]]}}' -H 'Content-Type: application/json'  localhost:5000/invocations

Note: If using MLflow skinny (pip install mlflow-skinny) for model serving, additional required dependencies (namely, flask) will need to be installed for the MLflow server to function.
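
For reference, a rough Python sketch of the same train-then-query flow (the toy data, model, and port below are illustrative; the port must match whatever mlflow models serve reports, and the serve command still has to be run separately):

import mlflow
import mlflow.sklearn
import requests
from sklearn.linear_model import LogisticRegression

# Train and log a toy scikit-learn model.
X, y = [[1.0], [2.0], [3.0]], [0, 0, 1]
with mlflow.start_run() as run:
    model = LogisticRegression().fit(X, y)
    mlflow.sklearn.log_model(model, "model")  # available as runs:/<run_id>/model
    print("Model saved in run", run.info.run_id)

# After `mlflow models serve --model-uri runs:/<run_id>/model`, query it over REST.
payload = {"dataframe_split": {"columns": [0], "index": [0, 1], "data": [[1], [-1]]}}
response = requests.post("http://localhost:5000/invocations", json=payload)
print(response.json())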

Official MLflow Docker Image

The official MLflow Docker image is available on GitHub Container Registry at https://ghcr.io/mlflow/mlflow.

export CR_PAT=YOUR_TOKEN
echo $CR_PAT | docker login ghcr.io -u USERNAME --password-stdin
# Pull the latest version
docker pull ghcr.io/mlflow/mlflow
# Pull 2.2.1
docker pull ghcr.io/mlflow/mlflow:v2.2.1

Contributing

We happily welcome contributions to MLflow. We are also seeking contributions to items on the MLflow Roadmap. Please see our contribution guide to learn more about contributing to MLflow.

Core Members

MLflow is currently maintained by a group of core members, with significant contributions from hundreds of exceptionally talented community members.

mlflow-torchserve's People

Contributors

ankan94, arvind-ideas2it, chauhang, hamidshojanazeri, harupy, karthik-77, kasirajana, shrinath-suresh


mlflow-torchserve's Issues

[FR] Compatibility with MLflow 2.0

Thank you for submitting a feature request. Before proceeding, please review MLflow's Issue Policy for feature requests and the MLflow Contributing Guide.

Please fill in this feature request template to ensure a timely and thorough response.

Willingness to contribute

The MLflow Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature (as an enhancement to the MLflow TorchServe Deployment plugin code base)?

  • Yes. I can contribute this feature independently.
  • Yes. I would be willing to contribute this feature with guidance from the MLflow community.
  • No. I cannot contribute this feature at this time.

Proposal Summary

In MLflow 2.0 (scheduled for release on Nov. 14), we will be making small modifications to the MLflow Model Server's RESTful scoring protocol (documented here: https://output.circle-artifacts.com/output/job/bb07270e-1101-421c-901c-01e72bc7b6df/artifacts/0/docs/build/html/models.html#deploy-mlflow-models) and the MLflow Deployment Client predict() API (documented here: https://output.circle-artifacts.com/output/job/bb07270e-1101-421c-901c-01e72bc7b6df/artifacts/0/docs/build/html/python_api/mlflow.deployments.html#mlflow.deployments.BaseDeploymentClient.predict).

For compatibility with MLflow 2.0, the mlflow-torchserve plugin will need to be updated to conform to the new scoring protocol and Deployment Client interface. The MLflow maintainers are happy to assist with this process, and we apologize for the short notice.
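
For context, the plugin is exercised through the generic deployment client interface whose predict() signature is changing; a rough sketch of such a call today (the deployment name and input are placeholders, and the exact input types accepted under the 2.0 protocol are what this request is about):

import pandas as pd
from mlflow.deployments import get_deploy_client

# "torchserve" resolves to this plugin; the deployment name is a placeholder.
client = get_deploy_client("torchserve")
prediction = client.predict("news_classification_test", pd.DataFrame({"data": ["sample input text"]}))
print(prediction)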

Motivation

  • What is the use case for this feature? Provide a richer, more extensible scoring protocol and broaden the deployment client prediction interface beyond dataframe inputs.
  • Why is this use case valuable to support for MLflow TorchServe Deployment plugin users in general? Necessary for compatibility with MLflow 2.0.
  • Why is this use case valuable to support for your project(s) or organization? ^
  • Why is it currently difficult to achieve this use case? Without these changes, the mlflow-torchserve plugin will break in MLflow 2.0.

What component(s) does this feature affect?

Components

  • area/deploy: Main deployment plugin logic
  • area/build: Build and test infrastructure for MLflow TorchServe Deployment Plugin
  • area/docs: MLflow TorchServe Deployment Plugin documentation pages
  • area/examples: Example code

Error while deploying with torchserve - Exception: Unable to create mar file

Hi,
I'm using the 'BertNewsClassification' example to try torchserve with mlflow. But while deploying the model, I'm getting this error: Exception: Unable to create mar file. Here is the full trace of the error -

mlflow deployments create -t torchserve -m file:///C:/Users/saichandra.pandraju/Desktop/torch_models/ --name news_classification_test -C "VERSION=1.0" -C "MODEL_FILE=news_classifier.py" -C "HANDLER=news_classifier_handler.py" -C "EXPORT_PATH=C:\Users\saichandra.pandraju\Desktop\torch_serve\model_store\"
ERROR - Given export-path C:\Users\saichandra.pandraju\Desktop\torch_serve\model_store --model-file news_classifier.py --extra-files 'C:\Users\saichandra.pandraju\Desktop\torch_models\extra_files/class_mapping.json,C:\Users\saichandra.pandraju\Desktop\torch_models\extra_files/bert_base_uncased_vocab.txt' -r C:\Users\saichandra.pandraju\Desktop\torch_models\requirements.txt is not a directory. Point to a valid export-path directory.
Error when attempting to load and parse JSON cluster spec from file torch-model-archiver --force --model-name news_classification_test --version 1.0 --serialized-file C:\Users\saichandra.pandraju\Desktop\torch_models\data\model.pth --handler news_classifier_handler.py --export-path C:\Users\saichandra.pandraju\Desktop\torch_serve\model_store" --model-file news_classifier.py --extra-files 'C:\Users\saichandra.pandraju\Desktop\torch_models\extra_files/class_mapping.json,C:\Users\saichandra.pandraju\Desktop\torch_models\extra_files/bert_base_uncased_vocab.txt' -r C:\Users\saichandra.pandraju\Desktop\torch_models\requirements.txt
Traceback (most recent call last):
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\saichandra.pandraju\.conda\envs\mlflow\Scripts\mlflow.exe\__main__.py", line 7, in <module>
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 1137, in __call__
    return self.main(*args, **kwargs)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 1062, in main
    rv = self.invoke(ctx)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 1668, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\click\core.py", line 763, in invoke
    return __callback(*args, **kwargs)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\mlflow\deployments\cli.py", line 132, in create_deployment
    deployment = client.create_deployment(name, model_uri, flavor, config=config_dict)
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\mlflow_torchserve\__init__.py", line 104, in create_deployment
    model_uri=model_uri,
  File "c:\users\saichandra.pandraju\.conda\envs\mlflow\lib\site-packages\mlflow_torchserve\__init__.py", line 381, in __generate_mar_file
    raise Exception("Unable to create mar file")
Exception: Unable to create mar file

Please let me know how to proceed further.

MLflow model deployments error when deploying PyTorch model from GCS bucket - ModuleNotFoundError: No module named 'models'

Willingness to contribute

The MLflow Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the MLflow code base?

  • Yes. I can contribute a fix for this bug independently.
  • Yes. I would be willing to contribute a fix for this bug with guidance from the MLflow community.
  • No. I cannot contribute a bug fix at this time.

System information

  • Have I written custom code (as opposed to using a stock example script provided in MLflow): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 18.04): Debian 4.19.194-3 (2021-07-18) x86_64 GNU/Linux
  • MLflow installed from (source or binary): binary
  • MLflow version (run mlflow --version): 1.19.0
  • MLflow TorchServe Deployment plugin installed from (source or binary): binary
  • MLflow TorchServe Deployment plugin version (run mlflow deployments --version): 0.1.0
  • TorchServe installed from (source or binary): binary
  • TorchServe version (run torchserve --version): 0.4.2
  • Python version: 3.9.6
  • Exact command to reproduce: mlflow deployments create -t torchserve -m gs://<model_bucket>/models/classnet/48d548cc841d4c2b9a06e975dec88c8e/artifacts/classnet_model --name classnet -C 'MODEL_FILE=models/classnet.py' -C 'HANDLER=model_handler.py' -C 'EXTRA_FILES=transforms.py,artifacts/models/desnse_depth.pt,models/dense_depth.py'

Describe the problem

I have trained a custom PyTorch model for an image classification problem. The model is logged to a Google Cloud Storage bucket. When I try to deploy the model to torchserve, I get a ModuleNotFoundError: No module named 'models' error. From what I understand, mlflow.pytorch.log_model() calls torch.save(model) internally, which creates a dependency on the directory structure (see issue 18325).
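
To illustrate the mechanism (a rough, self-contained sketch with made-up module and class names, using plain pickle, which is what torch.save relies on for the model object): the pickle stores the class by module path rather than by value, so the training-time module layout must be importable wherever the model is loaded.

import pickle
import sys
import types

# Stand-in for a project-local module such as models/classnet.py defining ClassNet.
classnet_mod = types.ModuleType("models.classnet")
class ClassNet:
    pass
ClassNet.__module__ = "models.classnet"
classnet_mod.ClassNet = ClassNet
sys.modules["models"] = types.ModuleType("models")
sys.modules["models.classnet"] = classnet_mod

payload = pickle.dumps(ClassNet())  # stores the reference "models.classnet.ClassNet", not the code

# Simulate the TorchServe worker, where no `models` package is importable.
del sys.modules["models"], sys.modules["models.classnet"]
try:
    pickle.loads(payload)
except ModuleNotFoundError as exc:
    print(exc)  # No module named 'models'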

Code to reproduce issue

I have saved the MLflow model to a GCS bucket using the script below:
mlflow.pytorch.log_model(model, "{}_model".format('livenet'))

The model is deployed using the command below:
mlflow deployments create -t torchserve -m gs://<model_bucket>/models/classnet/48d548cc841d4c2b9a06e975dec88c8e/artifacts/classnet_model --name classnet -C 'MODEL_FILE=models/classnet.py' -C 'HANDLER=model_handler.py' -C 'EXTRA_FILES=transforms.py,artifacts/models/desnse_depth.pt,models/dense_depth.py'

Other info / logs

2021-08-30 10:13:37,521 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG - /tmp/models/555ed568ad5f4fb4a4ebe1b231e298fb/model.pth
2021-08-30 10:13:37,523 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG - <class 'livenet.LiveNet'>
2021-08-30 10:13:38,338 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG - Backend worker process died.
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG - Traceback (most recent call last):
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/model_service_worker.py", line 183, in <module>
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     worker.run_server()
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/model_service_worker.py", line 155, in run_server
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     self.handle_connection(cl_socket)
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/model_service_worker.py", line 117, in handle_connection
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     service, result, code = self.load_model(msg)
2021-08-30 10:13:38,339 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/model_service_worker.py", line 90, in load_model
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     service = model_loader.load(model_name, model_dir, handler, gpu, batch_size, envelope)
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/model_loader.py", line 110, in load
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     initialize_fn(service.context)
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/torch_handler/vision_handler.py", line 20, in initialize
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     super().initialize(context)
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/torch_handler/base_handler.py", line 69, in initialize
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     self.model = self._load_pickled_model(model_dir, model_file, model_pt_path)
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/ts/torch_handler/base_handler.py", line 133, in _load_pickled_model
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     state_dict = torch.load(model_pt_path, map_location=self.device)
2021-08-30 10:13:38,340 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/torch/serialization.py", line 607, in load
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/torch/serialization.py", line 882, in _load
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     result = unpickler.load()
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -   File "/opt/conda/envs/vkyc/lib/python3.9/site-packages/torch/serialization.py", line 875, in find_class
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG -     return super().find_class(mod_name, name)
2021-08-30 10:13:38,341 [INFO ] W-9000-spoofnet_1.0-stdout MODEL_LOG - ModuleNotFoundError: No module named 'models'

What component(s) does this bug affect?

Components

  • area/deploy: Main deployment plugin logic
  • area/build: Build and test infrastructure for MLflow TorchServe Deployment Plugin
  • area/docs: MLflow TorchServe Deployment Plugin documentation pages
  • area/examples: Example code

[DOC-FIX] `model_store` different between torchserve and mlflow-torchserve

When first trying to get the BertNewsClassification example working, I started torchserve in the root directory, then tried to run mlflow deployments create in the examples directory. This resulted in the model_store directories being different between torchserve and mlflow-torchserve, which resulted in an Unable to register the model error. However, the error messages printed out by torchserve didn’t point me towards this obvious mistake. I think one simple way to guide new users towards realising this disconnect before making it would be to add an example where the EXPORT_PATH is explicitly set:

mlflow deployments create -t torchserve -m <model uri> --name DEPLOYMENT_NAME -C 'MODEL_FILE=<model file path>' -C 'HANDLER=<handler file path>' -C 'EXPORT_PATH=<torchserve model_store path>'
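
For completeness, the equivalent call through the Python deployment client, with the same placeholders, would look roughly like this (EXPORT_PATH being the model_store directory torchserve was started with):

from mlflow.deployments import get_deploy_client

# Placeholders as in the CLI example above; EXPORT_PATH must match torchserve's model_store.
client = get_deploy_client("torchserve")
client.create_deployment(
    name="DEPLOYMENT_NAME",
    model_uri="<model uri>",
    config={
        "MODEL_FILE": "<model file path>",
        "HANDLER": "<handler file path>",
        "EXPORT_PATH": "<torchserve model_store path>",
    },
)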

We ran into this problem again when containerising our setup, and again the unhelpful torchserve errors didn’t guide us to the problem; we only realised the issue because we had made the above mistake earlier. Would it be worth adding a warning here https://github.com/mlflow/mlflow-torchserve/blob/master/mlflow_torchserve/__init__.py#L352 to let users know that the directory is being created and might not be doing what they want?
Would there be any other way to sanity-check that the torchserve model_store directory lines up with the one being used by mlflow-torchserve, to catch these errors before even starting? Or should it fall to torchserve to give a better warning that the model_store is empty?

Here is the error output from torchserve and mlflow-torchserve when this happens:

2021-03-02 21:50:30,422 [INFO ] epollEventLoopGroup-3-1 ACCESS_LOG - /127.0.0.1:33028 "GET /models/news_classification_test/all HTTP/1.1" 404 9
2021-03-02 21:50:30,422 [INFO ] epollEventLoopGroup-3-1 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:john-XPS-13-9370,timestamp:null
/home/john/Documents/mlops-engineeringlab/mlflow-torchserve/examples/BertNewsClassification/model_store/news_classification_test.mar file generated successfully
2021-03-02 21:50:51,542 [INFO ] epollEventLoopGroup-3-2 ACCESS_LOG - /127.0.0.1:33032 "POST /models?url=/home/john/Documents/mlops-engineeringlab/mlflow-torchserve/examples/BertNewsClassification/model_store/news_classification_test.mar&initial_workers=1 HTTP/1.1" 404 5
2021-03-02 21:50:51,542 [INFO ] epollEventLoopGroup-3-2 TS_METRICS - Requests4XX.Count:1|#Level:Host|#hostname:john-XPS-13-9370,timestamp:null
Traceback (most recent call last):
  File "/home/john/anaconda3/envs/torchserve/bin/mlflow", line 8, in <module>
    sys.exit(cli())
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/mlflow/deployments/cli.py", line 132, in create_deployment
    deployment = client.create_deployment(name, model_uri, flavor, config=config_dict)
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/mlflow_torchserve/__init__.py", line 114, in create_deployment
    self.__register_model(
  File "/home/john/anaconda3/envs/torchserve/lib/python3.9/site-packages/mlflow_torchserve/__init__.py", line 408, in __register_model
    raise Exception("Unable to register the model")
Exception: Unable to register the model

Willingness to contribute

The MLflow Community encourages documentation fix contributions. Would you or another member of your organization be willing to contribute a fix for this documentation issue to the MLflow TorchServe Deployment plugin code base?

  • Yes. I can contribute a documentation fix independently.
  • Yes. I would be willing to contribute a document fix with guidance from the MLflow community.
  • No. I cannot contribute a documentation fix at this time.

URL(s) with the issue:

https://github.com/mlflow/mlflow-torchserve#create-deployment

Description of proposal (what needs changing):

  • Add an explicit mention of EXPORT_PATH in the documentation to highlight the fact that the torchserve model_store directory needs to line up with the directory mlflow-torchserve is run in.
  • Potentially add a warning or info when the plugin is creating the model_store directory
  • Are there any ways to sanity check the model_store for the plugin lines up with the running torchserve instance?
