Comments (3)
Hello Martin!
No particular reason, I just wanted to understand how flexible ClearML Serving is. Regarding your second question, I already got a good outcome from our Slack conversation and from #32.
Once you have the new example with custom engine, let me know! Thanks :)
from clearml-serving.
Quick update @Bunoviske
You can now find a custom serving example here 😃
Hi @Bunoviske
I checked the pipeline example where you use a custom engine.
Yes, it is on the to-do list to add a specific custom model serving example. I'll update here once it is up.
What if I want to run normal pytorch inference without any engine?
You can just run "pytorch inference" as a custom engine (basically import pytorch and run the model). That said, Triton is much faster than vanilla PyTorch for inference, and it works basically out of the box. Any reason for manually running the model inference?
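To make the "pytorch inference as a custom engine" idea concrete, here is a minimal sketch of a `preprocess.py`-style class. The method names and signatures follow the clearml-serving custom-engine convention as I understand it, but treat them (and the JSON body key `"x"`) as assumptions; check the repo's custom example for the exact interface. PyTorch is imported lazily so the pre/post steps stay framework-agnostic.

```python
# Sketch of a custom-engine preprocess class that runs vanilla PyTorch
# inference instead of Triton. Signatures and body keys are assumptions.
import numpy as np


class Preprocess(object):
    def __init__(self):
        # the serving instance creates this object once per endpoint
        self.model = None

    def load(self, local_file_name):
        # called once with the locally downloaded model file;
        # torch is imported lazily, only when a model is actually loaded
        import torch
        self.model = torch.jit.load(local_file_name)
        self.model.eval()
        return self.model

    def preprocess(self, body, state, collect_custom_statistics_fn=None):
        # REST body -> numpy batch (assumes a payload like {"x": [[...]]})
        return np.asarray(body["x"], dtype=np.float32)

    def process(self, data, state, collect_custom_statistics_fn=None):
        # plain PyTorch forward pass, no Triton involved
        import torch
        with torch.no_grad():
            return self.model(torch.from_numpy(data)).numpy()

    def postprocess(self, data, state, collect_custom_statistics_fn=None):
        # numpy output -> JSON-serializable response
        return {"y": data.tolist()}
```

The lazy `import torch` also means the numpy-only pre/post steps can be unit-tested without a GPU or even PyTorch installed.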
Customizable RestAPI for serving (i.e. allow per-model pre/post-processing for easy integration). How can I actually customize the RestAPI?
The idea is that you can run any Python code as pre/post-processing, which lets you customize the entire API. For example, the RestAPI receives a URL as a string, the preprocessing downloads it and returns a numpy object, and then inference runs on the image.
What exactly are you aiming for with the customization?
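The download-then-infer flow described above can be sketched as below. This is a hypothetical illustration, not clearml-serving's actual API: the body key `"url"`, the target size, and the NCHW float32 layout are all assumptions, and Pillow (imported lazily) is one possible decoder.

```python
# Hedged sketch: REST body carries only a URL string; preprocessing
# downloads the image and turns it into a numpy batch for inference.
import io
import urllib.request

import numpy as np


def fetch_bytes(url, opener=urllib.request.urlopen):
    # download the raw image bytes; opener is injectable for testing
    with opener(url) as resp:
        return resp.read()


def normalize(pixels):
    # HWC uint8 image -> NCHW float32 batch of one, scaled to [0, 1]
    arr = np.asarray(pixels, dtype=np.float32) / 255.0
    return arr.transpose(2, 0, 1)[None]


def bytes_to_array(raw, size=(224, 224)):
    # decode and resize with Pillow (assumption: model wants 224x224 RGB)
    from PIL import Image
    img = Image.open(io.BytesIO(raw)).convert("RGB").resize(size)
    return normalize(img)


def preprocess(body):
    # REST payload {"url": "https://..."} -> numpy batch for the model
    return bytes_to_array(fetch_bytes(body["url"]))
```

The same pattern works for any payload the endpoint should accept: the pre-processing code owns the translation from wire format to model input, so the model itself never changes.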