Comments (6)
Is shared memory, in terms of NVIDIA Triton, different from CUDA shared memory?
Yes. CUDA shared memory is the Triton term for transferring CUDA tensors between client and server without having to pass them over the network.
But what is the reason for the increased performance?
The reason for the performance improvement is that you don't have to transfer the tensor over the network. The benefit is more significant with larger tensors.
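For illustration (not from the thread), a minimal sketch of how a client can register a CUDA shared-memory region with the Triton Python gRPC client so the server reads the tensor directly from GPU memory. The URL matches the perf_analyzer runs below; the region name, byte size, and device id are placeholders:

```python
import tritonclient.grpc as grpcclient
import tritonclient.utils.cuda_shared_memory as cudashm

client = grpcclient.InferenceServerClient(url="triton:8500")  # gRPC endpoint used in the perf_analyzer runs below

# Allocate a region in GPU memory (device 0) and register it with the server.
# Requests can then reference the region by name instead of sending the bytes over the network.
byte_size = 4 * 1024 * 1024  # placeholder size; must be large enough for the tensor
handle = cudashm.create_shared_memory_region("input_region", byte_size, 0)
client.register_cuda_shared_memory("input_region", cudashm.get_raw_handle(handle), 0, byte_size)
```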
Thank you for the answer @Tabrizian!
perf_analyzer -m defect-classifier -u triton:8500 -i gRPC --concurrency-range=1
Concurrency: 1, throughput: 184.18 infer/sec, latency 5425 usec
vs
perf_analyzer -m defect-classifier -u triton:8500 -i gRPC --shared-memory=cuda --concurrency-range=1
Concurrency: 1, throughput: 555.442 infer/sec, latency 1797 usec
It seems to make a huge difference in our use case. However, we never seem to be able to get the same throughput from our own Python client. Are there any best practices for client implementations in Python or C++ to achieve results similar to perf_analyzer? Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?
Does perf_analyzer timing also include transferring data back to the CPU when CUDA shared memory is used?
@matthewkotila / @tgerdesnv do you know whether Perf Analyzer includes the time to copy data to CUDA shared memory?
@NikeNano For Python clients, did you also use CUDA shared memory?
@NikeNano: Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?
I'm not sure if I understand. The calculation for throughput simply counts how many inferences (request-response sets) were completed during a period of time, and divides by the period of time. Everything that has to happen in order for the inference to complete (including CUDA shared memory, CPU transfers, etc) is inherently included in that measurement.
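As a quick sanity check (not from the thread): at concurrency 1 there is a single request in flight, so throughput should be roughly the inverse of latency, which is consistent with the numbers reported above.

```python
# At concurrency 1 a single request is in flight, so throughput ~= 1 / latency.
for latency_usec, reported in [(5425, 184.18), (1797, 555.442)]:
    print(f"1e6 / {latency_usec} usec = {1e6 / latency_usec:.1f} infer/sec (reported: {reported})")
# -> ~184.3 and ~556.5 infer/sec, matching the perf_analyzer output above
```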
@NikeNano For Python clients, did you also use CUDA shared memory?
Yes. We are trying to reimplement it in C++ as well, but our feeling now is that we are somehow bottlenecked and are very far from the perf_analyzer performance.
@NikeNano: Does perf_analyzer throughput also include transferring data back to the CPU when CUDA shared memory is used?
I'm not sure if I understand. The calculation for throughput simply counts how many inferences (request-response sets) were completed during a period of time, and divides by the period of time. Everything that has to happen in order for the inference to complete (including CUDA shared memory, CPU transfers, etc) is inherently included in that measurement.
Questions for clarification when using perf_analyzer with CUDA shared memory (based on the Python example):
- Do we allocate CUDA memory once, with a single call to cudashm.create_shared_memory_region, or reallocate for each request?
- Do we move data from the CPU to the GPU region for each request with cudashm.set_shared_memory_region(shm_ip0_handle, [data])?
- Do we move the data back from the GPU to the CPU for each request with cudashm.get_contents_as_numpy?
Based on your previous answer, @matthewkotila, I understand that the answer is yes (see the sketch after this comment).
Thanks for the help.
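For reference, a rough sketch of that pattern with the tritonclient CUDA shared-memory utilities: the regions are created and registered once, while each request copies the input into the region and reads the output back out. The model name comes from the thread, but the tensor names, shapes, dtypes, and region sizes are placeholders, not taken from the actual model:

```python
import numpy as np
import tritonclient.grpc as grpcclient
import tritonclient.utils.cuda_shared_memory as cudashm
from tritonclient.utils import triton_to_np_dtype

# Placeholders: tensor names, shapes, dtypes and sizes must match your model config.
MODEL, INPUT_NAME, OUTPUT_NAME = "defect-classifier", "INPUT__0", "OUTPUT__0"
in_shape, in_dtype = [1, 3, 224, 224], np.float32
in_bytes = int(np.prod(in_shape)) * np.dtype(in_dtype).itemsize
out_bytes = 1000 * np.dtype(np.float32).itemsize  # placeholder output size

client = grpcclient.InferenceServerClient(url="triton:8500")

# 1) Allocate and register the CUDA shared-memory regions once, outside the request loop.
in_handle = cudashm.create_shared_memory_region("in_region", in_bytes, 0)
out_handle = cudashm.create_shared_memory_region("out_region", out_bytes, 0)
client.register_cuda_shared_memory("in_region", cudashm.get_raw_handle(in_handle), 0, in_bytes)
client.register_cuda_shared_memory("out_region", cudashm.get_raw_handle(out_handle), 0, out_bytes)

inp = grpcclient.InferInput(INPUT_NAME, in_shape, "FP32")
inp.set_shared_memory("in_region", in_bytes)
out = grpcclient.InferRequestedOutput(OUTPUT_NAME)
out.set_shared_memory("out_region", out_bytes)

batches = [np.random.rand(*in_shape).astype(in_dtype) for _ in range(8)]  # dummy data
for batch in batches:
    # 2) Per request: copy the input from host memory into the GPU region ...
    cudashm.set_shared_memory_region(in_handle, [batch])
    result = client.infer(MODEL, inputs=[inp], outputs=[out])
    # 3) ... and copy the output from the GPU region back to the CPU.
    meta = result.get_output(OUTPUT_NAME)
    scores = cudashm.get_contents_as_numpy(
        out_handle, triton_to_np_dtype(meta.datatype), meta.shape)

# Clean up once at the end.
client.unregister_cuda_shared_memory()
cudashm.destroy_shared_memory_region(in_handle)
cudashm.destroy_shared_memory_region(out_handle)
```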