Comments (17)
I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full.
Awesome! For me, builds involving CUDA are such a mix of pain and incomprehensible magic. Thanks a lot for working your way through this, you do the community a great favor! I really appreciate it!
You're welcome :-) CUDA really is a bit of a nightmare. Let's just hope it works in the end :-) I have not yet had time to actually test the JLLs from Julia.
I argued that it should be no big feat to run some neural networks with ONNXRuntime on Julia - with TensorRT - on Jetson boards ... so I'd better make it happen :-)
from onnxruntime.jl.
Awesome! I also think download on demand is the way to go. If this only adds jll packages as dependencies, I think I would go without Requires.jl, just lazy artifacts.
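For concreteness, a lazy artifact is just an ordinary Artifacts.toml entry with `lazy = true`; it is only fetched the first time it is accessed. A minimal sketch (the artifact name, URL and hashes below are placeholders, not real values):

```toml
# Artifacts.toml (sketch) - hypothetical entry for a CUDA-enabled build
[onnxruntime_cuda]
git-tree-sha1 = "0000000000000000000000000000000000000000"
lazy = true

    [[onnxruntime_cuda.download]]
    url = "https://example.com/onnxruntime_cuda.x86_64-linux-gnu.tar.gz"
    sha256 = "0000000000000000000000000000000000000000000000000000000000000000"
```

With `using LazyArtifacts` in the package, evaluating `artifact"onnxruntime_cuda"` then triggers the download on first use.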
There is - though I have (had to) split my effort between ONNXRuntime and Torch, so the pace has definitely gone down.
The good news is that TensorRT is now registered and available to be used as a dependency - and I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full. So it shouldn’t be too hard to get ONNXRuntime building with CUDA now, cf. Yggdrasil#4554.
I don't have a clear view of what the possibilities are.
Having to add an additional package to make an execution provider available is fine. Having to set preferences is acceptable but a bit of a pain, in that you either have to restart Julia or prepare TOML files before starting. Having to load large artifacts that you won't use would be highly annoying.
For my work use cases being able to run on either CPU or GPU is important, and better optimizations through TensorRT are highly interesting. Additional execution providers would be mildly interesting, in particular DirectML.
Wow cool, thanks for the info! Once GPU + Yggdrasil is ready, I would love to use that as a source of onnxruntime binaries.
Excellent :-)
WIP: TensorRT in Yggdrasil JuliaPackaging/Yggdrasil#4347
Since Pkg does not (yet) support conditional dependencies, I am thinking it might be better to have separate JLLs for each execution provider, and only download them on demand (using lazy artifacts and/or Requires.jl)? E.g.:
- CUDA
- TensorRT
- oneDNN
- MiGraphX
- CoreML
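To illustrate the Requires.jl side of the on-demand idea (purely a sketch; ONNXRuntimeCUDA_jll, its UUID, and the included file are all hypothetical):

```julia
using Requires

function __init__()
    # Only wire up the CUDA execution provider when the user has loaded
    # the (hypothetical) ONNXRuntimeCUDA_jll package themselves.
    @require ONNXRuntimeCUDA_jll="00000000-0000-0000-0000-000000000000" begin
        include("cuda_execution_provider.jl")
    end
end
```

The lazy-artifacts alternative avoids the extra dependency entirely, at the cost of keeping all the artifact metadata in one package.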
@stemann thanks again for tackling this, Yggdrasil + JLLs would be much cleaner than my current approach. Is there any progress on the CUDA onnxruntime?
I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full.
Awesome! For me, builds involving CUDA are such a mix of pain and incomprehensible magic. Thanks a lot for working your way through this, you do the community a great favor! I really appreciate it!
BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than separating e.g. Torch (CPU-only) and Torch with CUDA into separate packages.
I'm following that approach for JuliaPackaging/Yggdrasil#4554 now, so it should probably be done for ONNXRuntime as well. One could imagine a separate ONNXRuntimeTraining JLL with the training stuff (dependent on Torch).
BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than separating e.g. Torch (CPU-only) and Torch with CUDA into separate packages.
Would this mean that it is impossible to have both a CPU and GPU net in a single Julia session?
BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than separating e.g. Torch (CPU-only) and Torch with CUDA into separate packages.
Would this mean that it is impossible to have both a CPU and GPU net in a single Julia session?
Good point. No, I don't think so - it would just be like using the "onnxruntime-gpu" binary that you have now: the CUDA-platform variant would just include both CPU and CUDA, the ROCm-platform variant would include CPU and ROCm, etc.
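Concretely, with the load_inference API discussed in this thread, both could live in one process (assuming the CUDA-variant artifact also bundles the default CPU provider):

```julia
import ONNXRunTime as ORT

# Two sessions in the same Julia process: the CUDA-variant library
# still contains the default CPU execution provider.
cpu_model = ORT.load_inference("model.onnx")
gpu_model = ORT.load_inference("model.onnx", execution_provider=:cuda)
```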
Ok got it thanks!
Cf. #19 for a WIP usage of the JLL - with platform selection based on platform augmentation (e.g. for CUDA).
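For context, platform augmentation means the JLL ships an augment_platform! hook that Pkg runs during artifact selection. A rough sketch of the CUDA case, simplified from the pattern in Yggdrasil's platforms/cuda.jl (detect_cuda_version is a hypothetical helper, and this is not the actual code):

```julia
using Base.BinaryPlatforms

function augment_platform!(platform::Platform)
    # Respect a tag that is already set (e.g. via preferences).
    haskey(platform, "cuda") && return platform
    # Tag the host platform so artifact selection can pick the
    # matching CUDA (or CPU-only) artifact.
    cuda_version = detect_cuda_version()   # hypothetical helper
    platform["cuda"] = cuda_version === nothing ? "none" : cuda_version
    return platform
end
```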
@jw3126 @GunnarFarneback Any suggestions for how to handle execution providers in a JLL world, i.e. artifact selection? E.g. from a high-level/user perspective?
I'm sorry that the following got a "bit" vague...
One objective is, of course, cooperation with the CUDA.jl stack, so in the context of CUDA we should expect the user to use the CUDA_Runtime_jll preferences to define the CUDA version in a LocalPreferences.toml:

```toml
[CUDA_Runtime_jll]
version = "11.8"
```
Then there are two options:
- Assume that if CUDA can load, the user wants the CUDA/CUDNN/TensorRT-artifact and fetch this - this would be the platform selection implemented for CUDA-platforms - used by CUDNN - and defined in https://github.com/JuliaPackaging/Yggdrasil/blob/master/platforms/cuda.jl#L13-L80
- Or, in a more complicated world, the user should still have the option to specify which artifact to fetch, e.g. by specifying an ONNXRuntime_jll platform preference in TOML:

```toml
[CUDA_Runtime_jll]
version = "11.8"

[ONNXRuntime_jll]
platform = "cuda"
```

where the user could choose to get a "cpu" artifact (the basic onnxruntime main library), an AMD ROCm artifact, or another "cpu" artifact like XNNPACK or Intel oneDNN (alias DNNL) - i.e. the user should have the option to get another artifact/library even though they also have a functional CUDA set-up.
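On the JLL side, reading such a preference via Preferences.jl could be as simple as (a sketch; the "platform" key is hypothetical):

```julia
using Preferences

# Inside ONNXRuntime_jll (sketch): fall back to the plain CPU
# artifact when the user has not set a "platform" preference.
const selected_platform = @load_preference("platform", "cpu")
```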
Complication: Most EPs are built into the main onnxruntime library - the only exceptions are TensorRT, Intel oneDNN, Intel OpenVINO, and CANN, which are available as shared libraries: https://onnxruntime.ai/docs/build/eps.html#execution-provider-shared-libraries.
This means that it would make sense to provide most EPs through various platform-variants of the ONNXRuntime_jll artifact(s) - i.e. with the ONNXRuntime_jll artifact being synonymous with the main onnxruntime library - and with some definition of platform (cuda might be one platform, rocm might be another - and then the concept gets quite vague when one considers "XNNPACK" or "oneDNN" "platforms"...).
TensorRT is likely a special case when it comes to the shared library EPs: it is probably safe to assume that if the user has selected the CUDA-platform artifact, then the user won't mind getting the TensorRT library as well.
I think the high-level user experience should be like this:

```julia
pkg> add ONNXRunTime

julia> import ONNXRunTime as ORT

julia> ORT.load_inference(path, execution_provider=:cuda)
ERROR: # tell the user to add `CUDA_Runtime_jll` and optionally set preferences for that

pkg> add CUDA_Runtime_jll

julia> ORT.load_inference(path, execution_provider=:cuda)
# Now it works
```
Personally, I don't need other EPs than CUDA and CPU. If we want to support more EPs, that is fine by me, as long as there is somebody who takes responsibility for maintaining that EP.
So I think we should go for the simplest solution that supports CPU + CUDA + whatever you personally need and feel like maintaining.
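A minimal sketch of that error path (cuda_runtime_available is a hypothetical helper; the real check would hook into the JLL/artifact selection):

```julia
function load_inference(path::AbstractString; execution_provider::Symbol=:cpu)
    if execution_provider === :cuda && !cuda_runtime_available()  # hypothetical check
        error("execution_provider=:cuda was requested, but no CUDA runtime is available. " *
              "Try `pkg> add CUDA_Runtime_jll` (and optionally set its preferences), " *
              "then restart Julia.")
    end
    # ... load the ONNX Runtime library and create the inference session ...
end
```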
I agree wrt. the scope - the aim is not to support more than CPU and CUDA/CUDNN and TensorRT at this point.
But even with just CPU+CUDA, the user experience with the JLL would be a little different: the ONNXRuntime_jll CUDA-artifact depends on CUDNN_jll and TensorRT_jll, and hence on CUDA_Runtime_jll, so the user should automatically get CUDA_Runtime_jll if the artifact/platform selection for ONNXRuntime_jll returns the CUDA-artifact.
So my question was more along the lines of: how should the artifact/platform selection work?
Users should definitely not be forced to download or load unneeded artifacts.
Though I still assume no one using the CUDA EP would object to also getting the TensorRT EP...?
The obvious solution for the shared library EPs is to put them in separate JLL packages, e.g. ONNXRuntimeProviderOpenVINO_jll (with each shared library EP JLL depending on the main ONNXRuntime_jll).
For the EPs that are built into the main onnxruntime library, the view is a bit more murky: either there would be separate, mutually exclusive JLLs, e.g. ONNXRuntime_jll and ONNXRuntimeProviderDirectML_jll (where both could not be loaded at the same time) - or some slightly bizarre overload of the "platform" concept would have to be used, e.g. to have both an x86_64-w64-mingw32 artifact for ONNXRuntime_jll, but also an "x86_64-w64-mingw32-DirectML" artifact... On the other hand, there are already MPI platforms and LLVM platforms...
I think I favor the separate JLLs for separate EPs approach - and maybe at some point, onnxruntime upstream will also move more EPs into shared libraries...(?)