
Comments (17)

stemann commented on July 29, 2024

I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full.

Awesome! For me, builds involving CUDA are such a mix of pain and incomprehensible magic. Thanks a lot for working your way through this, you do the community a great favor! I really appreciate it!

You're welcome :-) CUDA really is a bit of a nightmare. Let's just hope it works in the end :-) I have not yet had time to actually test the JLLs from Julia.

I argued that it should be no big feat to run some neural networks with ONNXRuntime in Julia - with TensorRT - on Jetson boards ... so I'd better make it happen :-)


jw3126 commented on July 29, 2024

Awesome! I also think download on demand is the way to go. If this only adds JLL packages as dependencies, I think I would go without Requires.jl and just use lazy artifacts.

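A minimal sketch of that lazy-artifact approach (the artifact name and library path are hypothetical): an entry marked lazy = true in Artifacts.toml is not fetched at install time; it is downloaded the first time the artifact string macro is evaluated.

using LazyArtifacts  # stdlib; enables on-demand download of artifacts marked `lazy = true`

# Hypothetical artifact name: nothing is fetched until this macro first runs,
# at which point Pkg downloads the artifact and returns its install directory.
cuda_ep_dir = artifact"ONNXRuntimeCUDA"
libpath = joinpath(cuda_ep_dir, "lib", "libonnxruntime_providers_cuda.so")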

stemann commented on July 29, 2024

There is - though I have (had to) split my effort between ONNXRuntime and Torch, so the pace has definitely gone down.

The good news is that TensorRT is now registered and available to be used as a dependency - and I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full. So it shouldn’t be too hard to get ONNXRuntime building with CUDA now, cf. Yggdrasil#4554.


GunnarFarneback commented on July 29, 2024

I don't have a clear view of what the possibilities are.

Having to add an additional package to make an execution provider available is fine. Having to set preferences is acceptable but a bit of a pain, in that you either have to restart Julia or prepare TOML files before starting. Having to load large artifacts that you won't use would be highly annoying.

For my work use cases, being able to run on either CPU or GPU is important, and better optimizations through TensorRT are highly interesting. Additional execution providers would be mildly interesting, in particular DirectML.


jw3126 commented on July 29, 2024

Wow cool, thanks for the info! Once the GPU build on Yggdrasil is ready, I would love to use that as a source of onnxruntime binaries.


stemann commented on July 29, 2024

Excellent :-)

WIP: TensorRT in Yggdrasil JuliaPackaging/Yggdrasil#4347

Since Pkg does not (yet) support conditional dependencies, I am thinking it might be better to have separate JLLs for each Execution Provider, and only download them on demand (using lazy artifacts and/or Requires.jl; see the sketch after the list)? E.g.:

  • CUDA
  • TensorRT
  • oneDNN
  • MiGraphX
  • CoreML

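A rough sketch of the Requires.jl variant (the included file name is hypothetical; the UUID is CUDA.jl's): the CUDA-specific glue code is only evaluated once the user loads CUDA.jl in the same session.

using Requires

function __init__()
    # Only runs if/when CUDA.jl gets loaded in this session.
    @require CUDA="052768ef-5323-5732-b1bb-66c8b64840ba" include("cuda_execution_provider.jl")
end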

jw3126 commented on July 29, 2024

@stemann thanks again for tackling this, Yggdrasil + JLL would be much cleaner than my current approach. Is there any progress on the CUDA onnxruntime?


jw3126 commented on July 29, 2024

I’ve just managed to battle CMake into finding CUDA in the BB cross-compilation env. with CUDA_full.

Awesome! For me, builds involving CUDA are such a mix of pain and incomprehensible magic. Thanks a lot for working your way through this, you do the community a great favor! I really appreciate it!


stemann commented on July 29, 2024

BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than to separate e.g. Torch (CPU-only) and Torch with CUDA into separate packages.

I'm following that approach for JuliaPackaging/Yggdrasil#4554 now, so it should probably be done for ONNXRuntime as well. One could imagine a separate ONNXRuntimeTraining JLL with the training stuff (dependent on Torch).


jw3126 commented on July 29, 2024

BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than to separate e.g. Torch (CPU-only) and Torch with CUDA into separate packages.

Would this mean that it is impossible to have both a CPU and a GPU net in a single Julia session?


stemann commented on July 29, 2024

BTW: @vchuravy argued in JuliaPackaging/Yggdrasil#4477 (comment) that it would be better to go with platform variants than to separate e.g. Torch (CPU-only) and Torch with CUDA into separate packages.

Would this mean that it is impossible to have both a CPU and a GPU net in a single Julia session?

Good point. No, I don't think so - it would just be like using the "onnxruntime-gpu" binary that you have now - the CUDA-platform variant would just include both CPU and CUDA, the ROCm-platform variant would include CPU and ROCm, etc.


jw3126 commented on July 29, 2024

Ok got it thanks!


stemann commented on July 29, 2024

Cf. #19 for a WIP usage of JLL - with platform selection based on platform augmentation (e.g. for CUDA).

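Roughly, the augmentation hook is a sketch like the one below (simplified; the real CUDA logic lives in Yggdrasil's platforms/cuda.jl and derives the tag from the driver/preferences rather than hard-coding a version): the JLL ships an augment_platform! that adds a cuda tag to the host platform, and Pkg then selects the artifact matching the tagged platform.

using Base.BinaryPlatforms

function augment_platform!(platform::Platform)
    haskey(platform, "cuda") && return platform  # already tagged, nothing to do
    platform["cuda"] = "11.8"                    # sketch: in reality taken from the CUDA runtime/preference
    return platform
end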

stemann commented on July 29, 2024

@jw3126 @GunnarFarneback Any suggestions for how to handle execution providers in a JLL world, i.e. artifact selection? E.g. from a high-level/user perspective?

I'm sorry that the following got a "bit" vague...

One objective is, of course, cooperation with the CUDA.jl stack, so in the context of CUDA, we should expect the user to use the CUDA_Runtime_jll preferences to define the CUDA version in a LocalPreferences.toml:

[CUDA_Runtime_jll]
version = "11.8"

Then there are two options:

  1. Assume that if CUDA can load, the user wants the CUDA/CUDNN/TensorRT artifact, and fetch this - this would be the platform selection implemented for CUDA platforms - used by CUDNN - and defined in https://github.com/JuliaPackaging/Yggdrasil/blob/master/platforms/cuda.jl#L13-L80
  2. Or in a more complicated world, the user should still have the option to specify which artifact to fetch, e.g. by specifying an ONNXRuntime_jll platform preference in TOML:
[CUDA_Runtime_jll]
version = "11.8"

[ONNXRuntime_jll]
platform = "cuda"

where the user could choose to get a "cpu" artifact (the basic onnxruntime main library), an AMD ROCm artifact, or another "cpu" artifact like XNNPACK or Intel oneDNN (alias DNNL). That is, the user should have the option to get another artifact/library even though they also have a functional CUDA set-up (see the sketch at the end of this comment).

Complication: Most EPs are built into the main onnxruntime library - the only exceptions are TensorRT, Intel oneDNN, Intel OpenVINO, and CANN, which are available as shared libraries: https://onnxruntime.ai/docs/build/eps.html#execution-provider-shared-libraries.
This means that it would make sense to provide most EPs through various platform-variants of the ONNXRuntime_jll artifact(s) - i.e. with the ONNXRuntime_jll artifact being synonymous with the main onnxruntime library - and with some definition of platform (cuda might be one platform, rocm might be another - and then the concept gets quite vague, when one considers "XNNPACK" or "oneDNN" "platforms"...).

TensorRT is likely a special case when it comes to the shared library EPs: It is probably safe to assume that if the user has selected the CUDA-platform artifact, then the user won't mind getting the TensorRT library as well.

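For option 2 above, the selection code could read that (hypothetical) preference with Preferences.jl, roughly like this, from inside the ONNXRuntime_jll module:

using Preferences

# Read the "platform" preference from LocalPreferences.toml (the key and default
# are assumptions for this sketch); fall back to the plain CPU build when unset.
const EP_PLATFORM = @load_preference("platform", "cpu")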

jw3126 commented on July 29, 2024

I think the high-level user experience should be like this:

pkg> add ONNXRunTime

julia> import ONNXRunTime as ORT

julia> ORT.load_inference(path, execution_provider=:cuda)
Error: # tell the user to add `CUDA_Runtime_jll` and optionally set preferences for that. 

pkg> add CUDA_Runtime_jll

julia> ORT.load_inference(path, execution_provider=:cuda)
# Now it works

Personally, I don't need other EPs than CUDA and CPU. If we want to support more EPs, that is fine by me, as long as there is somebody who takes responsibility for maintaining that EP.

So I think we should go for the simplest solution that supports CPU + CUDA + whatever you personally need and feel like maintaining.

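A sketch of what the error path in that flow could look like on the ONNXRunTime side (the function body and the cuda_runtime_available helper are hypothetical):

function load_inference(path::AbstractString; execution_provider::Symbol = :cpu)
    if execution_provider === :cuda && !cuda_runtime_available()  # hypothetical check
        error("""
              The :cuda execution provider requires CUDA_Runtime_jll.
              Run `import Pkg; Pkg.add("CUDA_Runtime_jll")`, optionally set its version
              preference in LocalPreferences.toml, and restart Julia.
              """)
    end
    # ... create the ORT session with the requested execution provider ...
end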

stemann commented on July 29, 2024

I agree wrt. the scope - the aim is not to support more than CPU and CUDA/CUDNN and TensorRT at this point.

But even with just CPU + CUDA, the user experience with the JLL would be a little different: the ONNXRuntime_jll CUDA artifact depends on CUDNN_jll and TensorRT_jll, and hence on CUDA_Runtime_jll, so the user should automatically get CUDA_Runtime_jll if the artifact/platform selection for ONNXRuntime_jll returns the CUDA artifact.

So my question was more along the lines of: how should the artifact/platform selection work?


stemann commented on July 29, 2024

Users should definitely not be forced to download or load unneeded artifacts.

Though I still assume no one using the CUDA EP would object to also getting the TensorRT EP...?

The obvious solution for the shared-library EPs is to put them in separate JLL packages, e.g. ONNXRuntimeProviderOpenVINO_jll (with each shared-library EP JLL depending on the main ONNXRuntime_jll).

For the EPs that are built into the main onnxruntime library, the picture is a bit murkier: either there would be separate, mutually exclusive JLLs, e.g. ONNXRuntime_jll and ONNXRuntimeProviderDirectML_jll (which could not both be loaded at the same time), or some slightly bizarre overload of the "platform" concept would have to be used, e.g. having both an x86_64-w64-mingw32 artifact for ONNXRuntime_jll and an "x86_64-w64-mingw32-DirectML" artifact... on the other hand, there are already MPI platforms and LLVM platforms...

I think I favor the separate JLLs for separate EPs approach - and maybe at some point, onnxruntime upstream will also move more EPs into shared libraries...(?)

