Comments (6)
This sounds like a documentation issue. In my opinion, libtorch should not be redistributed with CUDA headers, as they are not really needed if one wants to use PyTorch with GPU acceleration from their application. But if one wants to build CUDA extensions or use some advanced features, then they must install the cuda-toolkit and modify their CMakeLists.txt to specify the path to the CUDA headers/libraries. Adding module: docs to perhaps extend/reference the cppdocs example to an example that can use both PyTorch and CUDA.
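A minimal sketch of what such a CMakeLists.txt addition could look like, assuming a locally installed CUDA toolkit and CMake >= 3.17 (the target name `example` is a placeholder, not taken from this thread):

```cmake
# Hypothetical fragment: point the build at a locally installed
# CUDA toolkit, since libtorch does not ship the CUDA headers.
find_package(Torch REQUIRED)
find_package(CUDAToolkit REQUIRED)  # FindCUDAToolkit needs CMake >= 3.17

add_executable(example main.cpp)
target_link_libraries(example "${TORCH_LIBRARIES}")
# Make cuda_runtime_api.h and friends visible to the compiler.
target_include_directories(example PRIVATE ${CUDAToolkit_INCLUDE_DIRS})
```

`find_package(CUDAToolkit)` resolves the toolkit wherever it is installed (e.g. /usr/local/cuda or /opt/cuda), so the path does not need to be hardcoded.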
cc: @msaroufim
from pytorch.
What is the output of running the following?
cmake -DCMAKE_PREFIX_PATH=`python3 -c 'import torch;print(torch.utils.cmake_prefix_path)'` ..
/usr/local/cuda/include should include this file. Perhaps CUDA needs to be installed first.
For me the file is in /opt/cuda/include/cuda_runtime_api.h, and I could, for instance, add this to the include path to make this work for me locally.
But nothing else (as far as I know) in pytorch assumes that CUDA headers are available at compile time, so I can usually just write code involving CUDA, and that code will compile fine even if CUDA is not available.
For instance, the following compiles without issues (it throws an error at runtime, of course):
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::Device device(torch::kCUDA);
    torch::Tensor tensor = torch::rand({2, 3}).to(device);
    std::cout << tensor << std::endl;
}
This is quite nice when distributing code, because the question of whether CUDA is available can be dealt with at runtime. This suddenly changes when I include the CUDAGraph header, however.
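A sketch of what that runtime decision could look like (assuming a standard libtorch setup; `torch::cuda::is_available()` is part of the public C++ API, but the rest is just an illustration):

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
    // Decide at runtime whether to use the GPU; the same binary
    // links and runs on machines without CUDA installed.
    torch::Device device(torch::cuda::is_available() ? torch::kCUDA
                                                     : torch::kCPU);
    torch::Tensor tensor = torch::rand({2, 3}).to(device);
    std::cout << "Running on " << device << ":\n" << tensor << std::endl;
}
```

This pattern only works as long as no transitively included header requires the CUDA toolkit at compile time, which is exactly what the CUDAGraph include breaks.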
It looks to me like this include here is the cause, and this might for the most part be an oversight, in that it ended up in the public header?
I just noticed that this issue looks like a duplicate of this by the way: #55454 (sorry for not checking more carefully)
I think I'm running into this problem in very similar circumstances.
Thanks @aseyboldt for the nice explanation and root cause. I would like to defer this to @cyyever.
It is possible that the new change might need some updates on the builder or packaging side.
Also cc @ptrblck @atalman @malfet
Hm, but shouldn't that then at least be checked in CMake? For instance, the CUDA versions of libtorch and the user-provided CUDA headers still need to match, don't they?
I have to admit I completely missed that it seems I can use c10/core/Stream.h, which I think should work without CUDA headers? Unless I'm missing something, that should fix my issues. (And maybe that also works for tch-rs? cc @LaurentMazare, if that is still an issue for you...)