Comments (10)
Runtime compilation is simple:
- We can get PTX code by using a series of functions in the `nvrtc` library.
- Create a `CUmodule` from the compiled PTX code with the `cuModuleLoadDataEx` function.
- The remaining process is the same.
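The steps above can be sketched directly with `ccall` against the C APIs. This is a minimal, hedged illustration only: it assumes `libnvrtc` and `libcuda` are on the library path, and it omits error checking, the `cuInit`/context setup that must happen before loading a module, and `nvrtcDestroyProgram` cleanup.

```julia
# Sketch: compile CUDA C source to PTX with NVRTC, then load it as a CUmodule.
# Assumes an initialized CUDA context; all return codes are ignored for brevity.
const libnvrtc = "libnvrtc"
const libcuda  = "libcuda"

src = """
extern "C" __global__ void saxpy(float a, float *x, float *y, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
"""

# nvrtcCreateProgram / nvrtcCompileProgram
prog = Ref{Ptr{Cvoid}}()
ccall((:nvrtcCreateProgram, libnvrtc), Cint,
      (Ptr{Ptr{Cvoid}}, Cstring, Cstring, Cint, Ptr{Cstring}, Ptr{Cstring}),
      prog, src, "saxpy.cu", 0, C_NULL, C_NULL)
ccall((:nvrtcCompileProgram, libnvrtc), Cint,
      (Ptr{Cvoid}, Cint, Ptr{Cstring}), prog[], 0, C_NULL)

# Fetch the generated PTX
sz = Ref{Csize_t}()
ccall((:nvrtcGetPTXSize, libnvrtc), Cint, (Ptr{Cvoid}, Ptr{Csize_t}), prog[], sz)
ptx = Vector{UInt8}(undef, sz[])
ccall((:nvrtcGetPTX, libnvrtc), Cint, (Ptr{Cvoid}, Ptr{UInt8}), prog[], ptx)

# Load the PTX into a CUmodule with cuModuleLoadDataEx
mod = Ref{Ptr{Cvoid}}()
ccall((:cuModuleLoadDataEx, libcuda), Cint,
      (Ptr{Ptr{Cvoid}}, Ptr{UInt8}, Cuint, Ptr{Cvoid}, Ptr{Cvoid}),
      mod, ptx, 0, C_NULL, C_NULL)
```

After this, `cuModuleGetFunction` and `cuLaunchKernel` follow the same path as with ahead-of-time-compiled PTX.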
However, I got an error while using `wrap_cuda.jl` to generate new API bindings.
khkim@gpuserver:~/CUDArt/gen-6.5$ julia wrap_cuda.jl
WARNING: [a,b] concatenation is deprecated; use [a;b] instead
in depwarn at ./deprecated.jl:40
in oldstyle_vcat_warning at ./abstractarray.jl:26
in sort_includes at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:674
in run at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:749
in include at ./boot.jl:249
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:319
in _start at ./client.jl:403
WRAPPING HEADER: /usr/local/cuda-6.5/include/driver_types.h
WARNING: Not wrapping Clang.cindex.InclusionDirective host_defines.h
WARNING: Not wrapping Clang.cindex.MacroInstantiation __GNUC__
WARNING: Not wrapping Clang.cindex.MacroInstantiation __GNUC__
WARNING: Not wrapping Clang.cindex.MacroInstantiation __device_builtin__
WARNING: Not wrapping Clang.cindex.MacroInstantiation __device_builtin__
.. (warnings for macros)
INFO: Error thrown. Last cursor available in Clang.wrap_c.debug_cursors
ERROR: LoadError: No CLType translation available for: CLType (Clang.cindex.Int128)
in repr_jl at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:281
in wrap at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:486
in wrap_header at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:641
in run at /home/khkim/.julia/v0.4/Clang/src/wrap_c.jl:760
in include at ./boot.jl:249
in include_from_node1 at loading.jl:128
in process_options at ./client.jl:319
in _start at ./client.jl:403
while loading /home/khkim/.julia/v0.4/CUDArt/gen-6.5/wrap_cuda.jl, in expression starting on line 70
Any ideas? It seems to have a problem with Int128.
I'm using a nightly build of Julia on Ubuntu 14.04.
from cudart.jl.
While I was developing this package, I made quite a number of improvements to Clang.jl: https://github.com/ihnorton/Clang.jl/commits/master. Looks like it might need a few more.
A mapping for Int128 was added in the recent master of Clang.jl, and it works like a charm.
IMO the best way forward is to get support for user-selectable LLVM backends into base Julia and then use the normal Julia codegen to compile marked Julia functions using the nvptx backend.
Transpiling Julia code into nvvm or C would be essentially a reimplementation of the existing codegen - it's easy for basic arithmetic expressions, but would quickly break down if you want your kernels to be able to use the full expressiveness of Julia (although the standard library will still be off-limits).
I am actually very excited to get this working. As far as I know, it would make Julia the first high-level language to support writing kernels directly as normal functions in the language, without idiosyncratic restrictions on syntax. This could be a powerful selling point of Julia to the scientific communities that have essentially moved all their heavy computation to the GPU, such as the neural network community.
I thought this only supported CUDA 6.5; am I wrong? Could we get 7.5 to work as an example?
@mattcbro I have the same question. I installed CUDA 7.5 from my Linux distribution's repositories, and CUDArt.jl is having trouble with it; the tests are failing for me.
Did you try @lucasb-eyer's branch in #39?
I will check it @timholy, thanks.
This is now possible with CUDAnative.jl. NVRTC support might still be worthwhile, but it would need somebody interested in it to actively work on it. Closing for now.
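For reference, the CUDAnative.jl approach mentioned here lets kernels be written as ordinary Julia functions and compiled through LLVM's NVPTX backend. The following is a hedged sketch based on the early CUDAnative API (the `@cuda (blocks, threads)` launch syntax; later versions moved to `@cuda threads=... blocks=...`), and it assumes a working CUDA driver and device 0:

```julia
# Sketch: a plain Julia function used as a GPU kernel via CUDAnative.jl.
using CUDAdrv, CUDAnative

function kernel_vadd(a, b, c)
    # Compute this thread's global index (CUDA indices are 1-based in Julia)
    i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
    c[i] = a[i] + b[i]
    return nothing
end

dev = CuDevice(0)
ctx = CuContext(dev)

a = rand(Float32, 12)
b = rand(Float32, 12)
d_a = CuArray(a)           # upload to the device
d_b = CuArray(b)
d_c = similar(d_a)

@cuda (1, 12) kernel_vadd(d_a, d_b, d_c)   # launch: 1 block of 12 threads
c = Array(d_c)             # download the result; c ≈ a .+ b

destroy!(ctx)
```

The key design point, matching the discussion above, is that the kernel goes through the normal Julia codegen rather than transpilation, so most of the language's expressiveness is available inside it.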
from cudart.jl.