tmcdonell / cuda
Haskell FFI bindings to CUDA
License: Other
I see 0.5.1.1 on Hackage, but the latest git commit predates that release and there is no corresponding tag.
I don't know how else to put it, but this library isn't Windows 10 compatible, and there should be a note in the README about this. Using nm and ld on Windows will give you nightmares, as it has given me, and even with hard-coded paths you can't build this library. Please add a warning that this library is built, tested, and intended for a Linux environment, so using it on Windows is an uphill battle. No comment on Mac compatibility.
hsc2hs complains about not finding cudart.dylib. Adding an rpath fixes that, but it led to problems with client builds.
I should have noticed it earlier. Now the package seamlessly installs on Windows and all would be well… if not for the crashes. I did my tests on 32-bit GHC distributions and it worked. However, on 64-bit GHC distributions, any program using cuda will crash as soon as the first FFI call is made. Interestingly, the programs work fine when executed by runghc or ghci. The crash happens only when I generate the .exe file and run it.
Repro: I have a file Cuda.hs:

import Foreign.CUDA.Driver.Device
main = putStr "Help " >> initialise [] >> putStrLn "me"

When calling runhaskell Cuda.hs I get "Help me"; when building with ghc Cuda.hs and running the generated Cuda.exe, I get "Help" and then the crash occurs.
I am still investigating the issue; it looks as if the FFI calls on x64 were somehow broken. I'll try to reduce the problem and will write more soon.
I'm trying to use dynamic parallelism on a GPU with compute capability 3.5 (verified by deviceQuery).
If I run a simple CUDA program without dynamic parallelism, it works fine, but if I uncomment the marked line below it gives an error. I've tested it on two machines with two different GPUs and I get an error on both (though the errors differ).
The Haskell code is:
[... More initialization happens here ...]
cm <- loadFile "cudabits/cached_mult.ptx"
ldpcFun <- getFun cm "ldpc"
(orig_lam_dev, orig_lam_len) <- newListArrayLen $ U.toList $ orig_lam
result_dev <- mallocArray orig_lam_len :: IO (DevicePtr IntT)
launchKernel ldpcFun (1,1,1) (1,1,1) 0 Nothing [VArg mLet, VArg offsets, IArg rowCount, IArg colCount, IArg (fromIntegral maxIterations), VArg orig_lam_dev, IArg (fromIntegral orig_lam_len), VArg result_dev]
sync
result <- peekListArray orig_lam_len result_dev
free result_dev
free orig_lam_dev
free mLet
free offsets
return $! Just $! U.map toBool $! U.fromList result
and the CUDA code is:
extern "C" __global__ void f() {
}
extern "C" __global__ void ldpc(double* mLet, double* offsets, int rowCount, int colCount, int maxIterations, double* orig_lam, int orig_lam_len, int* result) {
f<<<1, 1>>>(); // **** If I comment this line out, it runs fine ****
for (int i = 0; i < orig_lam_len; ++i) {
result[i] = orig_lam[i] > 0;
}
}
I'm compiling the CUDA code with
nvcc cached_mult.cu --compiler-options -std=c++98 -arch=sm_35 -gencode=arch=compute_35,code=sm_35 -rdc=true -lcudadevrt -ptx
It might be worth pointing out that, on one machine, I have an unsupported version of GCC (which is why I pass the --compiler-options flag, to work around it). On the other machine, though, I do have a supported version of GCC (4.8.5).
The machine with the unsupported version of GCC has a GeForce GT 710B which has the GK208 architecture. The error on this machine is CUDA_ERROR_NO_BINARY_FOR_GPU (error 209) due to "no kernel image is available for execution on the device" on CUDA API call to cuModuleLoad.
The other machine (with GCC 4.8.5) has a K80. On this machine, the error is CUDA_ERROR_INVALID_PTX (error 218) due to "a PTX JIT compilation failed" on CUDA API call to cuModuleLoad.
I've also tried adjusting the compilation flags to use the 3.7 capability of the K80, just in case, but it doesn't seem to affect anything.
In both cases, commenting out the line marked in the CUDA code lets it run without error.
Also, if I just add a main with a simple call to ldpc to the CUDA code and compile it directly to an executable, that executable seems to run without error.
Am I missing an option or flag?
Hey, I just ran into a couple of errors while building the code with stack-7.10.yaml. Here's a commit that works for me with this stack.yaml version: athanclark@8b33fe4
I'm not sure how it could be made backwards compatible, though, especially with findProgramOnSearchPath returning a different type in Cabal 1.22 :\
Right now, trying to build cuda with GHC 8.8.3 or 8.10.1 fails with:
accelerate-kullback-liebler-0.1.2.0). The failure occurred during the
configure step. The exception was:
dieVerbatim: user error (cabal: '/opt/ghc/bin/ghc-8.8.3' exited with an error:
/home/vanessa/programming/haskell/done/accelerate-kullback-liebler/dist-newstyle/tmp/src-1724/cuda-0.10.1.0/dist/setup/setup.hs:199:29:
error:
• Couldn't match expected type ‘PerCompilerFlavor [String]’
with actual type ‘[(CompilerFlavor, [[Char]])]’
• In the ‘options’ field of a record
In the second argument of ‘($)’, namely
‘emptyBuildInfo
{ccOptions = ccOptions', ldOptions = ldOptions',
extraLibs = extraLibs', extraGHCiLibs = extraGHCiLibs',
extraLibDirs = extraLibDirs', frameworks = frameworks',
extraFrameworkDirs = frameworkDirs',
options = [(GHC, ghcOptions) | os /= Windows],
customFieldsBI = [c2hsExtraOptions]}’
In a stmt of a 'do' block:
buildInfo' <- addSystemSpecificOptions
$ emptyBuildInfo
{ccOptions = ccOptions', ldOptions = ldOptions',
extraLibs = extraLibs', extraGHCiLibs = extraGHCiLibs',
extraLibDirs = extraLibDirs', frameworks = frameworks',
extraFrameworkDirs = frameworkDirs',
options = [(GHC, ghcOptions) | os /= Windows],
customFieldsBI = [c2hsExtraOptions]}
|
199 | , options = [(GHC, ghcOptions) | os /= Windows]
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
This can be fixed by constraining the custom setup to build with Cabal < 3.0.
Cheers!
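For reference, the change behind the error above is that in Cabal 3.0 the options field of BuildInfo became PerCompilerFlavor [String] instead of an association list keyed by CompilerFlavor. A minimal sketch of constructing the new shape follows; ghcOnly is a hypothetical helper name for this illustration, not part of Cabal:

```haskell
-- Sketch: the Cabal >= 3.0 representation of per-compiler options.
-- PerCompilerFlavor holds GHC options first, GHCJS options second.
import Distribution.Compiler (PerCompilerFlavor (..))

-- Hypothetical helper: options that apply to GHC only.
ghcOnly :: [String] -> PerCompilerFlavor [String]
ghcOnly opts = PerCompilerFlavor opts []

main :: IO ()
main = case ghcOnly ["-optc-msse2"] of
  PerCompilerFlavor ghc ghcjs -> do
    putStrLn (unwords ghc)  -- prints the GHC options
    print (length ghcjs)    -- no GHCJS options in this sketch
```

A Setup.hs that must support both Cabal lines would typically select between this and the old [(GHC, opts)] form with a MIN_VERSION_Cabal CPP guard.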
Hello,
I'm running NVIDIA driver 367.27 on Ubuntu 16.04 with CUDA 8.0 RC for my NVIDIA GTX 1080.
When running accelerate-nofib, I get the following message repeated many times.
*** Warning: Unknown CUDA device compute capability: 6.1
*** Please submit a bug report at https://github.com/tmcdonell/cuda/issues
Looks like this repository just needs updating for the latest Pascal GPUs.
Thanks,
Gordon
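The warning above comes from a hard-coded table of per-compute-capability device properties in Foreign.CUDA.Analysis.Device, which has to be extended by hand for each new GPU generation. The sketch below is illustrative only (the function name and table values are not the library's actual code); 6.1 is the Pascal entry the released version was missing:

```haskell
-- Illustrative only: a capability -> CUDA-cores-per-multiprocessor table
-- of the sort the library keeps internally, extended here with Pascal.
coresPerMP :: (Int, Int) -> Maybe Int
coresPerMP cc = lookup cc
  [ ((3,0), 192), ((3,5), 192), ((3,7), 192)  -- Kepler
  , ((5,0), 128), ((5,2), 128)                -- Maxwell
  , ((6,0),  64), ((6,1), 128)                -- Pascal
  ]

main :: IO ()
main = print (coresPerMP (6,1))  -- Just 128, e.g. a GTX 1080
```

An unknown capability falls through the lookup, which is exactly when the library prints the "Unknown CUDA device compute capability" warning.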
- PATH with the mingw/bin folder on it, for the MinGW packaged with ghcup
- cabal init (the .cabal file is below)
- cabal configure and cabal build
cabal-version: 3.0
name: hs-cuda-test
version: 0.1.0.0
license: GPL-3.0-only
license-file: LICENSE
build-type: Simple
common warnings
ghc-options: -Wall
executable hs-cuda-test
import: warnings
main-is: Main.hs
build-depends:
base ^>=4.17.2.1,
cuda,
hs-source-dirs: app
default-language: Haskell2010
[41 of 44] Compiling Foreign.CUDA.Driver.Graph.Build ( dist\build\Foreign\CUDA\Driver\Graph\Build.hs, dist\build\Foreign\CUDA\Driver\Graph\Build.o )
src\Foreign\CUDA\Driver\Graph\Build.chs:183:48: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `cuGraphAddChildGraphNode'_', namely
`a3'2'
In the first argument of `(>>=)', namely
cuGraphAddChildGraphNode'_ a1' a2' a3'1 a3'2 a4'
In the expression:
cuGraphAddChildGraphNode'_ a1' a2' a3'1 a3'2 a4'
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
183 | , withNodeArrayLen* `[Node]'&
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:209:46: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `cuGraphAddDependencies'_', namely
`a3'2'
In the first argument of `(>>=)', namely
`cuGraphAddDependencies'_ a1' a2' a3'1 a3'2'
In the expression:
cuGraphAddDependencies'_ a1' a2' a3'1 a3'2
>>= \ res -> checkStatus res >> return ()
|
209 | , withNodeArray* `[Node]'
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:235:49: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `cuGraphRemoveDependencies'_', namely
`a3'2'
In the first argument of `(>>=)', namely
`cuGraphRemoveDependencies'_ a1' a2' a3'1 a3'2'
In the expression:
cuGraphRemoveDependencies'_ a1' a2' a3'1 a3'2
>>= \ res -> checkStatus res >> return ()
|
235 | , withNodeArray* `[Node]'
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:256:28: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `addEmpty'_', namely `a3'2'
In the first argument of `(>>=)', namely
`addEmpty'_ a1' a2' a3'1 a3'2'
In the expression:
addEmpty'_ a1' a2' a3'1 a3'2
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
256 | { alloca- `Node' peekNode*
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:280:27: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `addHost'_', namely `a3'2'
In the first argument of `(>>=)', namely
addHost'_ a1' a2' a3'1 a3'2 a4' a5'
In the expression:
addHost'_ a1' a2' a3'1 a3'2 a4' a5'
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
280 | , withNodeArrayLen* `[Node]'&
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:341:51: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `cuGraphAddKernelNode_simple'_', namely
`a3'2'
In the first argument of `(>>=)', namely
cuGraphAddKernelNode_simple'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10' a11' a12'
In the expression:
cuGraphAddKernelNode_simple'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10' a11' a12'
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
341 | , `Int'
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:400:29: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `addMemcpy'_', namely `a3'2'
In the first argument of `(>>=)', namely
addMemcpy'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10' a11' a12' a13' a14'
a15' a16' a17' a18' a19' a20' a21' a22' a23'
In the expression:
addMemcpy'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10' a11' a12' a13' a14'
a15' a16' a17' a18' a19' a20' a21' a22' a23'
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
400 | => Graph
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:433:51: error:
* Couldn't match expected type `CULLong' with actual type `CULong'
* In the fourth argument of `cuGraphAddMemsetNode_simple'_', namely
`a3'2'
In the first argument of `(>>=)', namely
cuGraphAddMemsetNode_simple'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10'
In the expression:
cuGraphAddMemsetNode_simple'_
a1' a2' a3'1 a3'2 a4' a5' a6' a7' a8' a9' a10'
>>=
\ res
-> checkStatus res >> peekNode a1' >>= \ a1'' -> return (a1'')
|
433 | , `Word32'
| ^^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:513:37: error:
* Couldn't match type `CULong' with `CULLong'
Expected: Ptr CULLong
Actual: Ptr CULong
* In the fourth argument of `cuGraphGetEdges'_', namely a4'
In the first argument of `(>>=)', namely
cuGraphGetEdges'_ a1' a2' a3' a4'
In the expression:
cuGraphGetEdges'_ a1' a2' a3' a4'
>>= \ res -> checkStatus res >> return ()
|
513 | , castPtr `Ptr Node'
| ^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:543:33: error:
* Couldn't match type `CULong' with `CULLong'
Expected: Ptr CULLong
Actual: Ptr CULong
* In the third argument of `cuGraphGetNodes'_', namely a3'
In the first argument of `(>>=)', namely
cuGraphGetNodes'_ a1' a2' a3'
In the expression:
cuGraphGetNodes'_ a1' a2' a3'
>>= \ res -> checkStatus res >> return ()
|
543 | , castPtr `Ptr Node'
| ^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:573:37: error:
* Couldn't match type `CULong' with `CULLong'
Expected: Ptr CULLong
Actual: Ptr CULong
* In the third argument of `cuGraphGetRootNodes'_', namely a3'
In the first argument of `(>>=)', namely
cuGraphGetRootNodes'_ a1' a2' a3'
In the expression:
cuGraphGetRootNodes'_ a1' a2' a3'
>>= \ res -> checkStatus res >> return ()
|
573 | , castPtr `Ptr Node'
| ^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:603:44: error:
* Couldn't match type `CULong' with `CULLong'
Expected: Ptr CULLong
Actual: Ptr CULong
* In the third argument of `cuGraphNodeGetDependencies'_', namely
a3'
In the first argument of `(>>=)', namely
cuGraphNodeGetDependencies'_ a1' a2' a3'
In the expression:
cuGraphNodeGetDependencies'_ a1' a2' a3'
>>= \ res -> checkStatus res >> return ()
|
603 | , castPtr `Ptr Node'
| ^^^
src\Foreign\CUDA\Driver\Graph\Build.chs:633:46: error:
* Couldn't match type `CULong' with `CULLong'
Expected: Ptr CULLong
Actual: Ptr CULong
* In the third argument of `cuGraphNodeGetDependentNodes'_', namely
a3'
In the first argument of `(>>=)', namely
cuGraphNodeGetDependentNodes'_ a1' a2' a3'
In the expression:
cuGraphNodeGetDependentNodes'_ a1' a2' a3'
>>= \ res -> checkStatus res >> return ()
|
633 | , castPtr `Ptr Node'
| ^^^
[42 of 44] Compiling Foreign.CUDA.Driver ( src\Foreign\CUDA\Driver.hs, dist\build\Foreign\CUDA\Driver.o )
[43 of 44] Compiling Foreign.CUDA.Analysis.Occupancy ( src\Foreign\CUDA\Analysis\Occupancy.hs, dist\build\Foreign\CUDA\Analysis\Occupancy.o )
[44 of 44] Compiling Foreign.CUDA.Analysis ( src\Foreign\CUDA\Analysis.hs, dist\build\Foreign\CUDA\Analysis.o )
Error: cabal-3.10.2.1.exe: Failed to build cuda-0.11.0.1 (which is required by
exe:hs-cuda-test from hs-cuda-test-0.1.0.0). See the build log above for
details.
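All of these type errors reduce to the same platform difference: on 64-bit Windows, C's unsigned long (CULong) is 32 bits while unsigned long long (CULLong) is 64 bits, so c2hs-generated marshalling code using CULong no longer matches foreign imports that expect CULLong. A small standalone illustration (sizes are platform-dependent; fromIntegral is the usual bridge between the two types):

```haskell
import Foreign.C.Types (CULLong, CULong)
import Foreign.Storable (sizeOf)

main :: IO ()
main = do
  -- 4 on 64-bit Windows (LLP64), 8 on 64-bit Linux/macOS (LP64)
  print (sizeOf (0 :: CULong))
  -- 8 on all common 64-bit platforms
  print (sizeOf (0 :: CULLong))
  -- converting between the two is lossless while the value fits in both
  print (fromIntegral (42 :: CULong) :: CULLong)
```

This is why the build succeeds on Linux yet fails on Windows with otherwise identical sources.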
I encountered this error while compiling accelerate-cuda, which depends on cuda.
c2hs: Errors during expansion of binding hooks:
./Foreign/CUDA/Driver/Marshal.chs:167: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuMemHostRegister' in the header file.
cabal: Error: some packages failed to install:
...
When building with cabal or Stack on Windows 10 with GHC 8.2.1, stack ghci and cabal repl both yield:
ghc.EXE: | C:\Users\Travis\sources\cuda\.stack-work\dist\e53504d9\build\cbits\stubs.o: unknown symbol `cudaConfigureCall'
linking extra libraries/objects failed
I am using the generated cuda.buildinfo.generated, which looks like this on my machine:
buildable: True
cc-options: "-DCUDA_INSTALL_PATH=\"C:\\\\Program Files\\\\NVIDIA GPU Computing Toolkit\\\\CUDA\\\\v8.0\""
"-DCUDA_LIBRARY_PATH=\"C:\\\\Program Files\\\\NVIDIA GPU Computing Toolkit\\\\CUDA\\\\v8.0\\\\lib/x64\""
"-IC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\include"
ld-options: "-LC:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\lib/x64"
extra-libraries:
cudart
cuda
extra-ghci-libraries: cudart64_80
nvcuda
extra-lib-dirs: "C:\\Program Files\\NVIDIA GPU Computing Toolkit\\CUDA\\v8.0\\lib/x64"
x-extra-c2hs-options: --cppopts=-E --cppopts=-m64 --cppopts=-DUSE_EMPTY_CASE
Strangely, simply copying cuda.buildinfo.generated to cuda.buildinfo, without rebuilding or reconfiguring, allows stack repl to work. I haven't managed to cause a similar effect with any combination of cabal repl, cabal configure, or cabal clean.
stack ghci doesn't seem to work by calling Setup.hs; I verified this by simply adding a putStr before defaultMainWithHooks. I also added prints to getHookedBuildInfo, so I have no idea by what code path Stack is able to read the cuda.buildinfo file (and fail to read the cuda.buildinfo.generated file!).
In practice I doubt anyone is using this library alone in GHCi, but this behavior is very strange and might be some Cabal and/or Stack bug. I'm curious if anyone can manage to reproduce this.
The log:
% cabal install cuda
Resolving dependencies...
Configuring cuda-0.6.5.1...
Building cuda-0.6.5.1...
Failed to install cuda-0.6.5.1
Build log ( /Users/yongqli/.cabal/logs/cuda-0.6.5.1.log ):
[1 of 1] Compiling Main ( /var/folders/5z/8xg1w3px7l761h7mz9xbjqy80000gp/T/cuda-0.6.5.1-5418/cuda-0.6.5.1/dist/setup/setup.hs, /var/folders/5z/8xg1w3px7l761h7mz9xbjqy80000gp/T/cuda-0.6.5.1-5418/cuda-0.6.5.1/dist/setup/Main.o )
Linking /var/folders/5z/8xg1w3px7l761h7mz9xbjqy80000gp/T/cuda-0.6.5.1-5418/cuda-0.6.5.1/dist/setup/setup ...
Configuring cuda-0.6.5.1...
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/bin/gcc accepts -g... yes
checking for /usr/bin/gcc option to accept ISO C89... none needed
checking build system type... x86_64-apple-darwin14.0.0
checking host system type... x86_64-apple-darwin14.0.0
checking target system type... x86_64-apple-darwin14.0.0
checking for nvcc... /usr/local/cuda/bin/nvcc
checking ghc architecture... x86_64
checking for Apple Blocks extension... yes
checking how to run the C preprocessor... /usr/bin/gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking cuda.h usability... yes
checking cuda.h presence... yes
checking for cuda.h... yes
checking cuda_runtime_api.h usability... yes
checking cuda_runtime_api.h presence... yes
checking for cuda_runtime_api.h... yes
checking for library containing cuDriverGetVersion... -lcuda
checking for library containing cudaRuntimeGetVersion... -lcudart
configure: creating ./config.status
config.status: creating cuda.buildinfo
Building cuda-0.6.5.1...
Preprocessing library cuda-0.6.5.1...
dyld: Library not loaded: @rpath/libcudart.6.5.dylib
Referenced from: /private/var/folders/5z/8xg1w3px7l761h7mz9xbjqy80000gp/T/cuda-0.6.5.1-5418/cuda-0.6.5.1/dist/build/Foreign/CUDA/Internal/Offsets_hsc_make
Reason: image not found
running dist/build/Foreign/CUDA/Internal/Offsets_hsc_make failed (exit code -5)
command was: dist/build/Foreign/CUDA/Internal/Offsets_hsc_make >dist/build/Foreign/CUDA/Internal/Offsets.hs
cabal: Error: some packages failed to install:
cuda-0.6.5.1 failed during the building phase. The exception was:
ExitFailure 1
Probably a request rather than a problem, but it does affect my use of the Runtime API.
Hello,
I have a Mac with OS X 10.9.2 (Mavericks) with CUDA Driver Version: 5.5.47 and GPU Driver Version: 8.24.9 130.40.25f01. During installation of the cuda package, I receive the following message.
Foreign/CUDA/Driver/Utils.chs:35:23:
Illegal type signature: `IO (Status, Int) cuDriverGetVersion'
Perhaps you intended to use -XScopedTypeVariables
In a pattern type-signature
cabal: Error: some packages failed to install:
cuda-0.5.1.2 failed during the building phase. The exception was:
ExitFailure 1
This occurs for both cuda-0.5.1.1 (from Hackage) and cuda-0.5.1.2 (master from Github).
Hi Trev,
Just trying to build cuda under GHC 7.8.3 on Debian Linux, and getting what I think are strange errors. This is the verbose output. Why can't it find those modules? The .chs files are there.
cabal install -v
Reading available packages...
Choosing modular solver.
Resolving dependencies...
Ready to install cuda-0.6.5.0
Waiting for install task to finish...
creating dist/setup
./dist/setup/setup configure --verbose=2 --ghc --prefix=/home/neil/.cabal
--bindir=/home/neil/.cabal/bin --libdir=/home/neil/.cabal/lib
--libsubdir=x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
--libexecdir=/home/neil/.cabal/libexec --datadir=/home/neil/.cabal/share
--datasubdir=x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
--docdir=/home/neil/.cabal/share/doc/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
--htmldir=/home/neil/.cabal/share/doc/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0/html
--haddockdir=/home/neil/.cabal/share/doc/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0/html
--sysconfdir=/home/neil/.cabal/etc --enable-library-profiling --user
--constraint=bytestring ==0.10.4.0 --constraint=base ==4.7.0.1 --disable-tests
--disable-benchmarks
Configuring cuda-0.6.5.0...
Dependency base ==4.7.0.1: using base-4.7.0.1
Dependency bytestring ==0.10.4.0: using bytestring-0.10.4.0
/usr/bin/ghc --info
Using Cabal-1.18.1.3 compiled by ghc-7.8
Using compiler: ghc-7.8.3
Using install prefix: /home/neil/.cabal
Binaries installed in: /home/neil/.cabal/bin
Libraries installed in:
/home/neil/.cabal/lib/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
Private binaries installed in: /home/neil/.cabal/libexec
Data files installed in:
/home/neil/.cabal/share/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
Documentation installed in:
/home/neil/.cabal/share/doc/x86_64-linux-ghc-7.8.3/cuda-0.6.5.0
Configuration files installed in: /home/neil/.cabal/etc
Using alex version 3.1.3 found on system at: /home/neil/.cabal/bin/alex
Using ar found on system at: /usr/bin/ar
Using c2hs version 0.17.2 found on system at: /home/neil/.cabal/bin/c2hs
Using cpphs version 1.18.5 found on system at: /home/neil/.cabal/bin/cpphs
No ffihugs found
Using gcc version 4.7 found on system at: /usr/bin/gcc
Using ghc version 7.8.3 found on system at: /usr/bin/ghc
Using ghc-pkg version 7.8.3 found on system at: /usr/bin/ghc-pkg
No greencard found
Using haddock version 2.13.2 found on system at: /home/neil/.cabal/bin/haddock
Using happy version 1.19.4 found on system at: /home/neil/.cabal/bin/happy
No hmake found
Using hpc version 0.67 found on system at: /usr/bin/hpc
Using hsc2hs version 0.67 found on system at: /usr/bin/hsc2hs
Using hscolour version 1.20 found on system at: /home/neil/.cabal/bin/HsColour
No hugs found
No jhc found
Using ld found on system at: /usr/bin/ld
No lhc found
No lhc-pkg found
No nhc98 found
Using pkg-config version 0.26 found on system at: /usr/bin/pkg-config
Using ranlib found on system at: /usr/bin/ranlib
Using strip found on system at: /usr/bin/strip
Using tar found on system at: /bin/tar
No uhc found
sh configure --with-compiler=ghc --prefix=/home/neil/.cabal --bindir=/home/neil/.cabal/bin --libdir=/home/neil/.cabal/lib --libexecdir=/home/neil/.cabal/libexec --datadir=/home/neil/.cabal/share --sysconfdir=/home/neil/.cabal/etc
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/bin/gcc accepts -g... yes
checking for /usr/bin/gcc option to accept ISO C89... none needed
checking build system type... x86_64-unknown-linux-gnu
checking host system type... x86_64-unknown-linux-gnu
checking target system type... x86_64-unknown-linux-gnu
checking for nvcc... /usr/bin/nvcc
checking ghc architecture... x86_64
checking for Apple Blocks extension... no
checking how to run the C preprocessor... /usr/bin/gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking cuda.h usability... yes
checking cuda.h presence... yes
checking for cuda.h... yes
checking cuda_runtime_api.h usability... yes
checking cuda_runtime_api.h presence... yes
checking for cuda_runtime_api.h... yes
checking for library containing cuDriverGetVersion... -lcuda
checking for library containing cudaRuntimeGetVersion... -lcudart
configure: creating ./config.status
config.status: creating cuda.buildinfo
Reading parameters from ./cuda.buildinfo
creating dist/setup
./dist/setup/setup build --verbose=2
Reading parameters from ./cuda.buildinfo
Component build order: library
creating dist/build
creating dist/build/autogen
Building cuda-0.6.5.0...
Preprocessing library cuda-0.6.5.0...
Building library...
/usr/bin/ghc --info
/usr/bin/ghc --info
creating dist/build
/usr/bin/ghc --make -fbuilding-cabal-package -O -static -dynamic-too -dynosuf dyn_o -dynhisuf dyn_hi -outputdir dist/build -odir dist/build -hidir dist/build -stubdir dist/build -i -idist/build -i. -idist/build/autogen -Idist/build/autogen -Idist/build -I. -optP-include -optPdist/build/autogen/cabal_macros.h -package-name cuda-0.6.5.0 -hide-all-packages -package-db dist/package.conf.inplace -package-id base-4.7.0.1-1a55ebc8256b39ccbff004d48b3eb834 -package-id bytestring-0.10.4.0-aeb2ba35f192516ed4298f0656cc3a85 -XHaskell98 Foreign.CUDA Foreign.CUDA.Ptr Foreign.CUDA.Analysis Foreign.CUDA.Analysis.Device Foreign.CUDA.Analysis.Occupancy Foreign.CUDA.Runtime Foreign.CUDA.Runtime.Device Foreign.CUDA.Runtime.Error Foreign.CUDA.Runtime.Event Foreign.CUDA.Runtime.Exec Foreign.CUDA.Runtime.Marshal Foreign.CUDA.Runtime.Stream Foreign.CUDA.Runtime.Texture Foreign.CUDA.Runtime.Utils Foreign.CUDA.Driver Foreign.CUDA.Driver.Context Foreign.CUDA.Driver.Device Foreign.CUDA.Driver.Error Foreign.CUDA.Driver.Event Foreign.CUDA.Driver.Exec Foreign.CUDA.Driver.Marshal Foreign.CUDA.Driver.Module Foreign.CUDA.Driver.Stream Foreign.CUDA.Driver.Texture Foreign.CUDA.Driver.Utils Foreign.CUDA.Internal.C2HS Foreign.CUDA.Internal.Offsets -optc-I/usr/include -optl-L/usr/lib64 -Wall -O2 -funbox-strict-fields -fwarn-tabs
:
Could not find module ‘Foreign.CUDA.Analysis.Device’
Use -v to see a list of the files searched for.
:
Could not find module ‘Foreign.CUDA.Runtime.Device’
Use -v to see a list of the files searched for.
Failed to install cuda-0.6.5.0
cabal: Error: some packages failed to install:
cuda-0.6.5.0 failed during the building phase. The exception was:
ExitFailure 1
Just a heads up,
Tried to install the cuda package on Ubuntu 18.04 ahead of its release. If you leave it to install with no CUDA_PATH set, it finds nvcc at /usr/bin/nvcc, then goes up a couple of levels and can't find /usr/lib64. If I set CUDA_PATH to /usr/lib/cuda, I get:
checking for environment variable CUDA_PATH
Path rejected: /usr/lib/cuda
Does not exist: /usr/lib/cuda/include/cuda.h
checking for nvcc compiler executable in PATH
Path accepted: /usr
Found CUDA toolkit at: /usr
CallStack (from HasCallStack):
die', called at /tmp/stack770/cuda-0.9.0.3/Setup.hs:149:26 in main:Main
setup: Could not find path: ["/usr/lib64"]
(include and lib64 are empty, copied to /usr/include/ and /usr/lib/x86_64-linux-gnu/ instead)
This was the first time in a long while that I installed 'nvidia-cuda-toolkit' from apt-get, due to its lag behind upstream releases (right now it is at 9.1), so I'm not sure whether this directory structure changed in 18.04.
I did try creating /usr/lib64 for fun; cuda compiled, but accelerate-llvm-ptx died, unable to find the nvvm directory and libdevice (nvvm/libdevice being located in /usr/lib/cuda). That is also strange, because I thought nvvm was off by default?
I tried to use the accelerate library with CUDA, but the following error happened.
src/Foreign/CUDA/Driver/Context/Primary.chs:139: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuDevicePrimaryCtxRelease' in the header file.
src/Foreign/CUDA/Driver/Context/Primary.chs:114: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuDevicePrimaryCtxReset' in the header file.
src/Foreign/CUDA/Driver/Context/Primary.chs:91: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuDevicePrimaryCtxSetFlags' in the header file.
It seems like there is a compatibility problem with CUDA 11.3; could you add support for it? I cannot downgrade my CUDA installation, as that would break other dependencies.
[1 of 1] Compiling Main ( examples/src/deviceQueryDrv/DeviceQuery.hs, dist/build/nvidia-device-query/nvidia-device-query-tmp/Main.o )
examples/src/deviceQueryDrv/DeviceQuery.hs:85:9:
Not in scope: data constructor ‘Exclusive’
cabal: Leaving directory '/tmp/cabal-tmp-4620/cuda-0.7.5.0'
World file is already up to date.
cabal: Error: some packages failed to install:
cuda-0.7.5.0 failed during the building phase. The exception was:
ExitFailure 1
I can't install the package.
Hi,
I cannot install cuda via stack:
[09:02:44 schnecki grenade]$ stack build cuda
cuda> configure
cuda> [1 of 2] Compiling Main ( /tmp/stack-54c629f074650108/cuda-0.10.2.0/Setup.hs, /tmp/stack-54c629f074650108/cuda-0.10.2.0/.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0/setup/Main.o )
cuda> [2 of 2] Compiling StackSetupShim ( /home/schnecki/.stack/setup-exe-src/setup-shim-mPHDZzAJ.hs, /tmp/stack-54c629f074650108/cuda-0.10.2.0/.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0/setup/StackSetupShim.o )
cuda> Linking /tmp/stack-54c629f074650108/cuda-0.10.2.0/.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0/setup/setup ...
cuda> Configuring cuda-0.10.2.0...
cuda> Found CUDA toolkit at: /opt/cuda
cuda> Storing parameters to cuda.buildinfo.generated
cuda> Using build information from 'cuda.buildinfo.generated'.
cuda> Provide a 'cuda.buildinfo' file to override this behaviour.
cuda> build
cuda> Using build information from 'cuda.buildinfo.generated'.
cuda> Provide a 'cuda.buildinfo' file to override this behaviour.
cuda> Preprocessing library for cuda-0.10.2.0..
cuda> c2hs: Errors during expansion of binding hooks:
cuda>
cuda> src/Foreign/CUDA/Driver/Context/Primary.chs:139: (column 15) [ERROR] >>> Unknown identifier!
cuda> Cannot find a definition for `cuDevicePrimaryCtxRelease' in the header file.
cuda> src/Foreign/CUDA/Driver/Context/Primary.chs:114: (column 15) [ERROR] >>> Unknown identifier!
cuda> Cannot find a definition for `cuDevicePrimaryCtxReset' in the header file.
cuda> src/Foreign/CUDA/Driver/Context/Primary.chs:91: (column 15) [ERROR] >>> Unknown identifier!
cuda> Cannot find a definition for `cuDevicePrimaryCtxSetFlags' in the header file.
cuda>
-- While building package cuda-0.10.2.0 (scroll up to its section to see the error) using:
/tmp/stack-54c629f074650108/cuda-0.10.2.0/.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0/setup/setup --builddir=.stack-work/dist/x86_64-linux-tinfo6/Cabal-3.0.1.0 build --ghc-options " -fdiagnostics-color=always"
Process exited with code: ExitFailure 1
As c2hs is erroring, I suppose there is an error in the code. Or is this an issue with my CUDA installation? I installed CUDA and could compile the examples. The result of deviceQuery is:
[09:08:19 schnecki ~]$ /opt/cuda/samples/1_Utilities/deviceQuery/deviceQuery
/opt/cuda/samples/1_Utilities/deviceQuery/deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 1050 Ti"
CUDA Driver Version / Runtime Version 11.1 / 10.1
CUDA Capability Major/Minor version number: 6.1
Total amount of global memory: 4040 MBytes (4236312576 bytes)
( 6) Multiprocessors, (128) CUDA Cores/MP: 768 CUDA Cores
GPU Max Clock rate: 1620 MHz (1.62 GHz)
Memory Clock rate: 3504 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 1048576 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 2 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: Yes
Supports Cooperative Kernel Launch: Yes
Supports MultiDevice Co-op Kernel Launch: Yes
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 11.1, CUDA Runtime Version = 10.1, NumDevs = 1
Result = PASS
Edit: I am on Arch Linux. I saw an earlier issue (#57). Building from GitHub did not work for me. I guess there were more breaking changes?
Some deprecated functions (e.g. cuCtxAttach) were removed in the most recent NVIDIA libcuda binary. This library fails to build as a result.
Hi,
when compiling a program using cuda, I'm getting linking errors: undefined reference to `strnlen'. They are caused by references to that function in https://github.com/tmcdonell/cuda/blob/master/Foreign/CUDA/Driver/Module.chs
The strnlen function is not part of the C language standard. While it is provided by the MS Visual Studio runtime and many MinGW distributions, it is NOT part of the MinGW distribution shipped with the GHC binaries, which has to be used to link the programs.
I have managed to work around the problem by replacing strnlen with strlen in Module.chs; it seems to work, however I'm not sure if this is the right thing to do (are non-terminated strings actually possible there?).
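For reference, the workaround amounts to binding the standard strlen instead of strnlen. A minimal sketch of such an FFI import, independent of this package (the example string is illustrative):

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}

import Foreign.C.String (CString, withCString)
import Foreign.C.Types (CSize (..))

-- strlen is part of the C standard, unlike strnlen, so every
-- MinGW/glibc toolchain provides it
foreign import ccall unsafe "string.h strlen"
  c_strlen :: CString -> IO CSize

main :: IO ()
main = withCString "cuModuleLoadData" $ \s ->
  print =<< c_strlen s  -- prints 16
```

The difference between the two only matters if the buffer may lack a terminating NUL, which is exactly what strnlen guards against; whether that can happen here depends on the API contract of the strings passed to Module.chs.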
Compilation of profiling object files fails:
...
[26 of 27] Compiling Foreign.CUDA.Runtime ( Foreign/CUDA/Runtime.hs, dist/build/Foreign/CUDA/Runtime.o )
[27 of 27] Compiling Foreign.CUDA ( Foreign/CUDA.hs, dist/build/Foreign/CUDA.o )
/usr/local/bin/ghc-7.8.2 --make -fbuilding-cabal-package -O -prof -osuf p_o -hisuf p_hi -outputdir dist/build -odir dist/build -hidir dist/build -stubdir dist/build -i -idist/build -i. -idist/build/autogen -Idist/build/autogen -Idist/build -I. -optP-include -optPdist/build/autogen/cabal_macros.h -package-name cuda-0.6.0.0 -hide-all-packages -package-db dist/package.conf.inplace -package-id base-4.7.0.0-018311399e3b6350d5be3a16b144df9b -package-id bytestring-0.10.4.0-7de5230c6d895786641a4de344336838 -XHaskell98 Foreign.CUDA Foreign.CUDA.Ptr Foreign.CUDA.Analysis Foreign.CUDA.Analysis.Device Foreign.CUDA.Analysis.Occupancy Foreign.CUDA.Runtime Foreign.CUDA.Runtime.Device Foreign.CUDA.Runtime.Error Foreign.CUDA.Runtime.Event Foreign.CUDA.Runtime.Exec Foreign.CUDA.Runtime.Marshal Foreign.CUDA.Runtime.Stream Foreign.CUDA.Runtime.Texture Foreign.CUDA.Runtime.Utils Foreign.CUDA.Driver Foreign.CUDA.Driver.Context Foreign.CUDA.Driver.Device Foreign.CUDA.Driver.Error Foreign.CUDA.Driver.Event Foreign.CUDA.Driver.Exec Foreign.CUDA.Driver.Marshal Foreign.CUDA.Driver.Module Foreign.CUDA.Driver.Stream Foreign.CUDA.Driver.Texture Foreign.CUDA.Driver.Utils Foreign.CUDA.Internal.C2HS Foreign.CUDA.Internal.Offsets -optc-I/usr/local/cuda-6.0/include -optl-L/usr/local/cuda-6.0/lib64 -optl-lcudart -optl-lcuda -Wall -O2 -funbox-strict-fields -fwarn-tabs -fprof-auto -fprof-cafs
[ 1 of 27] Compiling Foreign.CUDA.Internal.Offsets ( dist/build/Foreign/CUDA/Internal/Offsets.hs, dist/build/Foreign/CUDA/Internal/Offsets.p_o )
<no location info>:
Warning: Couldn't figure out linker information!
Make sure you're using GNU gcc, or clang
/usr/bin/ld: cannot find -lcudart
/usr/bin/ld: cannot find -lcuda
collect2: ld returned 1 exit status
Failed to install cuda-0.6.0.0
cabal: Error: some packages failed to install:
accelerate-cuda-0.15.0.0 depends on cuda-0.6.0.0 which failed to install.
...
Maybe related to #13.
I have ghc-7.8.2, Ubuntu 12.04 and cuda-6.0.
I'm not sure if this is a bug, but I'm packaging an application that depends on this library. The library itself packages completely fine,
but the software I am trying to build doesn't. I get the following error:
ERROR: RPATH "/usr/local/cuda-11.4/targets/x86_64-linux/lib" on /home/abuild/rpmbuild/BUILDROOT/configd-rest-0.4.3.0-dlh.1.1.x86_64/usr/lib64/ghc-8.10.7/configd-rest-0.4.3.0/libHSconfigd-rest-0.4.3.0-HglcCoQAwNE7w6rVOYPnj-ghc8.10.7.so is not allowed
Now this makes a lot of sense. I cannot guarantee that a user of my software has the CUDA SDK installed, but I can guarantee that they have the CUDA library that comes with the driver (/usr/lib[64]/libcuda.so.1).
My question is: do we want/need this rpath when compiling software that depends on this library, or could it link against /usr/lib[64]/libcuda.so.1 instead?
All the best
James
setup:
Ubuntu Linux 13.04
CUDA version 5.0
Driver Version 304.116
hs cuda version cuda-0.5.1.1
A taste of the error messages:
undefined reference to `cuCtxAttach'
undefined reference to `cuCtxDetach'
...
OS X: 10.9.2
CUDA Driver: 6.0.3.7
GPU Driver: 8.24.9 310.40.25f01
Attempting install of cuda-0.5.1.1 from Hackage.
andrews-mbp:src andrew$ cabal install cuda
Resolving dependencies...
Configuring cuda-0.5.1.1...
Building cuda-0.5.1.1...
Failed to install cuda-0.5.1.1
Last 10 lines of the build log ( /Users/andrew/.cabal/logs/cuda-0.5.1.1.log ):
No explicit method or default declaration for `toEnum'
In the instance declaration for `Enum InitFlag'
Foreign/CUDA/Driver/Device.chs:115:10: Warning:
No explicit method or default declaration for `fromEnum'
In the instance declaration for `Enum InitFlag'
[16 of 27] Compiling Foreign.CUDA.Driver.Context ( dist/build/Foreign/CUDA/Driver/Context.hs, dist/build/Foreign/CUDA/Driver/Context.o )
Foreign/CUDA/Driver/Context.chs:93:16:
The deprecation for `BlockingSync' lacks an accompanying binding
cabal: Error: some packages failed to install:
cuda-0.5.1.1 failed during the building phase. The exception was:
ExitFailure 1
andrews-mbp:src andrew$
Attempting install cuda-0.5.1.2 from GitHub
andrews-mbp:cuda andrew$ cabal install
Resolving dependencies...
Configuring cuda-0.5.1.2...
Building cuda-0.5.1.2...
Failed to install cuda-0.5.1.2
Last 10 lines of the build log ( /Users/andrew/.cabal/logs/cuda-0.5.1.2.log ):
No explicit method or default declaration for `toEnum'
In the instance declaration for `Enum InitFlag'
Foreign/CUDA/Driver/Device.chs:115:10: Warning:
No explicit method or default declaration for `fromEnum'
In the instance declaration for `Enum InitFlag'
[16 of 27] Compiling Foreign.CUDA.Driver.Context ( dist/build/Foreign/CUDA/Driver/Context.hs, dist/build/Foreign/CUDA/Driver/Context.o )
Foreign/CUDA/Driver/Context.chs:93:16:
The deprecation for `BlockingSync' lacks an accompanying binding
cabal: Error: some packages failed to install:
cuda-0.5.1.2 failed during the building phase. The exception was:
ExitFailure 1
andrews-mbp:cuda andrew$
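The "No explicit method or default declaration" warnings above come from c2hs-generated Enum instances that define only part of the class. A complete minimal instance, shown here on a hypothetical stand-in type rather than the package's actual generated code, defines both toEnum and fromEnum:

```haskell
-- Hypothetical stand-in for the generated InitFlag type
data InitFlag = DefaultInit
  deriving (Show, Eq)

-- toEnum and fromEnum form the minimal complete definition;
-- the remaining Enum methods have defaults in terms of them
instance Enum InitFlag where
  fromEnum DefaultInit = 0
  toEnum 0 = DefaultInit
  toEnum n = error ("InitFlag.toEnum: no match for " ++ show n)

main :: IO ()
main = print (fromEnum (toEnum 0 :: InitFlag))
```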
tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:46:28:
Ambiguous occurrence ‘die’
It could refer to either ‘Distribution.Simple.Utils.die’,
imported from ‘Distribution.Simple.Utils’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:6:1-32
or ‘System.Exit.die’,
imported from ‘System.Exit’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:15:1-18
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:58:19:
Ambiguous occurrence ‘die’
It could refer to either ‘Distribution.Simple.Utils.die’,
imported from ‘Distribution.Simple.Utils’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:6:1-32
or ‘System.Exit.die’,
imported from ‘System.Exit’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:15:1-18
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:94:36:
Ambiguous occurrence ‘die’
It could refer to either ‘Distribution.Simple.Utils.die’,
imported from ‘Distribution.Simple.Utils’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:6:1-32
or ‘System.Exit.die’,
imported from ‘System.Exit’ at
/tmp/cuda-0.6.6.0-11616/cuda-0.6.6.0/dist/dist-sandbox-286213a2/setup/setup.hs:15:1-18
)
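The ambiguity above arises because setup.hs imports both Distribution.Simple.Utils and System.Exit unqualified, and newer base versions export a clashing System.Exit.die. One standard fix is to hide the name from one of the imports; a minimal sketch of the pattern (not the actual setup.hs):

```haskell
import System.Exit (die)
-- In the real setup.hs one could instead write:
--   import Distribution.Simple.Utils hiding (die)
-- so every bare `die` refers unambiguously to a single module.

main :: IO ()
main = if False then die "unreachable" else putStrLn "resolved"
```

Qualified imports (import qualified Distribution.Simple.Utils as Utils) achieve the same disambiguation at each call site.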
When I have c2hs version 0.26.2 (the latest released) I get the following error when building cuda:
Building cuda-0.6.7.0...
Preprocessing library cuda-0.6.7.0...
c2hs: Errors during expansion of binding hooks:
dist/build/Foreign/CUDA/Runtime/Device.chs.h:6: (column 1) [ERROR] >>> Illegal use of incomplete type!
Expected a fully defined structure or union tag; instead found incomplete type.
Everything works fine after I downgraded it to 0.26.1. I guess this might be the reason for recent Travis failures.
running cabal install cuda
results in
Starting cuda-0.10.0.0
Building cuda-0.10.0.0
Failed to install cuda-0.10.0.0
Build log ( /home/noah/.cabal/logs/ghc-8.6.4/cuda-0.10.0.0-FwaIjM5Jwfy3feZT75Vvm4.log ):
cabal: Entering directory '/tmp/cabal-tmp-16690/cuda-0.10.0.0'
[1 of 1] Compiling Main ( /tmp/cabal-tmp-16690/cuda-0.10.0.0/dist/setup/setup.hs, /tmp/cabal-tmp-16690/cuda-0.10.0.0/dist/setup/Main.o )
Linking /tmp/cabal-tmp-16690/cuda-0.10.0.0/dist/setup/setup ...
Configuring cuda-0.10.0.0...
Found CUDA toolkit at: /opt/cuda
Storing parameters to cuda.buildinfo.generated
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Preprocessing library for cuda-0.10.0.0..
c2hs: Errors during expansion of binding hooks:
src/Foreign/CUDA/Driver/Graph/Capture.chs:72: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuStreamBeginCapture' in the header file.
cabal: Leaving directory '/tmp/cabal-tmp-16690/cuda-0.10.0.0'
cabal: Error: some packages failed to install:
cuda-0.10.0.0-FwaIjM5Jwfy3feZT75Vvm4 failed during the building phase. The
exception was:
ExitFailure 1
For Cuda version 10.1.105-6 using llvm 7.0.1 on Arch Linux.
Using GHC 7.6.3 on 64bit Linux:
$ cabal --extra-include-dirs=/opt/cuda/include --extra-lib-dirs=/opt/cuda/lib --extra-lib-dirs=/usr/lib64/nvidia install cuda
...
[10 of 27] Compiling Foreign.CUDA.Runtime.Exec ( dist/build/Foreign/CUDA/Runtime/Exec.hs, dist/build/Foreign/CUDA/Runtime/Exec.o )
Foreign/CUDA/Runtime/Exec.chs:268:21:
Couldn't match type `a' with `CChar'
`a' is a rigid type variable bound by
the type signature for withFun :: Fun -> (Ptr a -> IO b) -> IO b
at Foreign/CUDA/Runtime/Exec.chs:267:12
Expected type: Fun -> (Ptr a -> IO b) -> IO b
Actual type: String -> (CString -> IO b) -> IO b
In the expression: withCString
In an equation for `withFun': withFun = withCString
Is this a known problem? Would you like more information from me to debug this issue?
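The error says withFun is declared to hand its continuation a Ptr a, while withCString produces a CString (a Ptr CChar), so the bare definition withFun = withCString cannot typecheck. A plausible fix, sketched here with a stand-in Fun type rather than the package's real one, is to generalise the pointer with castPtr:

```haskell
import Foreign.C.String (withCString)
import Foreign.Ptr (Ptr, castPtr)

-- Stand-in for the package's Fun type, assumed String-based for this sketch
type Fun = String

-- castPtr coerces the Ptr CChar from withCString to the caller's
-- Ptr a, matching the declared signature
withFun :: Fun -> (Ptr a -> IO b) -> IO b
withFun fn act = withCString fn (act . castPtr)

main :: IO ()
main = withFun "myKernel" (\_ -> putStrLn "typechecks")
```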
~/c/cuda ❯❯❯ cabal install . master
Resolving dependencies...
Configuring cuda-0.7.0.0...
Building cuda-0.7.0.0...
Failed to install cuda-0.7.0.0
Build log ( /Users/andreabedini/.cabal/logs/cuda-0.7.0.0.log ):
Configuring cuda-0.7.0.0...
Found CUDA toolkit at: /usr/local/cuda
Storing parameters to cuda.buildinfo.generated
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Building cuda-0.7.0.0...
Preprocessing library cuda-0.7.0.0...
c2hs: C header contains errors:
/usr/include/stdlib.h:177: (column 47) [ERROR] >>> Syntax error !
The symbol `__WATCHOS_PROHIBITED' does not fit here.
cabal: Error: some packages failed to install:
cuda-0.7.0.0 failed during the building phase. The exception was:
ExitFailure 1
~/c/cuda ❯❯❯ /usr/local/cuda/bin/nvcc --version ⏎ master
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2015 NVIDIA Corporation
Built on Thu_Sep_24_00:26:39_CDT_2015
Cuda compilation tools, release 7.5, V7.5.19
Getting this warning on a GTX 980, running with the nvidia-346 drivers.
I tried setting DYLD_LIBRARY_PATH and DYLD_FALLBACK_LIBRARY_PATH as suggested in other (older) CUDA projects; however, it does not work.
I installed CUDA via Homebrew (cask) and also tried the newest drivers and toolkit directly from NVIDIA (v7.5). Any ideas why the library is not getting loaded? It's definitely there…
Resolving dependencies...
Configuring cuda-0.6.6.2...
Building cuda-0.6.6.2...
Failed to install cuda-0.6.6.2
Build log ( /Users/tamasgal/.cabal/logs/cuda-0.6.6.2.log ):
[1 of 1] Compiling Main ( /var/folders/n2/fpv312vd5xg91ncw_s5f1m5m0000gn/T/cuda-0.6.6.2-1871/cuda-0.6.6.2/dist/setup/setup.hs, /var/folders/n2/fpv312vd5xg91ncw_s5f1m5m0000gn/T/cuda-0.6.6.2-1871/cuda-0.6.6.2/dist/setup/Main.o )
Linking /var/folders/n2/fpv312vd5xg91ncw_s5f1m5m0000gn/T/cuda-0.6.6.2-1871/cuda-0.6.6.2/dist/setup/setup ...
Configuring cuda-0.6.6.2...
checking for gcc... /usr/bin/gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables...
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether /usr/bin/gcc accepts -g... yes
checking for /usr/bin/gcc option to accept ISO C89... none needed
checking build system type... x86_64-apple-darwin15.0.0
checking host system type... x86_64-apple-darwin15.0.0
checking target system type... x86_64-apple-darwin15.0.0
checking for nvcc... /usr/local/cuda/bin/nvcc
checking ghc version... 7.10.1
checking ghc architecture... x86_64
checking for Apple Blocks extension... no
checking how to run the C preprocessor... /usr/bin/gcc -E
checking for grep that handles long lines and -e... /usr/bin/grep
checking for egrep... /usr/bin/grep -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking cuda.h usability... yes
checking cuda.h presence... yes
checking for cuda.h... yes
checking cuda_runtime_api.h usability... yes
checking cuda_runtime_api.h presence... yes
checking for cuda_runtime_api.h... yes
checking for library containing cuDriverGetVersion... -lcuda
checking for library containing cudaRuntimeGetVersion... -lcudart
configure: creating ./config.status
config.status: creating cuda.buildinfo
Building cuda-0.6.6.2...
Preprocessing library cuda-0.6.6.2...
dyld: Library not loaded: @rpath/libcudart.7.0.dylib
Referenced from: /private/var/folders/n2/fpv312vd5xg91ncw_s5f1m5m0000gn/T/cuda-0.6.6.2-1871/cuda-0.6.6.2/dist/build/Foreign/CUDA/Internal/Offsets_hsc_make
Reason: image not found
running dist/build/Foreign/CUDA/Internal/Offsets_hsc_make failed (exit code -5)
command was: dist/build/Foreign/CUDA/Internal/Offsets_hsc_make >dist/build/Foreign/CUDA/Internal/Offsets.hs
cabal: Error: some packages failed to install:
cuda-0.6.6.2 failed during the building phase. The exception was:
ExitFailure 1
Hi, I'm getting the following error when trying to build cuda-0.10.0.0
[2 of 2] Compiling StackSetupShim ( /home/brett/.stack/setup-exe-src/setup-shim-mPHDZzAJ.hs, /tmp/stack27524/cuda-0.10.0.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/StackSetupShim.o )
Linking /tmp/stack27524/cuda-0.10.0.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/setup ...
Configuring cuda-0.10.0.0...
Found CUDA toolkit at: /usr/local/cuda-10.1
Storing parameters to cuda.buildinfo.generated
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Preprocessing library for cuda-0.10.0.0..
c2hs: Errors during expansion of binding hooks:
src/Foreign/CUDA/Driver/Graph/Capture.chs:72: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuStreamBeginCapture' in the header file.
I'm on Ubuntu 18.04, c2hs 0.28.6. I'm pretty new to Haskell, so there's a good chance I'm missing something obvious, but any help would be appreciated as I've been wrestling with it for a few hours. Thanks!
Originally reported by David Duke.
While working with the Haskell CUDA library on OS X 10.11 I started getting a strange set of behaviours, and wondered if you had come across anything similar. I recently updated both my GHC installation (to 8.0.1) and my CUDA toolkit (to 7.5). I therefore wanted to update Accelerate etc., but noted that your cuda package was only noted up to 7.0. As I don't believe there are substantial changes from 7.0 -> 7.5 I thought it should still work (and I need the later CUDA for work not involving Haskell).
However, I found that Haskell code calling the CUDA library was aborting, and tracked the failure down to the call to cuInit (made through "initialise" in your library) returning error code 2 (CUDA_ERROR_OUT_OF_MEMORY). It's not clear why this should be happening, and to explore further I:
Given the simplicity of the two programs, I'm scratching my head for possible causes: when called from C, the wrapper is showing the correct arg and result; when called from Haskell it shows the correct arg but the wrong result! Here are the compiler invocations and runtime results (programs are attached):
~> gcc -c -I /usr/local/cuda/include cuwrap.c
~> ghc callFromHs.hs cuwrap.o -L /usr/local/cuda/lib/ -lcuda
~> gcc -o callFromC callFromC.c cuwrap.o -L /usr/local/cuda/lib/ -lcuda
~> ./callFromC
Running main.
cuInit called: arg 0, result 0
Main completed, result 0
~> ./callFromHs
Running Main
cuInit called: arg 0, result 2
Main completed, result 2
I haven't had a chance to regress to ghc-7.10.3, and was also planning to try the code on linux once Cuda is reinstalled next week. Wondered if you had come across anything similar - or could check what happens on a different configuration?
Attachments: https://gist.github.com/tmcdonell/ee7c5183633a3687dafd15023f15a914
Hi Trevor
Trying to bring my pet project up to date, I'm seeing:
cuda > /tmp/stack-19b9ea176767e51e/cuda-0.10.1.0/Setup.hs:199:29: error:
cuda > • Couldn't match expected type ‘PerCompilerFlavor [String]’
cuda > with actual type ‘[(CompilerFlavor, [[Char]])]’
As an aside, something else is reporting:
WARNING: Ignoring cuda's bounds on Cabal (>=1.24 && <3); using Cabal-3.0.1.0.
but the Cabal version branching around this error is already guarded by #if MIN_VERSION_Cabal(3,0,0)?
Full error text:
cuda > /tmp/stack-19b9ea176767e51e/cuda-0.10.1.0/Setup.hs:199:29: error:
cuda > • Couldn't match expected type ‘PerCompilerFlavor [String]’
cuda > with actual type ‘[(CompilerFlavor, [[Char]])]’
cuda > • In the ‘options’ field of a record
cuda > In the second argument of ‘($)’, namely
cuda > ‘emptyBuildInfo
cuda > {ccOptions = ccOptions', ldOptions = ldOptions',
cuda > extraLibs = extraLibs', extraGHCiLibs = extraGHCiLibs',
cuda > extraLibDirs = extraLibDirs', frameworks = frameworks',
cuda > extraFrameworkDirs = frameworkDirs',
cuda > options = [(GHC, ghcOptions) | os /= Windows],
cuda > customFieldsBI = [c2hsExtraOptions]}’
cuda > In a stmt of a 'do' block:
cuda > buildInfo' <- addSystemSpecificOptions
cuda > $ emptyBuildInfo
cuda > {ccOptions = ccOptions', ldOptions = ldOptions',
cuda > extraLibs = extraLibs', extraGHCiLibs = extraGHCiLibs',
cuda > extraLibDirs = extraLibDirs', frameworks = frameworks',
cuda > extraFrameworkDirs = frameworkDirs',
cuda > options = [(GHC, ghcOptions) | os /= Windows],
cuda > customFieldsBI = [c2hsExtraOptions]}
cuda > |
cuda > 199 | , options = [(GHC, ghcOptions) | os /= Windows]
cuda > | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
I've managed to get the code to compile for CUDA 12.2, but I'm not seeing any release/12 branch to open a pull request against.
Hi, I have a Windows-based platform and I'm trying to install your package (cuda-0.6.5.0). I'm using 32-bit GHC 7.8.3 and have CUDA 6.5 toolkit installed.
When I issue the cabal install cuda command, the configure script fails to find the CUDA library containing the cuDriverGetVersion function. This happens even though I have properly prepared the CPATH, LIBRARY_PATH and PATH environment variables, providing them with the paths to the CUDA 6.5 toolkit. I was even careful enough to make sure that there are no spaces in the paths.
The output of cabal install command is provided below:
http://pastebin.com/p44eL3BP
I was able to get more detailed info by checking the configure log:
http://pastebin.com/MdT9AKrR
As may be seen, when configure tries to compile the test program using cuDriverGetVersion, it fails to link (the symbol remains unresolved), even though the gcc command clearly contains the proper library path (I made sure that it contains the cuda.lib file). The -lcuda flag works properly; it's the lib that doesn't export the searched function.
I have found that this package was installed successfully on Windows by the owner of this repo: https://github.com/mainland/cuda
However, the workarounds applied there are from March 2013, when the CUDA version was 5.0, while I need to use 6.5.
Were any of those changes merged?
Could you provide any suggestions on how to get your cuda bindings to work on Windows?
The Jetson board has compute capability 3.2. Maybe add it to the list of known devices?
Hello!
I've installed both cuda-6.5 and cuda-7.5 for 64-bit Linux using the runfiles from NVIDIA's website.
This is a minor thing, but I had problems supplying the libcuda.so file: it was hiding in cuda-x.y/lib64/stubs, and I had to point cuda.buildinfo at it by hand.
It would be nice if this subdirectory could be found automatically, so one could install the whole package from Hackage.
Hello,
I recently came back to GPU programming and checked out what was new. I've just read about CUDA 8, but saw that it's not yet supported by this library (not surprising, given it's been out for a day =P). Are there any ongoing plans for supporting it, or should I put a patch together myself (admittedly quite a big one, I guess) if I ever need CUDA 8 features?
Cheers
The install fails with the following error:
phaazon@illusion /tmp
$ cabal update
Downloading the latest package list from hackage.haskell.org
phaazon@illusion /tmp
$ cabal install cuda
Resolving dependencies...
Configuring cuda-0.6.5.1...
Failed to install cuda-0.6.5.1
Build log ( C:\Users\phaazon\AppData\Roaming\cabal\logs\cuda-0.6.5.1.log ):
[1 of 1] Compiling Main ( C:\Users\phaazon\AppData\Local\Temp\cuda-0.6.5.1-13576\cuda-0.6.5.1\dist\setup\setup.hs, C:\Users\phaazon\AppData\Local\Temp\cuda-0.6.5.1-13576\cuda-0.6.5.1\dist\setup\Main.o )
Linking C:\Users\phaazon\AppData\Local\Temp\cuda-0.6.5.1-13576\cuda-0.6.5.1\dist\setup\setup.exe ...
Configuring cuda-0.6.5.1...
checking for gcc... gcc c:\Program Files\Haskell Platform\2014.2.0.0\mingw\bin\gcc.exe
checking whether the C compiler works... no
configure: error: in `/tmp/cuda-0.6.5.1-13576/cuda-0.6.5.1':
configure: error: C compiler cannot create executables
See `config.log' for more details
cabal.exe: Error: some packages failed to install:
cuda-0.6.5.1 failed during the configure step. The exception was:
ExitFailure 77
I am trying to install the Haskell package cuda from the git repository on Windows 8.1 with CUDA 8.0, because it is listed as a dependency for accelerate-llvm-ptx-1.0.0.1 (AccelerateHS/accelerate-llvm#23 (comment)). When I ran cabal install inside my local copy of the Haskell cuda repository, I got an error:
PS G:\DEVELOP\haskell-cuda> cabal install
Resolving dependencies...
Configuring cuda-0.8.0.0...
Building cuda-0.8.0.0...
Failed to install cuda-0.8.0.0
Build log ( C:\Users\Steven\AppData\Roaming\cabal\logs\cuda-0.8.0.0.log ):
Using build information from 'cuda.buildinfo.generated'.
Provide a 'cuda.buildinfo' file to override this behaviour.
Building cuda-0.8.0.0...
Preprocessing library cuda-0.8.0.0...
<no location info>: error:
module `Foreign.CUDA.Path' is a package module
cabal: Leaving directory '.'
cabal.exe: Error: some packages failed to install:
cuda-0.8.0.0 failed during the building phase. The exception was:
ExitFailure 1
Here is the information of the CUDA toolkit installed on my machine:
PS G:\DEVELOP\haskell-cuda> nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2016 NVIDIA Corporation
Built on Sat_Sep__3_19:05:48_CDT_2016
Cuda compilation tools, release 8.0, V8.0.44
I don't understand what the error means.
Hi,
I get the following error when trying to install the cuda package via cabal.
Preprocessing library cuda-0.6.5.0...
<command-line>: error: "1" may not appear in macro parameter list
<command-line>: warning: missing whitespace after the macro name
c2hs: Error during preprocessing custom header file
environment:
-bash-4.1$ ghc --version
The Glorious Glasgow Haskell Compilation System, version 7.8.3
-bash-4.1$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Thu_Jul_17_21:41:27_CDT_2014
Cuda compilation tools, release 6.5, V6.5.12
Cheers
Hi,
When trying to install master with GHC 7.8.3 I get the following error:
./Foreign/CUDA/Driver/Marshal.chs:165: (column 15) [ERROR] >>> Unknown identifier!
Cannot find a definition for `cuMemHostRegister' in the header file.
Here is the information about my Cuda install (fresh an ArchLinux):
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2014 NVIDIA Corporation
Built on Thu_Jul_17_21:41:27_CDT_2014
Cuda compilation tools, release 6.5, V6.5.12
Any idea?
Hi! I'm trying to get information about available memory, but I keep getting this error. Am I missing something?
module Main where
import qualified Foreign.CUDA.Driver.Marshal as DMarschal
import qualified Foreign.CUDA.Driver.Device as DDevice
main :: IO ()
main = do
  DDevice.initialise []
  print =<< DMarschal.getMemInfo
error:
Main.hs: CUDA Exception: invalid device context
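With the Driver API, getMemInfo needs a current CUDA context, and initialise alone does not create one, which matches the "invalid device context" exception. A sketch of the likely fix, assuming the usual Foreign.CUDA.Driver entry points (device, create, destroy); this requires a working GPU, so it is untested here:

```haskell
module Main where

import qualified Foreign.CUDA.Driver as CUDA

main :: IO ()
main = do
  CUDA.initialise []
  dev <- CUDA.device 0       -- pick the first CUDA device
  ctx <- CUDA.create dev []  -- make a context current on this thread
  print =<< CUDA.getMemInfo  -- (free, total) bytes
  CUDA.destroy ctx
```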
C:\Users\Marko\Documents\Visual Studio 2015\Projects\Multi-armed Bandit Experiments\Spiral Haskell>cabal install cuda
Warning: The package list for 'hackage.haskell.org' is 40.4 days old.
Run 'cabal update' to get the latest list of available packages.
Resolving dependencies...
Configuring cuda-0.7.5.1...
Failed to install cuda-0.7.5.1
Build log ( C:\Users\Marko\AppData\Roaming\cabal\logs\cuda-0.7.5.1.log ):
[1 of 1] Compiling Main ( C:\Users\Marko\AppData\Local\Temp\cabal-tmp-5100\cuda-0.7.5.1\dist\setup\setup.hs, C:\Users\Marko\AppData\Local\Temp\cabal-tmp-5100\cuda-0.7.5.1\dist\setup\Main.o )
Linking C:\Users\Marko\AppData\Local\Temp\cabal-tmp-5100\cuda-0.7.5.1\dist\setup\setup.exe ...
Warning: cuda.cabal: Ignoring unknown section type: custom-setup
Configuring cuda-0.7.5.1...
Found CUDA toolkit at: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5
setup.exe: 'nm' exited with an error:
nm: 'C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\x64\cudart.lib':
No such file
cabal: Error: some packages failed to install:
cuda-0.7.5.1 failed during the configure step. The exception was:
ExitFailure 1
In fact, the aforementioned directory C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\x64\ does not exist; instead the file is in C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v7.5\lib\x64.
There is also one other bug: as I have multiple versions of the CUDA SDK installed, at first it tried using CUDA 8.0, which was in CUDA_PATH. I had to change it to 7.5 manually. Since the package does not support the most up-to-date version at the moment, I would suggest using the CUDA_PATH_V7_5 variable to look for the default directory.
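The suggested lookup order could be sketched as a small helper that prefers a version-pinned variable before falling back to the generic one. The variable names follow NVIDIA's Windows installer convention; the helper itself is hypothetical, not code from this package's Setup.hs:

```haskell
import Control.Applicative ((<|>))
import Data.Maybe (fromMaybe)
import System.Environment (lookupEnv)

-- Prefer the pinned CUDA_PATH_V7_5 over the generic CUDA_PATH;
-- (<|>) on Maybe keeps the first Just it finds
findCudaPath :: IO String
findCudaPath = do
  pinned  <- lookupEnv "CUDA_PATH_V7_5"
  generic <- lookupEnv "CUDA_PATH"
  pure (fromMaybe "<no CUDA path set>" (pinned <|> generic))

main :: IO ()
main = putStrLn =<< findCudaPath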
Hi, I'm getting the following error with Haskell cuda and also accelerate-llvm-ptx:
~ { awscli docker-machine } ❯ docker run --runtime=nvidia -it tmcdonell/accelerate-llvm
root@7d05fbb8d8d5:/opt/accelerate-llvm# nvidia-smi
Fri Sep 7 07:32:48 2018
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.44 Driver Version: 396.44 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... Off | 00000000:00:1E.0 Off | 0 |
| N/A 45C P0 42W / 300W | 0MiB / 16160MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
root@7d05fbb8d8d5:/opt/accelerate-llvm# stack install cuda
accelerate-llvm-ptx-1.3.0.0: unregistering (Dependency being unregistered: cuda-0.10.0.0)
cuda-0.10.0.0: unregistering (components added: exe:nvidia-device-query)
nvvm-0.8.0.3: unregistering (Dependency being unregistered: cuda-0.10.0.0)
cuda-0.10.0.0: build (lib + exe)
cuda-0.10.0.0: copy/register
Log files have been written to: /opt/accelerate-llvm/.stack-work/logs/
Copying from /opt/accelerate-llvm/.stack-work/install/x86_64-linux/lts-12.0/8.4.3/bin/nvidia-device-query to /root/.local/bin/nvidia-device-query
Copied executables to /root/.local/bin:
- nvidia-device-query
root@7d05fbb8d8d5:/opt/accelerate-llvm# nvidia-device-query
nvidia-device-query: Status.toEnum: Cannot match -1
CallStack (from HasCallStack):
error, called at src/Foreign/CUDA/Driver/Error.chs:372:22 in cuda-0.10.0.0-Lq313TS76CJ6ufZOzm0zPz:Foreign.CUDA.Driver.Error
Relevant information
root@7d05fbb8d8d5:/opt/accelerate-llvm# echo $CUDA_PKG_VERSION
9-2=9.2.148-1
root@7d05fbb8d8d5:/opt/accelerate-llvm# echo $CUDA_VERSION
9.2.148
root@7d05fbb8d8d5:/opt/accelerate-llvm# nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Tue_Jun_12_23:07:04_CDT_2018
Cuda compilation tools, release 9.2, V9.2.148
This also happens if I omit --runtime=nvidia and/or use tmcdonell/accelerate-llvm:stable, and nvidia-smi seems to work fine in Docker, so I don't think it's a Docker issue.
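The Status.toEnum: Cannot match -1 crash above is the generated Enum instance calling error on a status code it does not recognise. A defensive decoding pattern, shown on a hypothetical status type rather than the package's real c2hs-generated one, keeps the decoder total by carrying unknown codes instead of aborting:

```haskell
-- Hypothetical status type; the real Status is generated by c2hs
data Status = Success | NotInitialised | Unknown Int
  deriving (Show, Eq)

-- Total: unrecognised codes (such as the -1 above) are preserved,
-- not fatal, so callers can report them
toStatus :: Int -> Status
toStatus 0 = Success
toStatus 3 = NotInitialised
toStatus n = Unknown n

main :: IO ()
main = print (toStatus (-1))
```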
When using ghc-7.8 under Linux (test machine was Ubuntu 12.04), in the link step ghc decides to put the -lcuda and -lcudart flags right after the object files, rather than with the other libraries. This results in the linker failing with unknown symbols, because -lcuda appears before -lHScuda-x.x.x.x, which requires them. It appears to work fine in ghci.
It seems to work if we instead just use the extra-libraries field of the cabal file.
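For reference, the workaround amounts to letting Cabal place the libraries on the link line itself. A sketch of the relevant .cabal fields (the lib directory is illustrative; adjust to the local toolkit):

```cabal
extra-libraries: cudart cuda
extra-lib-dirs:  /usr/local/cuda/lib64
```

Cabal emits these after the package's own objects and Haskell libraries, which avoids the ordering problem described above.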
I'd like to avoid installing CUDA using Nvidia's installer if I can help it, as I have a carefully crafted, working, bumblebee setup which I don't want to mess up.
Alas, no luck with the stock nvidia-cuda-toolkit in Ubuntu 16.04. Can someone who managed to make it work maybe give a workaround suggestion? While this issue looks similar to #54, I do not have any /usr/lib/cuda directory, so I'm filing it as a separate issue in case it helps someone else.
$ dpkg -l "nvidia-cuda-toolkit" "llvm*" | grep "^ii"
ii llvm-3.8 1:3.8-2ubuntu1 amd64 Modular compiler and toolchain technologies
ii llvm-3.8-dev 1:3.8-2ubuntu1 amd64 Modular compiler and toolchain technologies, libraries and headers
ii llvm-3.8-runtime 1:3.8-2ubuntu1 amd64 Modular compiler and toolchain technologies, IR interpreter
ii nvidia-cuda-toolkit 7.5.18-0ubuntu1 amd64 NVIDIA CUDA development toolkit
$ cd accelerate-examples/
$ readlink stack.yaml
stack-8.6.yaml
$ git pull
Already up to date.
$ stack clean
$ stack build
cuda > configure
cuda > [1 of 2] Compiling Main ( /tmp/stack7469/cuda-0.10.1.0/Setup.hs, /tmp/stack7469/cuda-0.10.1.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/Main.o )
cuda > [2 of 2] Compiling StackSetupShim ( /home/lestephane/.stack/setup-exe-src/setup-shim-mPHDZzAJ.hs, /tmp/stack7469/cuda-0.10.1.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/StackSetupShim.o )
cuda > Linking /tmp/stack7469/cuda-0.10.1.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/setup ...
cuda > Configuring cuda-0.10.1.0...
cuda > Found CUDA toolkit at: /usr
cuda > setup: Could not find path: ["/usr/lib64"]
cuda >
-- While building package cuda-0.10.1.0 using:
/tmp/stack7469/cuda-0.10.1.0/.stack-work/dist/x86_64-linux/Cabal-2.4.0.1/setup/setup --builddir=.stack-work/dist/x86_64-linux/Cabal-2.4.0.1 configure --user --package-db=clear --package-db=global --package-db=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/pkgdb --libdir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/lib --bindir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/bin --datadir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/share --libexecdir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/libexec --sysconfdir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/etc --docdir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/doc/cuda-0.10.1.0 --htmldir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/doc/cuda-0.10.1.0 --haddockdir=/home/lestephane/.stack/snapshots/x86_64-linux/aa272c68412dcda5f988dca48757e9c971bc29607f65b0a3e16436e1989f584d/8.6.5/doc/cuda-0.10.1.0 --dependency=Cabal=Cabal-2.4.1.0-4t2ut7bCQNuEj8DDES6BZk --dependency=base=base-4.12.0.0 --dependency=bytestring=bytestring-0.10.8.2 --dependency=directory=directory-1.3.3.0 --dependency=filepath=filepath-1.4.2.1 --dependency=pretty=pretty-1.1.3.6 --dependency=template-haskell=template-haskell-2.14.0.0 --dependency=uuid-types=uuid-types-1.0.3-Autqzm2g4auIYSV6nkCRLV --exact-configuration --ghc-option=-fhide-source-paths
Process exited with code: ExitFailure 1
Progress 1/6
$ dpkg -L nvidia-cuda-toolkit
/.
/etc
/etc/nvcc.profile
/usr
/usr/bin
/usr/bin/nvdisasm
/usr/bin/nvcc
/usr/bin/nvlink
/usr/bin/bin2c
/usr/bin/filehash
/usr/bin/fatbinary
/usr/bin/cudafe
/usr/bin/cuobjdump
/usr/bin/cudafe++
/usr/bin/cuda-memcheck
/usr/bin/nvprune
/usr/bin/ptxas
/usr/lib
/usr/lib/nvidia-cuda-toolkit
/usr/lib/nvidia-cuda-toolkit/bin
/usr/lib/nvidia-cuda-toolkit/bin/g++
/usr/lib/nvidia-cuda-toolkit/bin/crt
/usr/lib/nvidia-cuda-toolkit/bin/crt/prelink.stub
/usr/lib/nvidia-cuda-toolkit/bin/crt/link.stub
/usr/lib/nvidia-cuda-toolkit/bin/nvcc
/usr/lib/nvidia-cuda-toolkit/bin/gcc
/usr/lib/nvidia-cuda-toolkit/bin/cicc
/usr/lib/nvidia-cuda-toolkit/libdevice
/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.compute_30.10.bc
/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.compute_35.10.bc
/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.compute_20.10.bc
/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.compute_50.10.bc
/usr/include
/usr/include/nvvm.h
/usr/share
/usr/share/lintian
/usr/share/lintian/overrides
/usr/share/lintian/overrides/nvidia-cuda-toolkit
/usr/share/doc
/usr/share/doc/nvidia-cuda-toolkit
/usr/share/doc/nvidia-cuda-toolkit/copyright
/usr/share/doc/nvidia-cuda-toolkit/README.Debian
/usr/share/man
/usr/share/man/man1
/usr/share/man/man1/cuda-binaries.1.gz
/usr/lib/nvidia-cuda-toolkit/bin/nvcc.profile
/usr/share/doc/nvidia-cuda-toolkit/changelog.Debian.gz
/usr/share/man/man1/cuobjdump.1.gz
/usr/share/man/man1/nvdisasm.1.gz
/usr/share/man/man1/cuda-memcheck.1.gz
/usr/share/man/man1/nvcc.1.gz
/usr/share/man/man1/nvprune.1.gz
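The listing above suggests why configure fails: the Debian/Ubuntu `nvidia-cuda-toolkit` package installs nothing under `/usr/lib64`, which is the path the setup script is looking for. On these distributions the CUDA runtime libraries typically live in the multiarch directory (`/usr/lib/x86_64-linux-gnu`, usually supplied by a separate `libcudart` package) instead. As an untested sketch of a workaround, one could try pointing stack at the actual library location via `extra-lib-dirs` in `stack.yaml` (whether the package's custom Setup honours these directories is an assumption):

```yaml
# stack.yaml — hypothetical workaround, assuming the runtime libraries
# are in the Debian multiarch directory rather than /usr/lib64
extra-lib-dirs:
- /usr/lib/x86_64-linux-gnu
extra-include-dirs:
- /usr/include
```

Alternatively, installing NVIDIA's own toolkit packages (which use the `/usr/local/cuda/lib64` layout the setup script expects) may sidestep the path mismatch entirely.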