Comments (10)

peterwittek avatar peterwittek commented on July 18, 2024

Can you run deviceQuery from the CUDA SDK without errors?

shomedas avatar shomedas commented on July 18, 2024

~/softwares/cuda/NVIDIA_CUDA-6.5_Samples/bin/x86_64/linux/release$ ./deviceQuery
./deviceQuery Starting...

CUDA Device Query (Runtime API) version (CUDART static linking)

Detected 1 CUDA Capable device(s)

Device 0: "GeForce GT 740"
CUDA Driver Version / Runtime Version 6.5 / 6.5
CUDA Capability Major/Minor version number: 3.0
Total amount of global memory: 2047 MBytes (2146762752 bytes)
( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores
GPU Clock rate: 993 MHz (0.99 GHz)
Memory Clock rate: 2500 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 262144 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >

deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GT 740
Result = PASS

shomedas avatar shomedas commented on July 18, 2024

When I downsized the data to 10,000 lines, it ran properly:
somoclu -k 1 --rows 20 --columns 20 temp_data/temp3.txt temp_data/temp3
nVectors: 10000 nVectorsPerRank: 10000 nDimensions: 1
Done!
Saving best matching units temp_data/temp3.bm
Saving Codebook temp_data/temp3.wts

peterwittek avatar peterwittek commented on July 18, 2024

The output of deviceQuery is fine. Something is wrong with your input file: Somoclu only finds one dimension, which means it cannot parse the file correctly. Are you sure the separator is a space?
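
If it does turn out to be comma-separated, a one-liner like this converts it to the space-separated format Somoclu expects (file names are placeholders):

tr ',' ' ' < temp_data/temp3.csv > temp_data/temp3.txt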

shomedas avatar shomedas commented on July 18, 2024

My sincere apologies, it was a CSV file (comma-separated).
But the error still persists with the correct file format. Please see below:

$ somoclu -k 1 --rows 50 --columns 50 temp_data/data_new.txt temp_data/data_new
nVectors: 1000000 nVectorsPerRank: 1000000 nDimensions: 4
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Aborted (core dumped)

A few lines of the data:

0.00029265 0.0041885 0.0028948 0.90801
0.0013115 -0.01676 0.016237 0.99022
0.0021263 -0.025185 0.029451 0.9494
0.0025654 -0.025146 0.021604 0.98621
0.0028981 -0.02921 0.018316 0.95469
0.0037457 -0.030397 0.017167 0.90916
0.0049089 -0.03301 0.031392 0.99633
0.0054892 -0.018812 0.026261 0.99153
0.0039269 -0.0037259 -0.0020158 0.71632
0.0050334 -0.024387 0.026637 0.94973
0.0074461 -0.012099 0.019255 0.98008
0.004509 0.017106 -0.0039953 0.97827
0.0099971 -0.0066184 0.033406 0.93786
0.0071395 0.0035909 -0.026866 0.74404
0.010354 -0.011196 0.024886 0.9814
0.007964 0.010233 -0.015688 0.9846
0.0097519 0.00038955 -0.018524 0.64318
0.0087084 -0.0029076 0.0012271 0.96746

shomedas avatar shomedas commented on July 18, 2024

It worked again when I downsized the data:
somoclu -k 1 --rows 50 --columns 50 temp_data/data_new_small.txt temp_data/data_new_small
nVectors: 10000 nVectorsPerRank: 10000 nDimensions: 4
Done!
Saving best matching units temp_data/data_new_small.bm
Saving Codebook temp_data/data_new_small.wts

peterwittek avatar peterwittek commented on July 18, 2024

Now I remember, we had this problem before. The issue is that you have a very large number of low-dimensional data points, whereas Somoclu was designed for high-dimensional spaces. It allocates an array on the GPU of size rows x columns x nVectorsPerRank. With single-precision floating-point numbers, your case comes to about 9 GByte, which is the reason for the bad_alloc.
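
Concretely: 50 rows x 50 columns x 1,000,000 vectors x 4 bytes per float = 10^10 bytes, roughly 9.3 GiB, far more than the 2 GB your GT 740 reports in deviceQuery.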

You have three ways to get around this:

  • Train the map on batches of your data: train on some initial part that fits in memory, then continue training the same map with the next batch. The tricky bit is getting the learning parameters right when you continue the training. See this for a general idea, and the sketch after this list.
  • Buy more GPUs. Convenient but expensive.
  • Contribute to Somoclu by serializing the GPU code when the requested data does not fit in memory. This is the neatest but most time-consuming solution.
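
For the first option, here is a rough sketch of the batch loop with the command-line tool. I am assuming the -c flag for passing in an initial codebook and the -r/-l flags for the starting radius and learning rate (check somoclu --help for the exact option names in your build); the batch file names are hypothetical:

# First batch: train from a random initialization
somoclu -k 1 --rows 50 --columns 50 temp_data/batch0.txt temp_data/som
# Later batches: continue from the saved codebook, with a smaller
# starting radius and learning rate than you started with
somoclu -k 1 --rows 50 --columns 50 -c temp_data/som.wts -r 10 -l 0.05 temp_data/batch1.txt temp_data/som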

shomedas avatar shomedas commented on July 18, 2024

What number do you mean by a high-dimensional data space? I can pre-process my data accordingly; I had actually reduced the dimensions before I tried Somoclu.
I do have access to a Titan X with 12 GB RAM. Will that solve the problem?

If I train in batches, the idea is to carry the learnt parameters as the initialisation for the next iteration?

peterwittek avatar peterwittek commented on July 18, 2024

I normally work on some ten thousand instances with a few thousand dimensions each. The most relevant learning parameters are the radius and the learning rate. These take some fiddling to get right if you continue the training with subsequent batches.

The Titan X is an option, or if you can get access to a GPU cluster, you can scale up your calculations to an arbitrary number of data points (depending on the size of the cluster). The nVectorsPerRank value tells you how many data points are assigned to each GPU. For instance, with five of your current GPUs, you can easily tackle your problem.
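
Something like this would launch it (assuming an MPI-enabled build of the command-line tool; the exact mpirun flags depend on your MPI implementation):

mpirun -np 5 somoclu -k 1 --rows 50 --columns 50 temp_data/data_new.txt temp_data/data_new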

peterwittek avatar peterwittek commented on July 18, 2024

Note that multiple GPUs are only supported from the command line interface. MATLAB and the other interfaces can only use a single GPU.
