Comments (3)
Additionally, we tried passing with_gpu=False to glt_dataset.init_node_features, and it throws an IndexError instead of the CUDA failure above when loading the first batch of the train dataloader. To reproduce this issue, we run the same bash command as above (using 2 GPUs by default) and update lines 250-258 of the min-rep code to:
glt_dataset.init_node_features(
    node_feature_data=igbh_dataset.feat_dict,
    with_gpu=False,
    # split_ratio=0.15 * min(num_gpus, 4),
    # device_group_list=[
    #     glt.data.DeviceGroup(idx, group)
    #     for idx, group in enumerate(gpu_groups)
    # ]
)
And the following error is thrown:
Traceback (most recent call last):
  File "min_rep.py", line 158, in <module>
    torch.multiprocessing.spawn(run,
  File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 240, in spawn
    return start_processes(fn, args, nprocs, join, daemon, start_method='spawn')
  File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
    while not context.join():
  File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 160, in join
    raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:

-- Process 1 terminated with the following error:
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
    fn(i, *args)
  File "/workspace/repository/min_rep.py", line 75, in run
    for batch in train_loader:
  File "/usr/local/lib/python3.8/dist-packages/graphlearn_torch/loader/neighbor_loader.py", line 104, in __next__
    result = self._collate_fn(out)
  File "/usr/local/lib/python3.8/dist-packages/graphlearn_torch/loader/node_loader.py", line 99, in _collate_fn
    x_dict = {ntype : self.data.get_node_feature(ntype)[ids] for ntype, ids in sampler_out.node.items()}
  File "/usr/local/lib/python3.8/dist-packages/graphlearn_torch/loader/node_loader.py", line 99, in <dictcomp>
    x_dict = {ntype : self.data.get_node_feature(ntype)[ids] for ntype, ids in sampler_out.node.items()}
  File "/usr/local/lib/python3.8/dist-packages/graphlearn_torch/data/feature.py", line 145, in __getitem__
    return self.cpu_get(ids)
  File "/usr/local/lib/python3.8/dist-packages/graphlearn_torch/data/feature.py", line 163, in cpu_get
    return self.feature_tensor[ids]
IndexError: index 1004440 is out of bounds for dimension 0 with size 1000000
We have checked the train indices and they are all within bounds (min 0, max 599999, both well under 1000000), so we're not sure where index 1004440 is being yielded from.
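For reference, the IndexError itself is easy to reproduce in isolation: indexing a CPU tensor with an id past its first dimension raises exactly this error. A minimal standalone sketch (sizes taken from the traceback; feature_tensor here is just a stand-in for the CPU feature table GLT builds when with_gpu=False, not its actual API):

import torch

# Stand-in for the CPU feature table built when with_gpu=False:
# 1,000,000 rows, matching "size 1000000" in the traceback above.
feature_tensor = torch.randn(1_000_000, 128)

ids = torch.tensor([1004440])  # the out-of-range id from the traceback
try:
    feature_tensor[ids]
except IndexError as e:
    print(e)  # index 1004440 is out of bounds for dimension 0 with size 1000000

Since the train split itself is in range, the out-of-range id appears to be introduced somewhere between sampling and the feature lookup in cpu_get.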
I think this has been solved by #62. Would you try it again?
We have fixed the bug in #62 and added a single-node multi-GPU example.
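For anyone hitting this before picking up that example, the single-node multi-GPU driver pattern is the standard torch.multiprocessing one already used by the min-rep script; a minimal sketch (the run body and names are illustrative placeholders, not GLT's actual example code):

import torch
import torch.multiprocessing as mp

def run(rank, world_size):
    # One worker process per GPU; each rank pins itself to its own device,
    # then builds its per-rank dataset/loader and trains (elided here).
    torch.cuda.set_device(rank)

if __name__ == '__main__':
    world_size = torch.cuda.device_count()  # e.g. 2 GPUs by default
    mp.spawn(run, args=(world_size,), nprocs=world_size, join=True)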