
Comments (12)

ili3p commented on July 27, 2024

Exactly. That means there is no point in running one model on two GPUs.

There are only two reasons to run a model on multiple GPUs:

  1. The model plus a reasonable batch is too big to fit in one GPU's memory, i.e. you don't want to use too small a batch size, so you split the batch and run each part on a separate GPU. This is what Torch/PyTorch DataParallel does (see the sketch below).

  2. There is a GPU compute bottleneck, so you want to split the computation across multiple GPUs.

In your case, 1 doesn't apply, and 2 doesn't make sense if you are going to run two models on the same two GPUs; it's always better to run each model on its own GPU. Running a model on multiple GPUs comes at a cost.
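
For reference, a minimal sketch of reason 1 in PyTorch, assuming two visible GPUs; the layer sizes and device ids are made up, not taken from this repo:

  import torch
  import torch.nn as nn

  # DataParallel replicates the model on both GPUs and splits each batch
  # between them, so a batch that is too large for one GPU still fits.
  model = nn.Sequential(nn.Linear(2048, 1024), nn.ReLU(), nn.Linear(1024, 3000))
  model = nn.DataParallel(model, device_ids=[0, 1]).cuda()

  batch = torch.randn(128, 2048).cuda()  # 128 samples, split 64/64 across the two GPUs
  output = model(batch)                  # results are gathered back on GPU 0
  print(output.shape)                    # torch.Size([128, 3000])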

Btw, there is no data-loading bottleneck. You should store the image features as compressed numpy arrays; HDF5 is not made for this use case. See the MCB code for how to store the features as compressed numpy (a rough sketch is below). For some reason the Caffe ResNet model outputs sparser image features that compress really well, so the VQA2 training set takes only 19 GB and the val set 9 GB. You can easily cache 28 GB in 64 GB of RAM, even if you use the Visual Genome dataset, since half of the VG images are from COCO, i.e. the same images as VQA2.
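
A rough sketch of that storage scheme, assuming one compressed .npz file per image; the directory layout and helper names are illustrative, not the actual MCB or vqa.pytorch API:

  import numpy as np

  def save_features(image_name, features, out_dir='data/features'):
      # features: e.g. a (2048, 14, 14) float32 ResNet activation map
      np.savez_compressed('{}/{}.npz'.format(out_dir, image_name), x=features)

  def load_features(image_name, out_dir='data/features'):
      with np.load('{}/{}.npz'.format(out_dir, image_name)) as data:
          # decompressed on read; many small files are easy for the OS page cache to keep warm
          return data['x']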

And finally, the stack trace you gave above says KeyboardInterrupt, i.e. someone pressed Ctrl+C.

Cadene commented on July 27, 2024

@ilija139 It was not clear to me that caching the features in RAM/tmpfs was better than using HDF5, which is designed for efficient I/O on large volumes of data. It seems that I was wrong. Unfortunately, I don't have time to add this feature for now.

Thanks for your answer.

Cadene commented on July 27, 2024

I never experimented with such a thing in a multi-GPU setup. However, it was more efficient for me to run one experiment per GPU.

I will leave the issue open just in case someone else encounters the same problem, but I will edit the title.

ili3p commented on July 27, 2024

@ahmedmagdiosman If the only thing you changed was to run the models on separate GPUs, then this is definitely not the dataloader's limitation. Also, changing the number of workers didn't solve the problem, so it's definitely PyTorch's, or more specifically CUDA's. It's a known problem when you run multiple processes on one GPU device, and it only gets worse when you run two models on two shared GPUs at the same time...

And about the caching idea: using the compressed numpy features instead of HDF5 will still help no matter how much RAM you have. Ten times less data to read means roughly ten times faster I/O.

ahmedmagdiosman commented on July 27, 2024

@ilija139 It seems there's already an issue on the PyTorch page:
pytorch/pytorch#2245

As for the data, I actually already have the compressed numpy features, but I haven't gotten around to integrating them with this project yet. I noticed that I didn't have this disk bottleneck with the numpy features from MCB.

Thank you both for your comments!

Cadene commented on July 27, 2024

It seems weird to me.

How many threads (workers) are you using? What is your batch size? Do you run one model per GPU? Are you loading the data from a dedicated SSD?

Did you try to debug with --workers 0 to be sure you get the trace from the data-loading functions?

ahmedmagdiosman commented on July 27, 2024

I'm using the default parameters (--workers 2, batch_size 128). I am running both models on 2 GPUs with CUDA_VISIBLE_DEVICES=0,2, i.e. both models share the same 2 GPUs.
Data is split between a non-dedicated SSD (VQA) and a fast HDD (Visual Genome, via a soft link); total disk read is around 300 MB/s for one model.

CPU: 10-core Xeon
GPUs: 2x TITAN X

I tried running one model with --workers 0 while keeping the other at 2: same problem. HOWEVER, with both running --workers 0 it seemed to work for 3 iterations and then froze again. Still no stack trace 😢

EDIT: Apparently there's a Python bug that causes Ctrl-C not to be registered with multiprocessing pools:
https://stackoverflow.com/questions/1408356/keyboard-interrupts-with-pythons-multiprocessing-pool
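
A minimal sketch of the workaround commonly suggested in that thread (the work() function is just a placeholder): wait on the async result with a timeout instead of a plain blocking map(), so the parent process still reacts to Ctrl+C while the workers run.

  import multiprocessing

  def work(x):
      return x * x

  if __name__ == '__main__':
      pool = multiprocessing.Pool(processes=4)
      try:
          # get() with a timeout keeps the parent interruptible
          results = pool.map_async(work, range(100)).get(timeout=9999)
      except KeyboardInterrupt:
          pool.terminate()
          raise
      else:
          pool.close()
      pool.join()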

EDIT 2: Running only one model with --workers 0 terminates with the following stack trace:

  File "train.py", line 370, in <module>
    main()
  File "train.py", line 216, in main
    exp_logger, epoch, args.print_freq)
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/engine.py", line 12, in train
    for i, sample in enumerate(loader):
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/dataloader.py", line 166, in __next__
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/aosman/vqa/vqa.pytorch/vqa/lib/dataloader.py", line 166, in <listcomp>
    batch = self.collate_fn([self.dataset[i] for i in indices])
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/vqa.py", line 223, in __getitem__
    item = self.dataset_vgenome[index - len(self.dataset_vqa)]
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/vgenome.py", line 46, in __getitem__
    item_img = self.dataset_img.get_by_name(item_qa['image_name'])
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 66, in get_by_name
    return self[index]
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 37, in __getitem__
    item['visual'] = self.get_features(index)
  File "/home/aosman/vqa/vqa.pytorch/vqa/datasets/features.py", line 42, in get_features
    return torch.Tensor(self.dataset_features[index])
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper (/tmp/pip-s_7obrrg-build/h5py/_objects.c:2840)
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper (/tmp/pip-s_7obrrg-build/h5py/_objects.c:2798)
  File "/home/aosman/miniconda2/envs/vqa/lib/python3.6/site-packages/h5py/_hl/dataset.py", line 494, in __getitem__
    self.id.read(mspace, fspace, arr, mtype, dxpl=self._dxpl)
KeyboardInterrupt

ili3p commented on July 27, 2024

Try running the models on separate GPUs. There is no point in running them on 2 GPUs if they have to share them.
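
For example, a sketch of how each training process can be pinned to its own GPU (the device ids are illustrative): set CUDA_VISIBLE_DEVICES before CUDA is initialised and launch one process per model.

  import os

  # Must be set before torch initialises CUDA; use '0' for one model
  # and '1' (or '2') for the other, in two separate processes.
  os.environ['CUDA_VISIBLE_DEVICES'] = '0'

  import torch  # imported after the env var, so this process only sees that one device
  print(torch.cuda.device_count())  # -> 1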

ahmedmagdiosman commented on July 27, 2024

@ilija139 There is a point: there's a disk bottleneck, not a computational bottleneck.

Cadene commented on July 27, 2024

@ilija139 I am curious about what you've just said about HDF5. What is the use case for HDF5, in your view? Also, a pull request making it possible to use the numpy files from the MCB code would be greatly appreciated :)

Thanks

ili3p commented on July 27, 2024

@Cadene HDF5 = Hierarchical Data Format; where is the hierarchy in this case? I'm also not sure how well the OS caches data read from HDF5 in RAM. If you store the data as plain files, you can either move them to tmpfs or let the OS page cache keep them in RAM.
Anyway, there is not much point in discussing it, since HDF5 needs almost 10 times more space here, so it's clearly not a good choice.

I don't have time to properly modify your code to use the numpy features. But the modifications are trivial and you can see how I did it here: https://github.com/ilija139/vqa.pytorch/tree/numpy_features
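
A rough sketch of the kind of __getitem__ change involved; the class and field names are illustrative, not the actual classes in that branch:

  import numpy as np
  import torch
  from torch.utils.data import Dataset

  class NpzFeaturesDataset(Dataset):
      def __init__(self, image_names, features_dir):
          self.image_names = image_names
          self.features_dir = features_dir

      def __len__(self):
          return len(self.image_names)

      def __getitem__(self, index):
          name = self.image_names[index]
          # one compressed .npz file per image instead of one big hdf5 dataset
          with np.load('{}/{}.npz'.format(self.features_dir, name)) as data:
              visual = torch.from_numpy(data['x'])
          return {'visual': visual, 'name': name}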

And for how to obtain the numpy features, see https://github.com/akirafukui/vqa-mcb/blob/master/preprocess/extract_resnet.py

Note that, as I already said, for some reason the Caffe ResNet model outputs features that compress much better than the Torch ResNet features; the files end up about 2-3 times smaller.

ahmedmagdiosman commented on July 27, 2024

@ilija139 derp, this is what happens when I don't sleep 🤦‍♂️
You're right, I have no idea why this made sense to me.

Thanks for the idea about caching the data; however, I don't have enough RAM for that to work.

@Cadene I tried running training for 2 models, this time with 1 model per GPU, and it works! No issues so far. So I think there's some kind of semaphore/locking mechanism causing both dataloaders to fight when the GPUs are shared. Honestly, I have no idea whether this is the dataloader's limitation or PyTorch's.
