
Comments (15)

HuguesTHOMAS commented on July 17, 2024

Hi,

I had some time to dig into this problem, and it seems that CUDA 10 does not work correctly with RTX 2080Ti GPUs. Here is what I found:

Tested configurations

  • CUDA9-TF1.12 / GTX 1080ti => No bug
  • CUDA10-TF1.13 / GTX 1080ti => No bug
  • CUDA9-TF1.12 / RTX 2080ti => No bug
  • CUDA10-TF1.13 / RTX 2080ti => Bug appears only in this configuration

Origin of the bug
I tracked down the NaN values in my code and found that they appear after a tf.matmul operation:

weighted_features = tf.matmul(all_weights, neighborhood_features)

Before the NaN values appear, I noticed some weird values higher than 1e10. If you print the two matrices being multiplied and the result matrix, you will see that the result is completely wrong. This seems to be caused by an internal CUDA bug. At some point, one of these mistakes leads to a value so high that it becomes NaN and the network crashes.
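
If you want to check whether your own setup is affected, here is a minimal sketch along these lines (hypothetical shapes, not the exact KPConv tensors) comparing the GPU result of tf.matmul against NumPy:

    import numpy as np
    import tensorflow as tf  # TF 1.x, as used in this thread

    # Hypothetical batch of weight and feature matrices
    a = np.random.randn(32, 64, 15).astype(np.float32)
    b = np.random.randn(32, 15, 128).astype(np.float32)

    with tf.device('/gpu:0'):
        out = tf.matmul(tf.constant(a), tf.constant(b))

    with tf.Session() as sess:
        gpu_result = sess.run(out)

    # A healthy setup agrees with NumPy to float32 precision; the bug
    # described above shows up as huge (>1e10) or NaN entries instead.
    cpu_result = np.matmul(a, b)
    print('max abs diff :', np.abs(gpu_result - cpu_result).max())
    print('max abs value:', np.abs(gpu_result).max())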

For now, I would just advise avoiding CUDA 10 with an RTX 2080Ti.


HuguesTHOMAS commented on July 17, 2024

Hi @XuyangBai,

The first error is quite strange, and I have never encountered such behavior on my datasets. It is very unlikely that the loss really became zero if you use correct augmentation strategies. It seems more like a bug, but it will be difficult to help you without reproducing your experiments on your dataset.

The second one could be explained by your dataset. The GPU memory that you see in nvidia-smi is the memory taken by the tensors at runtime, so it depends on the input size. If you use the same network parameters with a different dataset, with denser point clouds for example, this memory will be larger. It is strange that you get different GPU memory values for the same dataset, but this might be explained by the nature of your dataset and your implementation of the dataset class.

What does your data look like? Real or artificial point clouds? Indoor or outdoor scenes? Objects?


XuyangBai commented on July 17, 2024

Hi @HuguesTHOMAS ,

I also think the first error may be caused by bugs in my implementation; I just wanted to check whether it was due to the TensorFlow version. BTW, what is the situation when using TF 1.13 and CUDA 10?

For the second one, I mean that for some experiments the GPU memory is always 4400 MB, while for others it is always 7000+ MB. It is really strange. But I checked the training.txt in the results folder and found that the memory shown in that file is similar.


XuyangBai commented on July 17, 2024

Oh sorry, I think I found the reason for the second problem. I forgot about the dropout. It seems that when I use dropout = 0.5 the GPU memory is around 4400 MB, while with dropout = 1 it is 7000 MB.

Sorry for the bother.


HuguesTHOMAS commented on July 17, 2024

Hi @XuyangBai,

If you look at the code, the dropout variable is extremely important in the implementation, because the network uses it to know whether it is in training or test mode.

If you use a dropout < 0.99, the network is in training configuration, and if you use dropout = 1, the network is in test configuration. This is a trick that I used to avoid creating a 'training/test' boolean placeholder, and that I never corrected.
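
As a rough sketch of what this trick looks like in TF 1.x (illustrative names, not the exact code from the repository):

    import tensorflow as tf  # TF 1.x

    # The training/test switch is derived from the dropout probability
    # instead of a dedicated 'training/test' boolean placeholder.
    dropout_prob = tf.placeholder(tf.float32, shape=(), name='dropout_prob')

    # dropout < 0.99 -> training configuration, dropout = 1 -> test configuration
    is_training = tf.less(dropout_prob, 0.99)

    x = tf.placeholder(tf.float32, shape=(None, 128))
    x_out = tf.cond(is_training,
                    lambda: tf.nn.dropout(x, keep_prob=dropout_prob),
                    lambda: x)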

It will be corrected within the next month (I currently don't have any time to spend on the code). Until then, you should not use dropout = 1 when training, as the variables will not be updated by gradient backpropagation in that case. If you have dropout blocks and don't want to use them, just remove them or use dropout = 0.98, which makes them insignificant.

Best,
Hugues


XuyangBai commented on July 17, 2024

Thanks a lot for your reply :)


XuyangBai commented on July 17, 2024

Hi @HuguesTHOMAS

I am reopening this issue because I went through the code to check for the bug, but I am still facing the first problem: the training breaks down (the loss becomes zero). Everything goes well with CUDA 9 and TF 1.12.0, but when I run the code on my RTX 2080Ti with CUDA 10 and TF 1.12.0 (built from source), the problem appears. I printed the input values and the variables at the moment the model broke, and found that the input values are fine but the variables are all NaN.

I have tried adding 1e-12 to operations like tf.sqrt to avoid infinite numbers, but the problem is still there.
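
For reference, this is the kind of guard I mean (sq_dist is a hypothetical tensor name):

    import tensorflow as tf  # TF 1.x

    # Hypothetical squared distances; clamping before the sqrt keeps its
    # gradient 1/(2*sqrt(x)) finite when a distance is exactly zero.
    sq_dist = tf.placeholder(tf.float32, shape=(None,))
    safe_dist = tf.sqrt(tf.maximum(sq_dist, 1e-12))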

Another thing worth mentioning is that sometimes the training and validation both break down, while sometimes the training seems correct but the validation breaks down, like the curves below.
[Screenshot: training and validation curves, 2019-08-03]

Have you ever encountered this kind of problem, where the variables all become NaN? Thank you so much.


nejcd commented on July 17, 2024

Hi @XuyangBai, I have noticed similar behaviour to what you have described. I have not been able to debug it, as it occurs randomly.
Since you have closed the issue, have you found a solution?


XuyangBai commented on July 17, 2024

Hi @nejcd, I didn't find a solution, so I just changed my environment back to CUDA 9 and TF 1.12.0, and everything works well. There might be some bug in CUDA 10.


miiller commented on July 17, 2024

I ported the model to Keras layers and tried training it on a Tesla V100 GPU (CUDA 10.2, TF 2.0), and also got NaN values after some epochs. After changing the KP influence from gaussian to linear, everything worked fine, so I would assume the issue lies in the gradient computation for the Gaussian influence, although increasing the epsilon from 1e-9 to 1e-6 did not resolve the problem. The linear influence works just fine and, in my case, leads to good results with higher computational efficiency.
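
For reference, a sketch of the two influence functions being compared, following the formulas in the KPConv paper (names and signatures are illustrative):

    import tensorflow as tf  # the port described above used TF 2.0 + Keras

    def gaussian_influence(sq_dist, sigma):
        # exp(-d^2 / (2 * sigma^2)), computed from squared distances
        return tf.exp(-sq_dist / (2.0 * sigma ** 2))

    def linear_influence(sq_dist, sigma, eps=1e-9):
        # max(0, 1 - d / sigma): piecewise linear, so the gradient stays
        # bounded; eps is the epsilon mentioned above, keeping the sqrt
        # gradient finite at d = 0.
        return tf.maximum(1.0 - tf.sqrt(sq_dist + eps) / sigma, 0.0)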


longmalongma commented on July 17, 2024

> after a tf.matmul operation:

Thanks for your great work. I have been facing this problem for a long time; I would like to know which version of Python you used.


HuguesTHOMAS commented on July 17, 2024

If I remember correctly, the Python version was 3.5 or 3.6. If you are willing to switch libraries, a newer implementation has been released in PyTorch.


Arjun-NA commented on July 17, 2024

Just to mention my experience: I got NaN when using TensorFlow 1.15, CUDA 10, and cuDNN 7.6.5, but only with some specific configurations, on an NVIDIA Tesla P100 GPU.


densechen commented on July 17, 2024

I use TF 1.15 and also get NaN. You can work around this by reducing the batch size to 2.


wuqianliang commented on July 17, 2024

Nice.

