
Comments (5)

HuguesTHOMAS avatar HuguesTHOMAS commented on August 16, 2024

Hi @LuciusPennyworth,

Thank you for your interest in my work. I will try to clarify each point:

  1. Exactly. Each row of neighbors_indices points to the neighbors of one convolution location. Since each location has its own number of neighbors, the rows are padded with indices that point to the shadow_point.

  2. Yes, if you increase first_subsampling_dl, the dataset class will prepare point clouds sampled with a larger grid, meaning fewer points overall. This is true for model datasets; for scene segmentation you also have to take in_radius into account, which decides the size of the input sphere.

  3. During validation, we only test a few examples from the test set. C1 is the confusion matrix of these examples (so not of the entire test set). We also keep the probabilities from the validations of previous epochs to get a voting score on the whole test set, which is C3.
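The padding scheme described in point 1 can be sketched in a few lines of numpy. This is a hypothetical helper mirroring the described behavior, not the actual KPConv code; the function name and layout are assumptions:

```python
import numpy as np

def pad_neighbors(neighb_lists, n_points):
    """Stack variable-length neighbor lists into a rectangular matrix.

    Rows shorter than the longest list are padded with the index
    n_points, which points to a "shadow" point appended after the
    real points (whose features are ignored during convolution).
    """
    max_n = max(len(neighb) for neighb in neighb_lists)
    out = np.full((len(neighb_lists), max_n), n_points, dtype=np.int64)
    for i, neighb in enumerate(neighb_lists):
        out[i, :len(neighb)] = neighb
    return out

# Example: 3 convolution locations with 2, 3, and 1 neighbors among 4 points.
# Missing entries are filled with the shadow index 4.
inds = pad_neighbors([[0, 2], [1, 2, 3], [3]], n_points=4)
```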

from kpconv.

LuciusPennyworth avatar LuciusPennyworth commented on August 16, 2024

Hi @HuguesTHOMAS! Thanks for your timely reply; it was very useful to me.
Since our last discussion, I have modified your code to implement some of my ideas. In doing so, I encountered some bugs and have a few more questions.

  1. After modifying your code and training, the model's self.regularization_loss (from KPCNN_model.py) keeps going up and finally becomes NaN. Have you seen this kind of bug before?
  2. I notice that, in the decoder part of KP-FCNN, the paired feature maps in the same layer have different feature dimensions. Could you explain why they have different shapes, and what the first four layers pass through the skip connections?
    Thanks a lot, and I wish you a wonderful weekend!

from kpconv.

HuguesTHOMAS avatar HuguesTHOMAS commented on August 16, 2024

Hi @LuciusPennyworth,

  1. This is a strange and definitely erroneous behaviour. The regularization loss should be going down. It means that some weights in your network are exploding; you should try to track down which ones to find the bug. What did you modify in the code?

  2. In the decoder part, a layer is built from three blocks. First, an upsampling block projects the features of the current layer onto the points of the upper layer, using nearest-neighbor interpolation. Second, a concatenation block concatenates these interpolated features (let's call their dimension 2D) with the features from the corresponding layer in the encoder part (whose dimension is thus D), giving a feature map of dimension 3D. Third, a 1x1 convolution transforms these features so that their dimension becomes D. We thus end up with a feature map of dimension D, half of the original feature map (2D). The decoder continues from this point, and the feature dimension is halved at every layer (just as it is doubled at each layer of the encoder).
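The three decoder blocks above can be sketched in plain numpy. This is an illustrative sketch of the dimension flow only (the function and variable names are hypothetical; the real code also handles batching and learned weights):

```python
import numpy as np

def decoder_layer(x_low, upsample_inds, skip_feats, w):
    """One decoder layer: nearest upsampling, skip concatenation,
    then a 1x1 convolution (a plain matmul on the feature axis)."""
    # 1) Nearest upsampling: each fine point copies the features
    #    of its closest coarse point.
    upsampled = x_low[upsample_inds]               # [N_up, 2D]
    # 2) Concatenate with the encoder skip features -> [N_up, 3D]
    cat = np.concatenate([upsampled, skip_feats], axis=1)
    # 3) 1x1 conv reduces the dimension back to D
    return cat @ w                                 # [N_up, D]

D = 4
x_low = np.random.rand(5, 2 * D)          # coarse layer features, dim 2D
inds = np.random.randint(0, 5, size=10)   # nearest coarse point per fine point
skip = np.random.rand(10, D)              # encoder skip features, dim D
w = np.random.rand(3 * D, D)              # 1x1 conv weights: 3D -> D
out = decoder_layer(x_low, inds, skip, w)
```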

Best,
Hugues

from kpconv.

LuciusPennyworth avatar LuciusPennyworth commented on August 16, 2024

Hi @HuguesTHOMAS ,

  1. I tried to extend your deformation idea by applying deformations to the input point cloud. I have solved the NaN problem; thanks for your help.

(architecture figure)
  2. According to your description, the feature dimension of the "D1-1" layer is 1024+512=1536. After a 1x1 convolution, the feature dimension of the "D1-2" layer becomes 512, which equals that of the E4 layer. In addition, the "E4" layer has the same number of points as "D1". Is my understanding correct?
  3. For visualize_deformations.py, I used the ModelNet40 dataset for visualization but got this error message:

qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 8 8 0)
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 8 0)
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 1 0)
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (1 1 1 0)
qt.glx: qglx_findConfig: Failed to finding matching FBConfig (8 8 8 0)
(the five lines above repeat several times)
ERROR: In /work/standalone-x64-build/VTK-source/Rendering/OpenGL2/vtkOpenGLRenderWindow.cxx, line 797
vtkXOpenGLRenderWindow (0x559c336fd4e0):
 GL version 2.1 with the gpu_shader4 extension is not supported by your graphics driver but is required for the new OpenGL rendering backend. 
Please update your OpenGL driver. 
If you are using Mesa please make sure you have version 10.6.5 or later and make sure your driver in Mesa supports OpenGL 3.2.

I am using Ubuntu 16.04 with PyQt5 5.13.1, mayavi 4.7.1, and vtk 8.1.2. Is this bug caused by the software versions?

Thanks in advance!

from kpconv.

HuguesTHOMAS avatar HuguesTHOMAS commented on August 16, 2024

Hi @LuciusPennyworth,

  1. Your understanding is correct, and this is what would happen if we used normal convolution blocks. However, we use resnet blocks as shown below. The output features of these blocks are twice as large as the normal output features (2D instead of D, as shown in the figure). As a result, the feature dimensions in the current code are:
    E4: 1024
    D1-1: 2048+1024=3072
    D1-2: 512
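The dimension bookkeeping for these numbers can be written down explicitly. This is a sketch under the assumption (consistent with the figures quoted above) that each decoder's 1x1 conv outputs half the skip dimension; the helper is hypothetical, not part of the KPConv code:

```python
def decoder_dims(bottleneck, skip_dims):
    """Return (concat_dim, output_dim) for each decoder layer.

    The current features are concatenated with the encoder skip
    features, then a 1x1 conv reduces them to half the skip dimension.
    """
    dims = []
    current = bottleneck
    for skip in skip_dims:
        concat = current + skip    # e.g. 2048 + 1024 = 3072 for D1-1
        current = skip // 2        # 1x1 conv output, e.g. 512 for D1-2
        dims.append((concat, current))
    return dims

# With the numbers from the comment above:
dims = decoder_dims(2048, [1024, 512, 256, 128])
```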

(resnet block figure)

  2. I have no idea what is causing this bug. I have nearly the same software versions (vtk 8.1.2 / PyQt5 5.12.1 / mayavi 4.6.2) and it works fine on ModelNet40.

from kpconv.
