juandugit / dh3d

DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DOF Relocalization

Home Page: https://vision.in.tum.de/research/vslam/dh3d

License: Apache License 2.0

Languages: Python 42.81%, MATLAB 6.10%, Shell 0.75%, C++ 39.53%, Cuda 7.57%, CMake 3.24%
Topics: autonomous-driving, deep-learning, feature-learning, point-cloud, relocalization

dh3d's People

Contributors

juandugit, rui2016


dh3d's Issues

Evaluation results are different from the paper

Hi JuanDuGit,

I ran the test code to compute the recall, but the recall performance is different from the paper.

According to the DH3D paper, recall @1 and @1% are 74.16 and 85.30, respectively.

However, when I ran globaldesc_extract.py with your pretrained model in model/global/xxx,
I got the following results:

Avg_recall :
1 : 0.7532
2 : 0.8284
3 : 0.8624

Avg_one_percent_retrieved :
0.8849

I would very much appreciate it if you could give me an explanation of these results. Thank you.

Best,
Ganghee
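
For context, a minimal sketch of how average recall@N and the recall at the top 1% retrieved are commonly computed for global-descriptor retrieval. The array shapes, the gt_matches structure, and the plain L2 nearest-neighbour search are illustrative assumptions, not the repository's actual evaluation script.

    import numpy as np

    def average_recall(query_desc, db_desc, gt_matches, max_n=25):
        # query_desc: (Q, D) global descriptors of the query submaps
        # db_desc:    (B, D) global descriptors of the database submaps
        # gt_matches: list of sets; gt_matches[i] holds the database indices
        #             that count as true positives for query i
        one_percent = max(int(round(len(db_desc) / 100.0)), 1)
        recall_at_n = np.zeros(max_n)
        one_percent_hits = 0
        for i, q in enumerate(query_desc):
            dists = np.linalg.norm(db_desc - q, axis=1)   # L2 in descriptor space
            ranking = np.argsort(dists)                   # nearest database entry first
            for n in range(max_n):
                if gt_matches[i] & set(ranking[:n + 1]):  # true positive within top n+1
                    recall_at_n[n:] += 1                  # recall@k is cumulative in k
                    break
            if gt_matches[i] & set(ranking[:one_percent]):
                one_percent_hits += 1
        return recall_at_n / len(query_desc), one_percent_hits / float(len(query_desc))

Small differences in the query/database split or in the positive-match distance threshold will shift these numbers, so pinning down the exact evaluation protocol is the first thing to check when comparing against the paper.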

Question about model runtime and relocalization process

I am curious about the runtime and hardware that you used. You state in the paper that one forward pass for a point cloud with 8192 points took 80 ms, but I did not see any hardware specifications or any information about the training time and parameters.
Another question concerns the point-cloud-based relocalization process itself. I am familiar with implementing image-based relocalization and would be very glad to hear how you implement the point-cloud-based relocalization.

Much appreciated! Thanks!
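
For timing a single forward pass yourself, a minimal TF1-style sketch of the usual warm-up plus repeated-run pattern; the placeholder shape and the stand-in dense layer are assumptions, since in practice the released DH3D checkpoint would be restored instead.

    import time
    import numpy as np
    import tensorflow as tf

    # Stand-in graph: a single dense layer instead of the real DH3D network,
    # only to show the warm-up + repeated-run timing pattern.
    input_cloud = tf.placeholder(tf.float32, shape=(1, 8192, 3))
    output = tf.layers.dense(tf.reshape(input_cloud, (1, -1)), 128)

    cloud = np.random.rand(1, 8192, 3).astype(np.float32)

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        sess.run(output, feed_dict={input_cloud: cloud})   # warm-up: kernel init, autotuning
        t0 = time.time()
        for _ in range(100):
            sess.run(output, feed_dict={input_cloud: cloud})
        print('mean forward pass: %.1f ms' % ((time.time() - t0) * 1000.0 / 100))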

Point Cloud Retrieval for DSO

Hello,

Did you also validate the point cloud retrieval for DSO? As far as I can see, the paper only validates point cloud registration for DSO.

Regards

local feature detector

According to the paper, there are two matchability score maps (S and S') in the local feature detector.


But I found that score map S' is not used in the detector loss function.

And in the source code, the score is calculated but never used.

Is the score S' unnecessary?
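
As a toy illustration of the situation described in this issue (not the repository's detector code; the feature shapes, layers, and weighting are invented for the sketch), two score maps are produced but only one enters the loss:

    import tensorflow as tf

    feat_a = tf.random_normal((1, 1024, 128))   # local features of cloud A (hypothetical shape)
    feat_b = tf.random_normal((1, 1024, 128))   # local features of cloud B

    score_s = tf.layers.dense(feat_a, 1, activation=tf.nn.sigmoid)        # S : enters the loss
    score_s_prime = tf.layers.dense(feat_b, 1, activation=tf.nn.sigmoid)  # S': computed, never used

    # Score-weighted residual between (assumed already corresponding) features:
    residual = tf.reduce_sum(tf.square(feat_a - feat_b), axis=-1, keepdims=True)
    detector_loss = tf.reduce_mean(score_s * residual)                    # S' does not appear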

Error running code

0%| |0/2073[00:00<?,?it/s] 2021-05-20 10:48:44.249488: E tensorflow/stream_executor/cuda/cuda_blas.cc:647] failed to run cuBLAS routine cublasSgemmBatched: CUBLAS_STATUS_EXECUTION_FAILED
2021-05-20 10:48:44.249528: E tensorflow/stream_executor/cuda/cuda_blas.cc:2505] Internal: failed BLAS call, see log for details
2021-05-20 10:48:44.887998: I tensorflow/stream_executor/stream.cc:4817] stream 0x55c2525213f0 did not memzero GPU location; source: 0x7f1da27fab50
2021-05-20 10:48:44.888031: I tensorflow/stream_executor/stream.cc:4817] stream 0x55c2525213f0 did not memzero GPU location; source: 0x7f1da27fab70
2021-05-20 10:48:44.888071: E tensorflow/stream_executor/cuda/cuda_dnn.cc:2833] failed to enqueue forward batch normalization on stream: CUDNN_STATUS_EXECUTION_FAILED

InternalError (see above for traceback): Blas xGEMMBatched launch failed : a.shape=[10,512,3], b.shape=[10,3,3], m=512, n=3, k=3, batch_size=10
[[Node: MatMul = BatchMatMul[T=DT_FLOAT, adj_x=false, adj_y=false, _device="/job:localhost/replica:0/task:0/device:GPU:0"](split, _arg_R_0_0/_17)]]

I used CUDA 9.0, TensorFlow 1.9, and Ubuntu 18.04. Can you take a look at it for me?
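
One workaround often tried for this class of cuBLAS/cuDNN execution failures is to stop TensorFlow from claiming all GPU memory up front; whether it fixes this particular crash is only an assumption.

    import tensorflow as tf

    # Allocate GPU memory on demand instead of claiming everything at session start,
    # which sometimes avoids CUBLAS/CUDNN_STATUS_EXECUTION_FAILED on busy GPUs.
    config = tf.ConfigProto()
    config.gpu_options.allow_growth = True

    with tf.Session(config=config) as sess:
        pass  # restore the DH3D checkpoint and run inference here as usual

It is also worth verifying that the installed TensorFlow 1.9 build actually matches the CUDA 9.0 / cuDNN versions on the machine, since mismatches produce the same kind of errors.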

Why are there two separate models for local/global descriptor?

Thanks for your attention.
As stated in the title, I found two model files included in the source directory: one is labelled local and the other global. I know the network is trained in three stages, but if I want to compute local and global descriptors (and the salient points) in a single forward pass, which one should I use? I.e., are both files complete models, or is only the global one complete?

Input & output node names

Hi, thank you for the great work!

I intend to modify the code and run it using the tf.Session equivalent in Tensorflow C++ API.

From the function compute_local in model.py, I see that the pointcloud input for the input dict is points, which is derived from

points = tf.concat(pcdset, 0, name='pointclouds')

in the function build_graph.

However, when I import the meta graph and list the nodes, I am unable to find any node named pointclouds.

May I know how I can access it and similarly the output node names when I run my session? Apologies if my description sounds confusing as I am quite new to this.
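
One way to locate the real node names is to import the meta graph and print its graph definition; a minimal sketch below, where the checkpoint path is only a placeholder for the model you downloaded. Note that name scopes can prefix the node, so the concat may appear under a longer name than plain pointclouds.

    import tensorflow as tf

    # Load the graph structure from the released checkpoint (path is a placeholder).
    saver = tf.train.import_meta_graph('model/global/model.meta')
    graph = tf.get_default_graph()

    # Print candidate input nodes; adjust the filter once the real names are visible.
    for node in graph.as_graph_def().node:
        if 'point' in node.name.lower():
            print(node.name)

    # Once the name is known, fetch the tensor with the ':0' suffix, e.g.
    # pc_tensor = graph.get_tensor_by_name('pointclouds:0')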

Change the number of input points

Thanks for the great work.

I have a question regarding the network structure: is it possible to change the number of input points without re-training the network from scratch? Thanks.
