Comments (3)

zhengqili commented on June 28, 2024

Hi,

In terms of using a depth map to generate novel views, I used the idea from https://arxiv.org/pdf/2004.01294.pdf: warp the contents of static regions from neighboring frames, and the contents of the dynamic region from the reference view, into the novel view through point cloud splatting, using the code from https://github.com/sniklaus/3d-ken-burns.
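A minimal sketch of that splatting step might look like the following (a hypothetical helper, assuming known pinhole intrinsics `K` and a relative pose `T_src_to_tgt`; the actual 3d-ken-burns code performs soft, differentiable splatting, so this hard z-buffer version only illustrates the geometry):

```python
import numpy as np

def splat_to_novel_view(image, depth, K, T_src_to_tgt):
    """image: (H, W, 3) float, depth: (H, W), K: (3, 3), T_src_to_tgt: (4, 4)."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(np.float64)

    # Unproject every source pixel to a 3D point in the source camera frame.
    pts = (np.linalg.inv(K) @ pix.T).T * depth.reshape(-1, 1)

    # Transform the point cloud into the target camera frame.
    pts_h = np.hstack([pts, np.ones((pts.shape[0], 1))])
    pts_tgt = (T_src_to_tgt @ pts_h.T).T[:, :3]

    # Project into the target image, keeping points in front of the camera.
    proj = (K @ pts_tgt.T).T
    front = proj[:, 2] > 1e-6
    z = proj[front, 2]
    uv = np.round(proj[front, :2] / z[:, None]).astype(int)
    colors = image.reshape(-1, 3)[front]

    # Z-buffered splat: nearer points overwrite farther ones.
    out = np.zeros_like(image, dtype=np.float32)
    zbuf = np.full((H, W), np.inf)
    inb = (uv[:, 0] >= 0) & (uv[:, 0] < W) & (uv[:, 1] >= 0) & (uv[:, 1] < H)
    for (x, y), zi, c in zip(uv[inb], z[inb], colors[inb]):
        if zi < zbuf[y, x]:
            zbuf[y, x] = zi
            out[y, x] = c
    return out  # pixels never hit stay black: these are the disocclusions
```

In the setting described above, static regions would be splatted this way from neighboring frames and the dynamic region from the reference frame, then composited into the novel view.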

Another way to achieve this, which usually gives me better rendering results, is to create a textured mesh from the point cloud and the input pixels, then rasterize it to render novel views. I used the implementation from https://github.com/vt-vl-lab/3d-photo-inpainting as the baseline for 3D photos. This approach usually produces much better results (especially for disocclusions) when rendering from RGBD images.
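A rough sketch of the mesh-construction idea, under the same hypothetical inputs: connect neighboring pixels into quads, split each quad into two triangles, and drop triangles that span a large relative depth jump so the mesh tears at occlusion boundaries instead of smearing foreground and background together. (The real 3d-photo-inpainting pipeline is far more elaborate, using layered depth images plus inpainting.)

```python
import numpy as np

def depth_to_mesh(depth, K, edge_thresh=0.05):
    """Returns vertices (H*W, 3) in camera space and faces (M, 3) as vertex indices."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)], -1).reshape(-1, 3).T
    verts = (rays * depth.reshape(-1)).T  # one vertex per pixel

    idx = np.arange(H * W).reshape(H, W)
    faces = []
    for y in range(H - 1):
        for x in range(W - 1):
            quad = depth[y:y + 2, x:x + 2]
            # Skip quads that cross a depth discontinuity (relative threshold).
            if quad.min() <= 0 or np.ptp(quad) / quad.min() > edge_thresh:
                continue
            a, b, c, d = idx[y, x], idx[y, x + 1], idx[y + 1, x], idx[y + 1, x + 1]
            faces.append([a, c, b])  # split the quad into two triangles
            faces.append([b, c, d])
    return verts, np.asarray(faces)
```

Each vertex takes its texture from the corresponding source pixel, and the mesh can then be rendered from a novel camera with any standard rasterizer; disocclusions show up as holes in the rendering, which is where the inpainting step comes in.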


yaseryacoob commented on June 28, 2024

Thanks for the details; I will look into them. Let me clarify:

  1. I am trying to test a hypothesis within your framework: how the depth estimate affects the quality of the final rendering. When you compared to CVD, I assume you swapped their depth maps in for yours (though you also seem not to do inpainting, if I interpret the videos correctly).
  2. Maybe your comparison to CVD was done differently, but I can't tell from the project page. If you are willing to share the code that took the CVD depth and generated the video, it would save me guessing how the comparison was done.


zhengqili commented on June 28, 2024

Hi,

For the comparison to CVD, I am not using CVD depth inside our framework; instead, I use CVD depth to perform traditional depth-based image-based rendering, following the method described in Sec. 3.2 of https://arxiv.org/pdf/2004.01294.pdf. (Unfortunately, that implementation lives in a private Adobe Research repo, and I am no longer at Adobe.)

I don't think CVD would work for a general dynamic scene: it enforces epipolar consistency over the entire scene without accounting for object motion or masking out moving objects, and in my previous experiments this produced incorrect depth for the moving objects. Thus, I think single-view depth is still the best initialization strategy in our case.

